[
{
"msg_contents": "I've been buried in the backend parser/planner/executor now for the last \n2 weeks or so, and I now have a patch for a working implementation of \nSRFs as RTEs (i.e. \"SELECT tbl.* FROM myfunc() AS tbl\"). I think I'm at \na good point to get review and comments. Not everything yet has been \nimplemented per my proposal (see: \nhttp://fts.postgresql.org/db/mw/msg.html?mid=1077099 ) but most of the \nsupport is in place.\n\nHow it currently works:\n-----------------------\n1. At this point, FROM clause SRFs are used as a row source in a manner \nsimilar to the current API, i.e. one row at a time is produced without \nmaterializing.\n\n2. The SRF may be either marked as returning a set or not. A function \nnot marked as returning a set simply produces one row.\n\n3. The SRF may either return a base data type (e.g. TEXT) or a composite \ndata type (e.g. pg_class). If the function returns a base data type, the \nsingle result column is named for the function. If the function returns \na composite type, the result columns get the same names as the \nindividual attributes of the type.\n\n4. The SRF *must* be aliased in the FROM clause. This is similar to the \nrequirement for a subselect used in the FROM clause.\n\n5. 
example:\ntest=# CREATE TABLE foo (fooid int, foosubid int, fooname text, primary \nkey(fooid,foosubid));\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n'foo_pkey' for table 'foo'\nCREATE\ntest=# INSERT INTO foo VALUES(1,1,'Joe');\nINSERT 16693 1\ntest=# INSERT INTO foo VALUES(1,2,'Ed');\nINSERT 16694 1\ntest=# INSERT INTO foo VALUES(2,1,'Mary');\nINSERT 16695 1\ntest=# CREATE FUNCTION getfoo(int) RETURNS setof foo AS 'SELECT * FROM \nfoo WHERE fooid = $1;' LANGUAGE SQL;\nCREATE\ntest=# SELECT * FROM getfoo(1) AS t1;\n fooid | foosubid | fooname\n-------+----------+---------\n 1 | 1 | Joe\n 1 | 2 | Ed\n(2 rows)\n\ntest=# SELECT t1.fooname FROM getfoo(1) AS t1 WHERE t1.foosubid = 1;\n fooname\n---------\n Joe\n(1 row)\n\ntest=# select * from dblink_get_pkey('foo') as t1;\n dblink_get_pkey\n-----------------\n fooid\n foosubid\n(2 rows)\n\nWhat still needs to be done:\n----------------------------\n1. Add a new table_ref node type - DONE\n2. Add support for three modes of operation to RangePortal:\n a. Repeated calls -- DONE\n b. Materialized results -- partially complete\n c. Return query -- I'm starting to wonder how/if this is really\n different than a.) above\n3. Add support to allow the RangePortal to materialize modes a and c,\n if needed for a re-read -- partially complete.\n4. Add a WITH keyword to CREATE FUNCTION, allowing SRF mode to be\n specified -- not yet started.\n\n\nRequest for help:\n-----------------\nSo far I've tested with SQL and C functions. I will also do some testing \nwith PLpgSQL functions. I need testing and feedback from users of the \nother function PLs.\n\nReview, comments, feedback, etc. are appreciated.\n\nThanks,\n\nJoe",
"msg_date": "Mon, 06 May 2002 09:51:15 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Set Returning Functions (SRF) - request for patch review and comment"
},
{
"msg_contents": "Feedback: you're a legend!\n\nI'll try to patch my CVS and test it at some point...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Joe Conway\n> Sent: Tuesday, 7 May 2002 12:51 AM\n> To: pgsql-hackers\n> Subject: [HACKERS] Set Returning Functions (SRF) - request for patch\n> review and comment\n> \n> \n> I've been buried in the backend parser/planner/executor now for the last \n> 2 weeks or so, and I now have a patch for a working implementation of \n> SRFs as RTEs (i.e. \"SELECT tbl.* FROM myfunc() AS tbl\"). I think I'm at \n> a good point to get review and comments. Not everything yet has been \n> implemented per my proposal (see: \n> http://fts.postgresql.org/db/mw/msg.html?mid=1077099 ) but most of the \n> support is in place.\n> \n> How it currently works:\n> -----------------------\n> 1. At this point, FROM clause SRFs are used as a row source in a manner \n> similar to the current API, i.e. one row at a time is produced without \n> materializing.\n> \n> 2. The SRF may be either marked as returning a set or not. A function \n> not marked as returning a set simply produces one row.\n> \n> 3. The SRF may either return a base data type (e.g. TEXT) or a composite \n> data type (e.g. pg_class). If the function returns a base data type, the \n> single result column is named for the function. If the function returns \n> a composite type, the result columns get the same names as the \n> individual attributes of the type.\n> \n> 4. The SRF *must* be aliased in the FROM clause. This is similar to the \n> requirement for a subselect used in the FROM clause.\n> \n> 5. 
example:\n> test=# CREATE TABLE foo (fooid int, foosubid int, fooname text, primary \n> key(fooid,foosubid));\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n> 'foo_pkey' for table 'foo'\n> CREATE\n> test=# INSERT INTO foo VALUES(1,1,'Joe');\n> INSERT 16693 1\n> test=# INSERT INTO foo VALUES(1,2,'Ed');\n> INSERT 16694 1\n> test=# INSERT INTO foo VALUES(2,1,'Mary');\n> INSERT 16695 1\n> test=# CREATE FUNCTION getfoo(int) RETURNS setof foo AS 'SELECT * FROM \n> foo WHERE fooid = $1;' LANGUAGE SQL;\n> CREATE\n> test=# SELECT * FROM getfoo(1) AS t1;\n> fooid | foosubid | fooname\n> -------+----------+---------\n> 1 | 1 | Joe\n> 1 | 2 | Ed\n> (2 rows)\n> \n> test=# SELECT t1.fooname FROM getfoo(1) AS t1 WHERE t1.foosubid = 1;\n> fooname\n> ---------\n> Joe\n> (1 row)\n> \n> test=# select * from dblink_get_pkey('foo') as t1;\n> dblink_get_pkey\n> -----------------\n> fooid\n> foosubid\n> (2 rows)\n> \n> What still needs to be done:\n> ----------------------------\n> 1. Add a new table_ref node type - DONE\n> 2. Add support for three modes of operation to RangePortal:\n> a. Repeated calls -- DONE\n> b. Materialized results -- partially complete\n> c. Return query -- I'm starting to wonder how/if this is really\n> different than a.) above\n> 3. Add support to allow the RangePortal to materialize modes a and c,\n> if needed for a re-read -- partially complete.\n> 4. Add a WITH keyword to CREATE FUNCTION, allowing SRF mode to be\n> specified -- not yet started.\n> \n> \n> Request for help:\n> -----------------\n> So far I've tested with SQL and C functions. I will also do some testing \n> with PLpgSQL functions. I need testing and feedback from users of the \n> other function PLs.\n> \n> Review, comments, feedback, etc. are appreciated.\n> \n> Thanks,\n> \n> Joe\n> \n> \n> \n\n",
"msg_date": "Tue, 7 May 2002 09:55:02 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Set Returning Functions (SRF) - request for patch review and\n\tcomment"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I've been buried in the backend parser/planner/executor now for the last \n> 2 weeks or so, and I now have a patch for a working implementation of \n> SRFs as RTEs (i.e. \"SELECT tbl.* FROM myfunc() AS tbl\"). I think I'm at \n> a good point to get review and comments.\n\nA few random comments ---\n\n> 4. The SRF *must* be aliased in the FROM clause. This is similar to the \n> requirement for a subselect used in the FROM clause.\n\nThis seems unnecessary; couldn't we use the function name as the default\nalias name? The reason we require an alias for a subselect is that\nthere's no obvious automatic choice for a subselect; but there is for a\nfunction.\n\nYou may not want to hear this at this point ;-) but I'd be strongly\ninclined to s/portal/function/ throughout the patch. The implementation\ndoesn't seem to have anything to do with portals as defined by\nportalmem.c, so using the same name just sounds like a recipe for\nconfusion. (I think Alex started with that name because he intended\nto allow fetches from cursors --- but you're not doing that, and if\nsomeone were to add it later he'd probably want to use the name\nRangePortal for that.)\n\nThe patch's approach to checking function execute permissions seems\nwrong, because it only looks at the topmost node of the function\nexpression. Consider\n\tselect ... from myfunc(1, sin(x))\nProbably better to let init_fcache do the checking instead when the\nexpression is prepared for execution.\n\n*** src/backend/executor/execQual.c\t27 Apr 2002 03:45:03 -0000\t1.91\n--- src/backend/executor/execQual.c\t5 May 2002 21:36:55 -0000\n***************\n*** 44,49 ****\n--- 44,52 ----\n #include \"utils/fcache.h\"\n \n \n+ Datum ExecEvalFunc(Expr *funcClause, ExprContext *econtext,\n+ \t\t\t bool *isNull, ExprDoneCond *isDone);\n+ \n\n(and a corresponding \"extern\" in some other .c file) Naughty naughty...\nthis should be in a .h file. 
But actually you should probably just be\ncalling ExecEvalExpr anyway, rather than hard-wiring the assumption that\nthe top node of the expression tree is a Func. Most of the other places\nthat assume that could be fixed easily by using functions like\nexprType() in place of direct field access.\n\nI've been toying with eliminating Iter nodes, which don't seem to do\nanything especially worthwhile --- it'd make a lot more sense to add\na \"returnsSet\" boolean in Func nodes. Dunno if that simplifies life\nfor you. If you take the above advice you may find you don't really\ncare anymore whether there's an Iter node in the tree.\n\nExecPortalReScan does not look like it works yet (in fact, it looks like\nit will dump core). This is important. It also brings up the question\nof how you are handling parameters passed into the function. I think\nthere's a lot more to that than meets the eye.\n\nI have been thinking that TupleStore ought to be extended to allow\nfetching of existing entries without foreclosing the option of storing\nmore tuples. This would allow you to operate \"on the fly\" without\nnecessarily having to fetch the entire function output on first call.\nYou fetch only as far as you've been requested to provide answers.\n(Which would be a good thing; consider cases with LIMIT for example.)\n\n\nGood to see you making progress on this; it's been a wishlist item\nfor a long time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 May 2002 12:22:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Set Returning Functions (SRF) - request for patch review and\n\tcomment"
},
{
"msg_contents": "Tom Lane wrote:\n>>4. The SRF *must* be aliased in the FROM clause. This is similar to the \n>>requirement for a subselect used in the FROM clause.\n> \n> This seems unnecessary; couldn't we use the function name as the default\n> alias name? The reason we require an alias for a subselect is that\n> there's no obvious automatic choice for a subselect; but there is for a\n> function.\n\nYeah, I was on the fence about this. The only problem I could see is \nwhen the function returns a base type, what do I use for the column \nalias? Is it OK to use the same alias for the relation and column?\n\n\n> \n> You may not want to hear this at this point ;-) but I'd be strongly\n> inclined to s/portal/function/ throughout the patch. The implementation\n> doesn't seem to have anything to do with portals as defined by\n> portalmem.c, so using the same name just sounds like a recipe for\n\nI was already thinking the same thing. It will be a real PITA, but I do \nbelieve it is the right thing to do.\n\n\n> The patch's approach to checking function execute permissions seems\n> wrong, because it only looks at the topmost node of the function\n> expression. Consider\n> \tselect ... from myfunc(1, sin(x))\n> Probably better to let init_fcache do the checking instead when the\n> expression is prepared for execution.\n\nOK.\n\n\n> + Datum ExecEvalFunc(Expr *funcClause, ExprContext *econtext,\n> + \t\t\t bool *isNull, ExprDoneCond *isDone);\n> + \n> \n> (and a corresponding \"extern\" in some other .c file) Naughty naughty...\n> this should be in a .h file. But actually you should probably just be\n> calling ExecEvalExpr anyway, rather than hard-wiring the assumption that\n> the top node of the expression tree is a Func. 
Most of the other places\n> that assume that could be fixed easily by using functions like\n> exprType() in place of direct field access.\n\nOK.\n\n\n> \n> I've been toying with eliminating Iter nodes, which don't seem to do\n> anything especially worthwhile --- it'd make a lot more sense to add\n> a \"returnsSet\" boolean in Func nodes. Dunno if that simplifies life\n> for you. If you take the above advice you may find you don't really\n> care anymore whether there's an Iter node in the tree.\n\nActually it gets in my way a bit, and I think I remember some \ndiscussions wrt removing it. But I wasn't sure how large the impact \nwould be on the current API if I messed with it, so I thought I'd leave \nit for now. Do you think it's worth it to address this now?\n\n\n> \n> ExecPortalReScan does not look like it works yet (in fact, it looks like\n> it will dump core). This is important. It also brings up the question\n> of how you are handling parameters passed into the function. I think\n> there's a lot more to that than meets the eye.\n\nYeah, I took a first shot at writing the rescan support, but have not \nyet begun to use/test it. I'd like to get the rest of the patch to an \nacceptable level first, then concentrate on get materialization and \nrescan working.\n\n\n> \n> I have been thinking that TupleStore ought to be extended to allow\n> fetching of existing entries without foreclosing the option of storing\n> more tuples. This would allow you to operate \"on the fly\" without\n> necessarily having to fetch the entire function output on first call.\n> You fetch only as far as you've been requested to provide answers.\n> (Which would be a good thing; consider cases with LIMIT for example.)\n> \n\nHmm, I'll have to look at this more closely. When I get to the \nmaterialization/rescan stuff, I'll see if I can address this idea.\n\nThanks for the review and comments!\n\nJoe\n\n",
"msg_date": "Tue, 07 May 2002 09:58:15 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: Set Returning Functions (SRF) - request for patch review"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> This seems unnecessary; couldn't we use the function name as the default\n>> alias name? The reason we require an alias for a subselect is that\n>> there's no obvious automatic choice for a subselect; but there is for a\n>> function.\n\n> Yeah, I was on the fence about this. The only problem I could see is \n> when the function returns a base type, what do I use for the column \n> alias? Is it OK to use the same alias for the relation and column?\n\nSure. foo.foo is valid for a column foo in a table foo, so I don't\nsee a problem with it for a function.\n\n>> You may not want to hear this at this point ;-) but I'd be strongly\n>> inclined to s/portal/function/ throughout the patch.\n\n> I was already thinking the same thing. It will be a real PITA, but I do \n> believe it is the right thing to do.\n\nYou could try doing the text substitution on the diff file and then\nre-applying the diff to fresh sources. Might get a couple of merge\nfailures, but should be a lot less painful than doing the edit directly\non the full sources.\n\n>> I've been toying with eliminating Iter nodes, which don't seem to do\n>> anything especially worthwhile --- it'd make a lot more sense to add\n>> a \"returnsSet\" boolean in Func nodes. Dunno if that simplifies life\n>> for you. If you take the above advice you may find you don't really\n>> care anymore whether there's an Iter node in the tree.\n\n> Actually it gets in my way a bit, and I think I remember some \n> discussions wrt removing it. But I wasn't sure how large the impact \n> would be on the current API if I messed with it, so I thought I'd leave \n> it for now. Do you think it's worth it to address this now?\n\nUp to you; probably should wait to see if Iter is still in your way\nafter you do the other thing. 
I think removing it and instead inserting\nreturnsSet booleans in Oper and Func nodes would be a pretty\nstraightforward exercise, but it'll mean touching even more stuff.\nMight be best to do that as a separate patch.\n\n>> ExecPortalReScan does not look like it works yet (in fact, it looks like\n>> it will dump core). This is important. It also brings up the question\n>> of how you are handling parameters passed into the function. I think\n>> there's a lot more to that than meets the eye.\n\n> Yeah, I took a first shot at writing the rescan support, but have not \n> yet begun to use/test it. I'd like to get the rest of the patch to an \n> acceptable level first, then concentrate on get materialization and \n> rescan working.\n\nFair enough. We should try to get the bulk of the patch applied soon\nso that you don't have code drift problems. The rescan issues should\nnot involve touching nearly as much code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 May 2002 13:40:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Set Returning Functions (SRF) - request for patch review and\n\tcomment"
},
{
"msg_contents": "Tom Lane wrote:\n> Sure. foo.foo is valid for a column foo in a table foo, so I don't\n> see a problem with it for a function.\n\nFixed\n\n> \n> You could try doing the text substitution on the diff file and then\n> re-applying the diff to fresh sources. Might get a couple of merge\n> failures, but should be a lot less painful than doing the edit directly\n> on the full sources.\n> \n\nGreat idea! Turned out to be a relatively painless 10 minute exercise.\n\n> Up to you; probably should wait to see if Iter is still in your way\n> after you do the other thing. I think removing it and instead inserting\n> returnsSet booleans in Oper and Func nodes would be a pretty\n> straightforward exercise, but it'll mean touching even more stuff.\n> Might be best to do that as a separate patch.\n\nI'd like to wait on this -- I'm already drinking from a firehose ;-)\n\n\n> \n> Fair enough. We should try to get the bulk of the patch applied soon\n> so that you don't have code drift problems. The rescan issues should\n> not involve touching nearly as much code.\n\nI also fixed the execute permissions, switched from ExecEvalFunc to \nExecEvalExpr, and fixed a bug that I found in _outRangeTblEntry (which \nwas preventing creation of a VIEW using a RangeFunction). If this could \nbe applied it would definitely help -- it's getting hard to keep it in \nsync with cvs due to its size. The patch applies cleanly to cvs tip as \nof a few minutes ago, and passes all regression tests.\n\nThanks,\n\nJoe",
"msg_date": "Tue, 07 May 2002 21:34:55 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "SRF patch (was Re: [HACKERS] Set Returning Functions (SRF) - request\n\tfor patch review and comment)"
},
{
"msg_contents": "Joe Conway wrote:\n > Tom Lane wrote:\n >\n >> Sure. foo.foo is valid for a column foo in a table foo, so I\n >> don't see a problem with it for a function.\n >\n > Fixed\n\nSorry -- when I fixed this, I introduced a new bug which only shows for\nfunctions returning composite types, and of course I tested one \nreturning a base type :(\n\nIf you do apply the last srf patch, please apply this one over it.\n\nThanks,\n\nJoe",
"msg_date": "Tue, 07 May 2002 22:17:13 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] Set Returning Functions"
},
{
"msg_contents": "On Monday 06 May 2002 18:51, Joe Conway wrote:\n(...)\n> Request for help:\n> -----------------\n> So far I've tested with SQL and C functions. \n(...)\n\nCan you post an example of a function in C?\n(I'm trying out your patch from Friday).\n\n\nThanks,\n\nIan Barwick\n",
"msg_date": "Sat, 11 May 2002 18:01:46 +0200",
"msg_from": "Ian Barwick <barwick@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Set Returning Functions (SRF) - request for patch review and\n\tcomment"
},
{
"msg_contents": "Ian Barwick wrote:\n> On Monday 06 May 2002 18:51, Joe Conway wrote:\n> (...)\n> \n>>Request for help:\n>>-----------------\n>>So far I've tested with SQL and C functions. \n> \n> (...)\n> \n> Can you post an example of a function in C?\n> (I'm trying out your patch from Friday).\n> \n> \n> Thanks,\n> \n> Ian Barwick\n\nSee contrib/dblink. The version in cvs HEAD has two that return sets -- \ndblink() which returns an int, and dblink_get_pkey() which returns text. \nI don't have an example that returns a composite type though. I'll make \none of those for testing some time next week.\n\nJoe\n\n\n",
"msg_date": "Sat, 11 May 2002 20:28:58 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: Set Returning Functions (SRF) - request for patch review"
},
{
"msg_contents": "\n>\n> See contrib/dblink. The version in cvs HEAD has two that return sets --\n> dblink() which returns an int, and dblink_get_pkey() which returns text.\n\nThanks, now I can see what I was doing wrong\n\nYours\n\nIan Barwick\n\n",
"msg_date": "Sun, 12 May 2002 17:54:59 +0200",
"msg_from": "Ian Barwick <barwick@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Set Returning Functions (SRF) - request for patch review"
}
]
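A note on the aliasing rules settled in the thread above: with the default-alias change Tom suggests, the function name serves as the relation alias, and for a base-type SRF the single result column is named for the function as well. A minimal SQL sketch against the thread's own `foo`/`getfoo` example — the `getfooname` function is hypothetical, and the default-alias form assumes the revised patch rather than the one first posted:

```sql
-- Hypothetical base-type SRF, following the getfoo() example above:
-- returns a set of text rather than a composite type.
CREATE FUNCTION getfooname(int) RETURNS setof text AS
  'SELECT fooname FROM foo WHERE fooid = $1;' LANGUAGE SQL;

-- Explicit alias, as required by the initially posted patch;
-- the single column of a base-type SRF is named for the function:
SELECT t1.getfooname FROM getfooname(1) AS t1;

-- With the default-alias change, the function name doubles as both the
-- relation alias and the column alias, just as foo.foo is valid for a
-- column foo in a table foo:
SELECT getfooname.getfooname FROM getfooname(1);
```

The `foo.foo` precedent Tom cites is what makes the doubled name unambiguous: qualification by the relation alias always resolves the column.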
[
{
"msg_contents": "Hello postgresql-hackers,\n\nBeen a while since I've participated on this list so I wanted to say\nthank you for the great product postgresql 7.2.1 is! I have been \ndoing some testing in preparation of a database upgrade from 7.0.3\nto 7.2.1 and I have a few small itches to scratch, so I thought I'd\nget opinions from the experts :)\n\n\nItch #1: Referential Integrity trigger names in psql \\d output.\n\nCurrently the \\d output from psql displays non-obvious names for \nwhat the RI trigger actually does. Reading through the source code\nand querying the mailing lists indicates this is easily changed (and \nI have already done this on a test database without any ill effects\nso far).\n\nHere is what the \\d displays from 7.2.1:\n\nBefore:\n--------------------------------------------\ntest=# \\d foo\n Table \"foo\"\n Column | Type | Modifiers \n---------+---------+-----------\n blah_id | integer | not null\n foo | text | not null\nPrimary key: foo_pkey\nTriggers: RI_ConstraintTrigger_30670\n\ntest=# \\d blah \n Table \"blah\"\n Column | Type | Modifiers \n---------+---------+-----------\n blah_id | integer | not null\n blah | text | not null\nPrimary key: blah_pkey\nTriggers: RI_ConstraintTrigger_30672,\n RI_ConstraintTrigger_30674\n\n\nAfter:\n--------------------------------------------\ntest=# \\d foo\n Table \"foo\"\n Column | Type | Modifiers \n---------+---------+-----------\n blah_id | integer | not null\n foo | text | not null\nPrimary key: foo_pkey\nTriggers: RI_blah_id (insert)\n\ntest=# \\d blah\n Table \"blah\"\n Column | Type | Modifiers \n---------+---------+-----------\n blah_id | integer | not null\n blah | text | not null\nPrimary key: blah_pkey\nTriggers: RI_blah_id (delete),\n RI_blah_id (update)\n\n\nThis change was made with a simple update to the pg_trigger\nsystem table for the tgname column.\n\nSearching through the code and the mailing list, it looks like\nthe only constraint to the tgname column is that it needs to 
be\nunique (although the database schema does not enforce this via \na unique index) since the OID tacked on to the RI_ConstraintTrigger_*\nwas designed to keep this uniqueness.\n\nWhat I would propose is to base the RI_* off the constraint name provided\nduring the RI trigger creation; if the constraint name is not provided,\nthen to default to the current naming scheme.\n\nCan anyone think of side-effects of changing the tgname column in the\npg_trigger system table? Does this proposal seem like an acceptable\nsolution? Would there be interest in this if I provided a patch to do\nthis?\n\n\n\nItch #2: Alter ownership on a sequence, etc.\n\nAlter table provides the functionality to change the ownership of a\ntable, but ownership on other structures like sequences, etc can not\nbe changed without dropping and recreating as the new owner. Would\nthere be any interest if I worked on a patch to do this too?\n\n\n\nThanks again for all the hard work and a great database!\n\n- Ryan Bradetich\n\n\n",
"msg_date": "06 May 2002 11:52:00 -0600",
"msg_from": "Ryan Bradetich <rbradetich@uswest.net>",
"msg_from_op": true,
"msg_subject": "a couple of minor itches: RI Trigger Names, and additional Alter\n\townerships commands."
},
{
"msg_contents": "Ryan Bradetich wrote:\n > Can anyone think of side-affects of changing the tgname column in\n > the pg_trigger system table? Does this proposal seem like an\n > acceptable solution? Would there be interest in this if I provided\n > a patch to do this?\n\nFWIW, not exactly what you are proposing, but ALTER TRIGGER RENAME is\navailable in current CVS. See:\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/sql-altertrigger.html\n\nJoe\n\n",
"msg_date": "Mon, 06 May 2002 13:22:05 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: a couple of minor itches: RI Trigger Names, and additional"
},
{
"msg_contents": "Scratch itch #2. I just figured out how to change the ownership of a\nsequence:\n\n\talter table <sequence> owner to <owner>.\n\nAmazing how easy it is to figure out once you have posted the question\nto the mailing list :)\n\nthanks,\n\n- Ryan\n\n\nOn Mon, 2002-05-06 at 11:52, Ryan Bradetich wrote:\n.. snip ...\n> Itch #2: Alter ownership on a sequence, etc.\n> \n> Alter table provides the functionality to change the ownership of a\n> table, but ownership on other structures like sequences, etc can not\n> be changed without dropping and recreating as the new owner. Would\n> there be any interest if I worked on a patch to do this too?\n> \n> \n> \n> Thanks again for all the hard work and a great database!\n> \n> - Ryan Bradetich\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n",
"msg_date": "06 May 2002 17:01:06 -0600",
"msg_from": "Ryan Bradetich <rbradetich@uswest.net>",
"msg_from_op": true,
"msg_subject": "Re: a couple of minor itches: RI Trigger Names, and"
},
{
"msg_contents": "Joe,\n\nThanks for the link. Didn't think to check the patches list :( I found\nand downloaded the patch, but the patch does not apply cleanly (it\ndepends on some new files that are not present yet like: \n\tsrc/backend/commands/tablecmds.c\n)\n\n\nBut from what I have read, and the examples you gave this will do\nexactly what I want :) not automatic, but still gives me the ability to\nrename the trigger to something more obvious!\n\nI'll have to play around with this some more and see if I can get it to\napply... or just wait until 7.2.2 comes out :)\n\nThanks for the excellent patch and pointer!\n\n- Ryan\n\n\nOn Mon, 2002-05-06 at 14:22, Joe Conway wrote:\n> Ryan Bradetich wrote:\n> > Can anyone think of side-affects of changing the tgname column in\n> > the pg_trigger system table? Does this proposal seem like an\n> > acceptable solution? Would there be interest in this if I provided\n> > a patch to do this?\n> \n> FWIW, not exactly what you are proposing, but ALTER TRIGGER RENAME is\n> available in current CVS. See:\n> http://candle.pha.pa.us/main/writings/pgsql/sgml/sql-altertrigger.html\n> \n> Joe\n> \n> \n\n\n",
"msg_date": "07 May 2002 19:42:04 -0600",
"msg_from": "Ryan Bradetich <rbradetich@uswest.net>",
"msg_from_op": true,
"msg_subject": "Re: a couple of minor itches: RI Trigger Names, and"
},
{
"msg_contents": "> Thanks for the link. Didn't think to check the patches list :( I found\n> and downloaded the patch, but the patch does not apply cleanly (it\n> depends on some new files that are not present yet like:\n> \tsrc/backend/commands/tablecmds.c\n> )\n>\n>\n> But from what I have read, and the examples you gave this will do\n> exactly what I want :) not automatic, but still gives me the ability to\n> rename the trigger to something more obvious!\n>\n> I'll have to play around with this some more and see if I can get it to\n> apply... or just wait until 7.2.2 comes out :)\n>\n> Thanks for the excellent patch and pointer!\n\nYou'll actually have to wait for 7.3 - many months off yet!\n\nIn the meantime, can't you just edit the system catalogs directly and then\nrestart your postmaster, instead of applying a patch that may or may not\nwork on old sources?\n\nChris\n\n",
"msg_date": "Wed, 8 May 2002 10:26:51 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: a couple of minor itches: RI Trigger Names, and"
},
{
"msg_contents": "Yep,\n\nThat is what I am doing now. Joe pointed me to his patch that did it\nthe SQL way :) \n\nI would rather not re-invent work that someone else has already and has\ncommitted to the tree. Joe did a nice job, so I'd rather apply his work\nto my older tree if I could. Looks like it will be too much trouble, so\nI'll manually update the system catalogs for now, and patiently wait for\n7.3 :)\n\nThanks,\n\n- Ryan\n\nOn Tue, 2002-05-07 at 20:26, Christopher Kings-Lynne wrote:\n> > Thanks for the link. Didn't think to check the patches list :( I found\n> > and downloaded the patch, but the patch does not apply cleanly (it\n> > depends on some new files that are not present yet like:\n> > \tsrc/backend/commands/tablecmds.c\n> > )\n> >\n> >\n> > But from what I have read, and the examples you gave this will do\n> > exactly what I want :) not automatic, but still gives me the ability to\n> > rename the trigger to something more obvious!\n> >\n> > I'll have to play around with this some more and see if I can get it to\n> > apply... or just wait until 7.2.2 comes out :)\n> >\n> > Thanks for the excellent patch and pointer!\n> \n> You'll actually have to wait for 7.3 - many months off yet!\n> \n> In the meantime, can't you just edit the system catalogs directly and then\n> restart your postmaster, instead of applying a patch that may or may not\n> work on old sources?\n> \n> Chris\n> \n> \n\n\n",
"msg_date": "07 May 2002 20:35:50 -0600",
"msg_from": "Ryan Bradetich <rbradetich@uswest.net>",
"msg_from_op": true,
"msg_subject": "Re: a couple of minor itches: RI Trigger Names, and"
},
{
"msg_contents": "> I would rather not re-invent work that someone else has already and has\n> committed to the tree. Joe did a nice job, so I'd rather apply his work\n> to my older tree if I could. Looks like it will be too much trouble, so\n> I'll manually update the system catalogs for now, and patiently wait for\n> 7.3 :)\n\nThanks!\n\nI took a quick look at back-patching the REL7_2_STABLE branch, but quite \na bit has changed since then -- sorry :(\n\nIf you do undertake this yourself, please note that Tom Lane cleaned up \na few things after my patch was committed -- so you'll want his final \nversion of the renametrig() function, which is now in \n~/src/backend/commands/trigger.c.\n\nIn the meantime, manually updating the system catalogs to rename a \ntrigger should work fine, as long as you restart your backends -- see \nTom's email (if you haven't already) for more detail:\nhttp://archives.postgresql.org/pgsql-hackers/2002-04/msg01091.php\n\nJoe\n\n\n",
"msg_date": "Tue, 07 May 2002 20:55:40 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: a couple of minor itches: RI Trigger Names, and"
}
]
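The two routes discussed in this thread — the direct catalog edit Ryan used on 7.2, and the ALTER TRIGGER RENAME command Joe pointed to in CVS — can be sketched as follows. The trigger and table names come from Ryan's example output; the sequence name in the last statement is a placeholder:

```sql
-- 7.2-era workaround: rename the RI trigger directly in pg_trigger.
-- tgname has no unique index, so pick a name not already in use, and
-- restart the backends afterwards so they pick up the change:
UPDATE pg_trigger
   SET tgname = 'RI_blah_id'
 WHERE tgname = 'RI_ConstraintTrigger_30670';

-- The equivalent command available in CVS (7.3):
ALTER TRIGGER "RI_ConstraintTrigger_30670" ON foo
      RENAME TO "RI_blah_id";

-- And the sequence-ownership change Ryan found for itch #2
-- (ALTER TABLE accepts a sequence name here):
ALTER TABLE some_sequence OWNER TO newowner;
```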
[
{
"msg_contents": "\nAre the numbers of the directories in the base directory and the numbers of\nthe directories under that, etc. traceable to a reference somewhere in the\npostgresql server using that data directory (such as the pg_database table\nor such)? If so, is there somewhere this is documented?\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nWhere's my....bus?\n\n",
"msg_date": "Mon, 6 May 2002 16:31:00 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "pgsql_data/base mapping"
},
{
"msg_contents": "The numbers are the same as the oids found in pg_database (as you expected). \n\nThis question is better suited for the -general list, for future reference.\n\nRegards,\n\tJeff\n\nOn Monday 06 May 2002 04:31 pm, Laurette Cisneros wrote:\n> Are the numbers of the directories in the base directory and the numbers of\n> the directories under that, etc. traceable to a reference somewhere in the\n> postgresql server using that data directory (such as the pg_database table\n> or such)? If so, is there somewhere this is documented?\n>\n> Thanks,\n",
"msg_date": "Mon, 6 May 2002 16:42:51 -0700",
"msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql_data/base mapping"
},
{
"msg_contents": "Laurette Cisneros wrote:\n> \n> Are the numbers of the directories in the base directory and the numbers of\n> the directories under that, etc. traceable to a reference somewhere in the\n> postgresql server using that data directory (such as the pg_database table\n> or such)? If so, is there somewhere this is documented?\n\nYou can use /contrib/oid2name to get the database names and table names\nfrom the oid file numbers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 3 Jun 2002 13:25:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql_data/base mapping"
}
] |
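The mapping discussed in the thread above — database directories under `$PGDATA/base` named for `pg_database` oids, relation files inside them named for `pg_class` oids, with relations over 1 GB split into numbered segment files — can be sketched in a few lines. This is only an illustrative helper, not the `contrib/oid2name` code itself; the dictionaries stand in for the output of `SELECT oid, datname FROM pg_database` and `SELECT oid, relname FROM pg_class`, and any oid values used with it are hypothetical.

```python
import re

def describe_file(db_oids, rel_oids, db_dir, file_name):
    """Resolve one entry under $PGDATA/base/<db_dir>/<file_name> to names.

    db_oids  -- {oid: datname}, i.e. SELECT oid, datname FROM pg_database;
    rel_oids -- {oid: relname}, i.e. SELECT oid, relname FROM pg_class;
    Relations larger than 1 GB are split into extra segment files named
    <oid>.1, <oid>.2, and so on.
    """
    m = re.fullmatch(r"(\d+)(?:\.(\d+))?", file_name)
    if m is None:
        raise ValueError("not a relation data file: %s" % file_name)
    rel_oid, segment = int(m.group(1)), int(m.group(2) or 0)
    return (db_oids.get(int(db_dir), "?"),
            rel_oids.get(rel_oid, "?"),
            segment)
```

For example, with hypothetical catalogs `{23424803: "grow"}` and `{23424806: "pg_toast_23424804"}`, the path component pair `("23424803", "23424806.3")` resolves to segment 3 of that toast table in database `grow`.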
[
{
"msg_contents": "There has been a discussion on the general list about this area. One of\nthe members produced a test case for demonstrating rapid size increase.\n\nI decided to see if I could induce similar behaviour with a more\n(seemingly) benign example.\n\nI tried this :\n\n1) Create a table and load 100000 rows (with a primary key)\n2) Run several threads that update 1 row and commit (looping continuously with a\nrest every 100 updates or so)\n3) Run 1 thread that (lazy) vacuums (every 3 minutes or so)\n\nI ran 10 threads in 2) and saw my database grow from the initial size of\n150M by about 1G per hour (I stopped my test after 5 hours @ 4.5G).\n\nThe table concerned uses a large text field... it might be instructive\nto see if this is central to producing this growth (I will see if a more\nconventional table design can exhibit this behaviour if anyone is keen\nto know).\n\nFor those interested the test case I used can be found here :\n\nhttp://homepages.slingshot.co.nz/~markir/tar/test/spin.tar.gz\n\nregards\n\nMark\n\n\n",
"msg_date": "07 May 2002 18:20:51 +1200",
"msg_from": "Mark kirkwood <markir@slingshot.co.nz>",
"msg_from_op": true,
"msg_subject": "Unbounded (Possibly) Database Size Increase - Test Case"
},
{
"msg_contents": "Mark kirkwood <markir@slingshot.co.nz> writes:\n> I ran 10 threads in 2) and saw my database grow from the initial size of\n> 150M by about 1G per hour (I stopped my test after 5 hours @ 4.5G).\n\nWhich files grew exactly? (Main table, indexes, toast table, toast index?)\n\nWas the FSM size parameter set large enough to cover the amount of space\nyou need the system to be able to recycle --- viz, the amount used\nbetween vacuum runs? As with most everything else in PG, the default\nvalue is not real large: 10000 pages = 80MB.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 May 2002 09:45:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Test Case "
},
{
"msg_contents": "On Wed, 2002-05-08 at 01:45, Tom Lane wrote:\n \n> Which files grew exactly? (Main table, indexes, toast table, toast index?)\n\nHere is a listing (from another run - I dumped and reloaded before getting\nany of that info last time...)\n\n\n[:/data1/pgdata/7.2/base/23424803]$ du -sk .\n4900806 .\n\n-rw------- 1 postgres dba 1073741824 May 9 21:20 23424806.3\n-rw------- 1 postgres dba 1073741824 May 9 21:19 23424806.2\n-rw------- 1 postgres dba 1073741824 May 9 21:18 23424806.1\n-rw------- 1 postgres dba 1073741824 May 9 21:16 23424806\n-rw------- 1 postgres dba 124444672 May 9 21:16 23424808\n-rw------- 1 postgres dba 587505664 May 9 21:14 23424806.4\n-rw------- 1 postgres dba 5914624 May 9 21:05 23424804\n-rw------- 1 postgres dba 2441216 May 9 21:05 23424809\n\nThese files are for :\n\ngrow=# select relname,oid\ngrow-# from pg_class where oid in\n('23424806','23424808','23424804','23424809'); relname | \noid\n-----------------------+----------\n pg_toast_23424804_idx | 23424808\n pg_toast_23424804 | 23424806\n grow_pk | 23424809\n grow | 23424804\n (4 rows)\n\nso the big guy is the toast table and index\n- BTW the table design is \nCREATE TABLE grow (id integer,body text,CONSTRAINT grow_pk PRIMARY KEY\n(id))\n\nThe row length is big ~ 14K. I am wondering if this behaviour will \"go\naway\" if I recompile with a 32K page size (also seem to recall I can\ntell Pg not to toast certain column types) \n> \n> Was the FSM size parameter set large enough to cover the amount of space\n> you need the system to be able to recycle --- viz, the amount used\n> between vacuum runs? As with most everything else in PG, the default\n> value is not real large: 10000 pages = 80MB.\n\nI thought I was generous here ...~ 960M free space map\n\nmax_fsm_relations = 100 # min 10, fsm is free space map\nmax_fsm_pages = 120000 # min 1000, fsm is free space map\n\nI think I need to count how many vacuums were performed during the test, so I\ncan work out if this amount should have been enough. I timed a vacuum\nnow at 12 minutes. (So with 10 concurrent threads it could take a lot\nlonger during the run )\n\nregards\n\nMark\n \n \n\n\n",
"msg_date": "09 May 2002 21:21:58 +1200",
"msg_from": "Mark kirkwood <markir@slingshot.co.nz>",
"msg_from_op": true,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Test"
},
{
"msg_contents": "Mark kirkwood <markir@slingshot.co.nz> writes:\n>> Was the FSM size parameter set large enough to cover the amount of space\n>> you need the system to be able to recycle --- viz, the amount used\n>> between vacuum runs? As with most everything else in PG, the default\n>> value is not real large: 10000 pages = 80MB.\n\n> I thought I was generous here ...~ 960M free space map\n\n> max_fsm_relations = 100 # min 10, fsm is free space map\n> max_fsm_pages = 120000 # min 1000, fsm is free space map\n\n> I think I need to count how many vacuums performed during the test, so I\n> can work out if this amount should have been enough. I timed a vacuum\n> now at 12 minutes. (So with 10 concurrent threads it could take a lot\n> longer during the run )\n\nKeep in mind also that you need enough FSM entries to keep track of\npartially-full pages. To really lock things down and guarantee no\ntable growth you might need one FSM slot for every page in your\nrelations. In practice you should be able to get away with much less\nthan that: you certainly don't need entries for pages with no free\nspace, and pages with only a little free space shouldn't be worth\ntracking either. But if your situation is 100% update turnover between\nvacuums then you could have a worst-case situation where all the pages\nhave roughly 50% free space right after a vacuum, and if you fail to\ntrack them *all* then you're probably going to see some table growth\nin the next cycle.\n\nI believe that with a more reasonable vacuum frequency (vacuum after\n10% to 25% turnover, say) the FSM requirements should be a lot less.\nBut I have not had time to do any experimentation to arrive at a rule\nof thumb for vacuum frequency vs. FSM requirements. If you or someone\ncould run some experiments, it'd be a big help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 13:59:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Test Case "
},
{
"msg_contents": "On Thu, 2002-05-09 at 14:21, Mark kirkwood wrote:\n> On Wed, 2002-05-08 at 01:45, Tom Lane wrote:\n> \n> > Which files grew exactly? (Main table, indexes, toast table, toast index?)\n> \n> Here a listing (from another run - I dumped and reloaded before getting\n> any of that info last time...)\n> \n> \n> [:/data1/pgdata/7.2/base/23424803]$ du -sk .\n> 4900806 .\n> \n> -rw------- 1 postgres dba 1073741824 May 9 21:20 23424806.3\n> -rw------- 1 postgres dba 1073741824 May 9 21:19 23424806.2\n> -rw------- 1 postgres dba 1073741824 May 9 21:18 23424806.1\n> -rw------- 1 postgres dba 1073741824 May 9 21:16 23424806\n> -rw------- 1 postgres dba 124444672 May 9 21:16 23424808\n> -rw------- 1 postgres dba 587505664 May 9 21:14 23424806.4\n> -rw------- 1 postgres dba 5914624 May 9 21:05 23424804\n> -rw------- 1 postgres dba 2441216 May 9 21:05 23424809\n> \n> These files are for :\n> \n> grow=# select relname,oid\n> grow-# from pg_class where oid in\n> ('23424806','23424808','23424804','23424809'); relname | \n> oid\n> -----------------------+----------\n> pg_toast_23424804_idx | 23424808\n> pg_toast_23424804 | 23424806\n> grow_pk | 23424809\n> grow | 23424804\n> (4 rows)\n> \n> so the big guy is the toast table and index\n> - BTW the table design is \n> CREATE TABLE grow (id integer,body text,CONSTRAINT grow_pk PRIMARY KEY\n> (id))\n\nWas it not the case that lazy vacuum had problems freeing tuples that\nhave toasted fields ?\n\n> The row length is big ~ 14K. I am wondering if this behaviour will \"go\n> away\" if I use recompile with a 32K page size (also seem to recall I can\n> tell Pg not to toast certain column types) \n\n----------\nHannu\n\n\n",
"msg_date": "11 May 2002 01:43:22 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Test"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Was it not the case that lazy vacuum had problems freeing tuples that\n> have toasted fields ?\n\nNews to me if so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 19:24:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Test "
},
{
"msg_contents": "On Sat, 2002-05-11 at 11:24, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Was it not the case that lazy vacuum had problems freeing tuples that\n> > have toasted fields ?\n> \n> News to me if so.\n> \n> \t\t\tregards, tom lane\n\nIt looks like this may in fact be the case.\n\nI performed a number of tests using the previous setup, but shortening the row length and\nusing pg_attribute.attstorage to untoast the text field for some of the tests.\n\nThe difference is striking. \n\nThe behaviour of the untoasted case is pretty much as expected :\nthe database grows a bit and then stabilizes at some size.\n\nHowever I could not get any size stabilization in the toasted case.\n\n\nHere are (some) of my test results :\n\nFsm Siz\t|Threads|Toast\t|Init(M)|End (M)|Stable\t|Stable Time(h)\t|Run Time(h)\n 20000\t| 2\t|Y\t| 166\t| 380\t| N\t| -\t\t|17\n 60000\t| 2\t|Y\t| 166\t| 430\t| N\t| -\t\t|20\n 10000\t| 2\t|N\t| 162\t| 235 \t| Y\t| 0.5 \t\t|1\n 20000\t| 2\t|N\t| 166\t| 235\t| Y\t| 0.5 \t\t|13\n 60000\t| 2\t|N\t| 166\t| 235\t| Y\t| 0.5\t\t|13\n\nlegend :\n\nFsm Siz \t\t= max_fsm_pages\nThreads \t\t= no. update threads\nToast\t\t\t= whether body field was toasted\nInit\t\t\t= initial database size\nEnd \t\t\t= final database size\nStable\t\t\t= whether database growth had stopped\nStable Time\t\t= when stable size was achieved\nRun Time\t\t= length of test run (excluding initial database population)\n\nAverage vacuum time \t\t\t\t= 300s\nTypical (1 thread) entire table update time\t= 2000s\nRow length\t\t\t\t\t= 7.5K\n\nThe scripts I used are here :\n\nhttp://homepages.slingshot.co.nz/~markir/tar/test/spin.tar.gz\n\n\nAt this point I am wondering about sending this in as a bug report - what do you think ?\n\nregards, \n\nMark\n\n",
"msg_date": "19 May 2002 14:59:10 +1200",
"msg_from": "Mark kirkwood <markir@slingshot.co.nz>",
"msg_from_op": true,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting"
},
{
"msg_contents": "Mark kirkwood <markir@slingshot.co.nz> writes:\n> However I could not get any size stabilization in the toasted case.\n\nHmm. Which file(s) were growing, exactly? How many row updates is this\nrun covering?\n\nI'd rather expect the toast indexes to grow given the lack-of-btree-\ncollapse-logic issue. However, the rate of growth ought to be pretty\ntiny --- much less than the amount of data being pumped through, for\nsure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 May 2002 13:37:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Sun, 2002-05-19 at 19:37, Tom Lane wrote:\n>> I'd rather expect the toast indexes to grow given the lack-of-btree-\n>> collapse-logic issue. \n\n> Why should the toast indexes grow significantly more than the primary key\n> of main table ?\n\nWell, the toast indexes will grow because they're using an OID key,\nand so the range of indexed values keeps increasing. AFAIR Mark didn't\nsay whether he *had* a primary key, let alone what it was --- but it's\npossible that he has one that has a range that's not changing over the\ntest.\n\nIn particular, if the test consists simply of updating the toasted\nfield, that will not change the primary keys at all ... but it will\nchange the toast table's key range, because each new value will get\na new toast OID.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 May 2002 10:08:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting "
},
{
"msg_contents": "On Sun, 2002-05-19 at 19:37, Tom Lane wrote:\n> Mark kirkwood <markir@slingshot.co.nz> writes:\n> > However I could not get any size stabilization in the toasted case.\n> \n> Hmm. Which file(s) were growing, exactly? How many row updates is this\n> run covering?\n> \n> I'd rather expect the toast indexes to grow given the lack-of-btree-\n> collapse-logic issue. \n\nWhy should the toast indexes grow significantly more than the primary key\nof main table ?\n\n> However, the rate of growth ought to be pretty\n> tiny --- much less than the amount of data being pumped through, for\n> sure.\n\n----------\nHannu\n\n\n",
"msg_date": "20 May 2002 16:27:59 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> But does PG not have a new index entry for each _version_ of table row ?\n\nSure, but the entries do go away during vacuum.\n\n> Or does lack-of-btree-collapse-logic affect only keys where there are\n> many _different_ keys and not many repeating keys?\n\nThe problem is that once the btree is constructed, the key ranges\nassigned to the existing leaf pages can't grow, only shorten due\nto page splits. So if you've got, say,\n\n\t 1 2 3 | 4 5 6 | 7 8 9\n\n(schematically suggesting 3 leaf pages with 9 keys) and you delete\nkeys 1-3 and vacuum, you now have\n\n\t - - - | 4 5 6 | 7 8 9\n\nLots of free space in leaf page 1, but that doesn't help you when you\nwant to insert keys 10, 11, 12. That leaf page can only be used for\nkeys <= 3, or possibly <= 4, depending on what boundary key is shown\nin the next btree level. So if you reinsert rows with the same range\nof keys as you had before, no index growth. If the range of keys\nmoves, new pages will keep getting added on at the right end of the\nbtree. Old pages at the left end will never go away, even if they\nbecome mostly or entirely empty.\n\nAFAICS we cannot fix this except by reverse-splitting adjacent index\npages when their combined usage falls below some threshold. (The\nreverse split would give us one unused page that could be put in a\nfreelist and then used somewhere else in the index structure.)\nIn principle VACUUM could do this, but it's ticklish to code, especially\ngiven the desire not to acquire exclusive locks while vacuuming.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 May 2002 11:05:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting "
},
{
"msg_contents": "On Mon, 2002-05-20 at 16:08, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Sun, 2002-05-19 at 19:37, Tom Lane wrote:\n> >> I'd rather expect the toast indexes to grow given the lack-of-btree-\n> >> collapse-logic issue. \n>\n> > Why sould the toast indexes grow significantly more than the primary key\n> > of main table ?\n> \n> Well, the toast indexes will grow because they're using an OID key,\n> and so the range of indexed values keeps increasing. AFAIR Mark didn't\n> say whether he *had* a primary key, let alone what it was --- but it's\n> possible that he has one that has a range that's not changing over the\n> test.\n\nhis table is this:\n\nCREATE TABLE grow (id integer,body text,CONSTRAINT grow_pk PRIMARY KEY (id)) \n> In particular, if the test consists simply of updating the toasted\n> field, that will not change the primary keys at all ... but it will\n> change the toast table's key range, because each new value will get\n> a new toast OID.\n\nBut does PG not have a new index entry for each _version_ of table row ?\n\nOr does lack-of-btree-collapse-logic affect only keys where there are\nmany _different_ keys and not many repeating keys?\n\n--------------\nHannu\n\n\n",
"msg_date": "20 May 2002 17:39:28 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting"
}
] |
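Tom's explanation of why the toast index keeps growing can be seen with a toy model (purely illustrative — this is not the real nbtree code, and page capacities and splitting are grossly simplified): when every new row version gets a strictly larger key, as toast OIDs do, pages emptied by VACUUM linger at the left end of the tree and the leaf count grows with every update cycle, while a stable key range like the `grow` table's primary key keeps reusing its pages.

```python
def simulate(cycles, rows, cap=4, fresh_key=True):
    """Toy model of btree leaf pages without reverse splits.

    Each cycle deletes every live index entry (as UPDATE + VACUUM would)
    and inserts `rows` new ones.  With fresh_key=True every new key is
    larger than any before it (like toast OIDs); with fresh_key=False
    the same key range is reused (like a stable primary key).  Pages
    emptied by VACUUM stay in the tree and can only accept keys <= their
    boundary, so ever-larger keys never land in them.
    """
    pages = []      # one [boundary_key, live_entries] pair per leaf page
    next_key = 0
    for _ in range(cycles):
        for p in pages:
            p[1] = 0                    # VACUUM clears the dead entries
        for i in range(rows):
            key = next_key if fresh_key else i
            next_key += 1
            for p in pages:             # find a page whose range covers key
                if key <= p[0] and p[1] < cap:
                    p[1] += 1
                    break
            else:
                pages.append([key + cap - 1, 1])   # grow at the right end
    return len(pages)
```

With 8 rows per cycle and toy pages of 4 entries, five update cycles leave the monotone-key index at 10 leaf pages while the fixed-range index stays at 2 — the grow-at-the-right-end pattern described above.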
[
{
"msg_contents": "Hi Alvaro, Hi Nigel,\n\nThanks for your reply. I indeed already tried with a plpgsql function. But\nthat's just my problem : if I call a function from within a view's rule,\nthis function is not executed anymore with the same rights as a user had on\nthe view. So if a user may access a view, but not the table behind, calling\na function in the DO INSTEAD- clause will not execute the function with the\nproper (view) rights on the table ... \n\n(to all) Could anyone - (developers, eventually ?) explain to me why the\n(security) context of a function call is not passed along when the function\ngets called from within a view ? I think this feature is for sure not\nsuperfluous, and I could consider having a look into the code to have this\nchanged (but I think this is a VERY big pile of source code I never ever\nlooked at before, so this would take a lot of effort ... for me)\n\nKind regards,\n\nPhilippe Bertin.\n",
"msg_date": "Tue, 7 May 2002 08:34:12 +0200 ",
"msg_from": "\"Bertin, Philippe\" <philippe.bertin@barco.com>",
"msg_from_op": true,
"msg_subject": "Re: IF- statements in a rule's 'DO INSTEAD SELECT ...'- statement"
},
{
"msg_contents": "On Tue, 7 May 2002, Bertin, Philippe wrote:\n\nHi Philippe,\n\n> Thanks for your reply. I indeed already tried with a plpgsql function. But\n> that's just my problem : if I call a function from within a view's rule,\n> this function is not executed anymore with the same rights as a user had on\n> the view. So if a user may access a view, but not the table behind, calling\n> a function in the DO INSTEAD- clause will not execute the function with the\n> proper (view) rights on the table ... \n\nOh, sure, you are right.\n\n> (to all) Could anyone - (developers, eventually ?) explain to me why the\n> (security) context of a function call is not passed along when the function\n> gets called from within a view ? I think this feature is for sure not\n> superfluous, and I could consider having a look into the code to have this\n> changed (but I think this is a VERY big pile of source code I never ever\n> looked at before, so this would take a lot of effort ... for me)\n\nThat feature is added in current CVS I think. Maybe you can look at\ncurrent sources and backport the patch.\n\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"La verdad no siempre es bonita, pero el hambre de ella si\"\n\n",
"msg_date": "Tue, 7 May 2002 11:03:20 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: IF- statements in a rule's 'DO INSTEAD SELECT ...'- statement"
}
] |
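The behaviour Philippe describes can be reduced to a toy permission model (purely illustrative — this is not PostgreSQL's ACL code, and the object names are made up): rule expansion checks a view's inner references against the view *owner*, but a function referenced there still executes as the original session user, so the caller also needs direct rights on whatever table the function touches. The feature Alvaro says is in current CVS for 7.3 — "setuid" (SECURITY DEFINER) functions that run as their owner — is what removes this limitation.

```python
def allowed(session_user, as_user, name, catalog):
    """Toy model of pre-7.3 permission checking (illustrative only).

    catalog maps object names to {"kind", "owner", "acl", "ref"}, where
    "ref" is the single object a view or function body touches.
    """
    obj = catalog[name]
    if as_user != obj["owner"] and as_user not in obj["acl"]:
        return False
    if obj["kind"] == "view":
        # rule expansion: inner references are checked as the view owner
        return allowed(session_user, obj["owner"], obj["ref"], catalog)
    if obj["kind"] == "function":
        # no SECURITY DEFINER yet: the body runs as the session user,
        # NOT with the rights the caller had on the enclosing view
        return allowed(session_user, session_user, obj["ref"], catalog)
    return True  # plain table: the ACL check above is all there is

catalog = {
    "t":  {"kind": "table",    "owner": "admin", "acl": set(),     "ref": None},
    "v":  {"kind": "view",     "owner": "admin", "acl": {"alice"}, "ref": "t"},
    "f":  {"kind": "function", "owner": "admin", "acl": {"alice"}, "ref": "t"},
    "vf": {"kind": "view",     "owner": "admin", "acl": {"alice"}, "ref": "f"},
}

print(allowed("alice", "alice", "v", catalog))   # True: view reaches t as its owner
print(allowed("alice", "alice", "vf", catalog))  # False: the function re-checks t as alice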
[
{
"msg_contents": "Hello,\n\nI'd like to contribute new code for Postgres geometry type 'path' operations\n(including line buffer). Where should I send this?\n\n-----------\nAlex Shevlakov,\nMotivation Free Software consulting,\nMoscow, Russia\n-----------\n\nhttp://motivation.ru",
"msg_date": "Tue, 7 May 2002 14:48:13 +0400",
"msg_from": "Alex Shevlakov <alex@asrv.fcpf.ru>",
"msg_from_op": true,
"msg_subject": "code contribution"
},
{
"msg_contents": "> I'd like to contribute new code for Postgres geometry type 'path' operations\n> (including line buffer). Where should I send this?\n\nSend to pgsql-patches@postgresql.org. It may be helpful to (i.e. please\ndo) post a summary of what you are intending to send to this mailing\nlist so folks have an idea of what is coming...\n\nRegards.\n\n - Tom\n",
"msg_date": "Tue, 07 May 2002 07:01:05 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: code contribution"
},
{
"msg_contents": "And remember to use 'cvs diff -c' to generate context sensitive diffs\nagainst CVS...\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Thomas Lockhart\n> Sent: Tuesday, 7 May 2002 10:01 PM\n> To: Alex Shevlakov\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] code contribution\n>\n>\n> > I'd like to contribute new code for Postgres geometry type\n> 'path' operations\n> > (including line buffer). Where should I send this?\n>\n> Send to pgsql-patches@postgresql.org. It may be helpful to (i.e. please\n> do) post a summary of what you are intending to send to this mailing\n> list so folks have an idea of what is coming...\n>\n> Regards.\n>\n> - Tom\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Wed, 8 May 2002 10:05:21 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: code contribution"
},
{
"msg_contents": "New 'path' functions (test results can be found at http://motivation.ru/grass/buff_q_example.html):\n\nThere are not many (as my task was only limited to extending GRASS vector\ncapabilities to buffering):\n\n/*check if a point is within buffer of line*/\nDatum path_buffer_contain_pt(PG_FUNCTION_ARGS);\n\n/*removes multiple repeated points from path*/ \nDatum path_without_doubles (PG_FUNCTION_ARGS);\nstatic PATH * path_no_dbles(PATH *path);\n\n/*returns closed path which is buffer to another path*/\nDatum return_path_buffer(PG_FUNCTION_ARGS);\n\n/*used to insert points along circle segment between two ends of non-intersecting buffer segments, smoothing the buffer*/\nstatic Point * point_rotate (Point *lpoint, Point *ax_point, float ang,\nint k, int n, int napr);\n\n/*writes path to arcinfo UNGENERATE format which is particularly useful for viewing buffer in GRASS*/\nDatum write_path_to_file(PG_FUNCTION_ARGS);\nint write_path_to_file_internal(PATH *path, char *str );\n\n/*generalize path,i.e., leave each third, or fourth, etc., point*/\nDatum reduce_path_points(PG_FUNCTION_ARGS);\nstatic PATH *path_reduce(PATH * path, int n_reduce);\n\nOn Tue, May 07, 2002 at 07:01:05AM -0700, Thomas Lockhart wrote:\n> > I'd like to contribute new code for Postgres geometry type 'path' operations\n> > (including line buffer). Where should I send this?\n> \n> Send to pgsql-patches@postgresql.org. It may be helpful to (i.e. please\n> do) post a summary of what you are intending to send to this mailing\n> list so folks have an idea of what is coming...\n> \n> Regards.\n> \n> - Tom\n",
"msg_date": "Wed, 8 May 2002 12:27:21 +0400",
"msg_from": "Alex Shevlakov <alex@asrv.fcpf.ru>",
"msg_from_op": true,
"msg_subject": "Re: code contribution"
},
{
"msg_contents": "\nSorry, I am just getting to this. I have the patch in my email box too.\n\nCan you explain what \"buffer of line\" is? I want to know if it is of\ngeneral usefulness.\n\n---------------------------------------------------------------------------\n\nAlex Shevlakov wrote:\n> New 'path' functions (test results can be found at http://motivation.ru/grass/buff_q_example.html):\n> \n> There are not many (as my task was only limited to extending GRASS vector\n> capabilities to buffering):\n> \n> /*check if a point is within buffer of line*/\n> Datum path_buffer_contain_pt(PG_FUNCTION_ARGS);\n> \n> /*removes multiple repeated points from path*/ \n> Datum path_without_doubles (PG_FUNCTION_ARGS);\n> static PATH * path_no_dbles(PATH *path);\n> \n> /*returns closed path which is buffer to another path*/\n> Datum return_path_buffer(PG_FUNCTION_ARGS);\n> \n> /*used to insert points along circle segment between two ends of non-intersecting buffer segments, smoothing the buffer*/\n> static Point * point_rotate (Point *lpoint, Point *ax_point, float ang,\n> int k, int n, int napr);\n> \n> /*writes path to arcinfo UNGENERATE format which is particularly useful for viewing buffer in GRASS*/\n> Datum write_path_to_file(PG_FUNCTION_ARGS);\n> int write_path_to_file_internal(PATH *path, char *str );\n> \n> /*generalize path,i.e., leave each third, or fourth, etc., point*/\n> Datum reduce_path_points(PG_FUNCTION_ARGS);\n> static PATH *path_reduce(PATH * path, int n_reduce);\n> \n> On Tue, May 07, 2002 at 07:01:05AM -0700, Thomas Lockhart wrote:\n> > > I'd like to contribute new code for Postgres geometry type 'path' operations\n> > > (including line buffer). Where should I send this?\n> > \n> > Send to pgsql-patches@postgresql.org. It may be helpful to (i.e. please\n> > do) post a summary of what you are intending to send to this mailing\n> > list so folks have an idea of what is coming...\n> > \n> > Regards.\n> > \n> > - Tom\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 3 Jun 2002 15:25:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: code contribution"
},
{
"msg_contents": "\nI am not sure that this has enough general interest to be included in\nour code. If you make a loadable module, we can add it to our web site.\n\n\n---------------------------------------------------------------------------\n\nAlex Shevlakov wrote:\n> New 'path' functions (test results can be found at http://motivation.ru/grass/buff_q_example.html):\n> \n> There are not many (as my task was only limited to extending GRASS vector\n> capabilities to buffering):\n> \n> /*check if a point is within buffer of line*/\n> Datum path_buffer_contain_pt(PG_FUNCTION_ARGS);\n> \n> /*removes multiple repeated points from path*/ \n> Datum path_without_doubles (PG_FUNCTION_ARGS);\n> static PATH * path_no_dbles(PATH *path);\n> \n> /*returns closed path which is buffer to another path*/\n> Datum return_path_buffer(PG_FUNCTION_ARGS);\n> \n> /*used to insert points along circle segment between two ends of non-intersecting buffer segments, smoothing the buffer*/\n> static Point * point_rotate (Point *lpoint, Point *ax_point, float ang,\n> int k, int n, int napr);\n> \n> /*writes path to arcinfo UNGENERATE format which is particularly useful for viewing buffer in GRASS*/\n> Datum write_path_to_file(PG_FUNCTION_ARGS);\n> int write_path_to_file_internal(PATH *path, char *str );\n> \n> /*generalize path,i.e., leave each third, or fourth, etc., point*/\n> Datum reduce_path_points(PG_FUNCTION_ARGS);\n> static PATH *path_reduce(PATH * path, int n_reduce);\n> \n> On Tue, May 07, 2002 at 07:01:05AM -0700, Thomas Lockhart wrote:\n> > > I'd like to contribute new code for Postgres geometry type 'path' operations\n> > > (including line buffer). Where should I send this?\n> > \n> > Send to pgsql-patches@postgresql.org. It may be helpful to (i.e. please\n> > do) post a summary of what you are intending to send to this mailing\n> > list so folks have an idea of what is coming...\n> > \n> > Regards.\n> > \n> > - Tom\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 17:29:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: code contribution"
}
] |
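For readers curious what the `point_rotate` helper described in this thread does geometrically, here is a hypothetical Python sketch (the real code is C inside the patch, and its exact signature — including the `napr` argument — differs): it yields the k-th of n intermediate points on the arc that rounds off the joint between two non-intersecting buffer segments, by rotating a segment endpoint about the joint vertex.

```python
import math

def point_rotate(lpoint, ax_point, ang, k, n):
    """Rotate lpoint about ax_point by the k-th of n equal steps of the
    total angle ang, giving one intermediate point on the smoothing arc.
    Hypothetical simplification of the static C helper listed above."""
    step = ang * k / n
    dx, dy = lpoint[0] - ax_point[0], lpoint[1] - ax_point[1]
    c, s = math.cos(step), math.sin(step)
    return (ax_point[0] + dx * c - dy * s,
            ax_point[1] + dx * s + dy * c)

# sweep a quarter circle of radius 1 around the origin in 3 steps
arc = [point_rotate((1.0, 0.0), (0.0, 0.0), math.pi / 2, k, 3) for k in range(4)]
```

Calling it for k = 0..n traces the whole arc, which is how a buffer outline gets its rounded corners.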
[
{
"msg_contents": "As I set out to do the Windows semaphore thing, I notice it can get quite ugly.\n\nIn the current CVS tree, there is a pgsql/src/backend/port directory.\n\nI propose that this become a separate subproject and library. The reason I want\nthis is because the semaphore support, specifically multiple semaphores\nidentified by a single key, has to be implemented with shared memory and\nmultiple semaphores. (Under Windows)\n\nI also have to look at \"ownership\" issues with Windows processes and shared\nobjects. They may need to be owned by a more persistent/independent module, a\nDLL.\n\nBy creating a library like \"libpgport.a\" (or on Windows pgport.dll/pgport.lib)\nlittle has to be changed for various ports. If we don't have the library then\nthe discrete object files would need to be specified. The library will give the\nport writer a greater amount of flexibility.\n\nComments?\n",
"msg_date": "Tue, 07 May 2002 08:42:15 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "OK, lets talk portability."
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> In the current CVS directory, there is pgsql/src/backend/port directory.\n\n> I propose that this become a separate subproject and library.\n\nRight offhand, that seems a pointless exercise in relabeling code that's\ngoing to be the same either way. What's the actual value?\n\n> The reason I want this is because the semaphore support, specifically\n> multiple semaphores identified by a single key, has to be implemented\n> with shared memory and multiple semaphores. (Under Windows)\n\nI think you are confusing issues that are now private to the SysV sema\nimplementation with things that you really need to do for Windows.\nTake a look at port/posix_sema.c for a less cluttered view of the\nsemantics you actually need to support. (I don't suppose there's any\nchance that Gates & Co support POSIX semas, leaving you with no work?)\n\nBTW, I have been able to test the named-semas variant of posix_sema.c\non OS X, and it works. I don't have access to any platforms that\nsupport unnamed POSIX semas, which is too bad because that seems much\nthe preferable variant. Can anyone check it out?\n\n\t\t\tregards, tom lane\n\nPS: there's a trivial little test program in port/ipc_test.c; if you\nwant a \"smoke test\" that's simpler than a full Postgres build, try that.\n",
"msg_date": "Tue, 07 May 2002 09:31:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability. "
},
{
"msg_contents": "Tom Lane wrote:\n> BTW, I have been able to test the named-semas variant of posix_sema.c\n> on OS X, and it works. I don't have access to any platforms that\n> support unnamed POSIX semas, which is too bad because that seems much\n> the preferable variant. Can anyone check it out?\n\nI did, and yes I was confused. Sorry. Your posix implementation assumes that\nonly a single process will have access to the semaphore list for deletion is\nthis correct? I guess I need to know how much access the child processes need\nto have to the internal control structures, none? Some? All?\n\nAs I embark on my journey back to the dark side, here are my concerns for a\nnative Windows PostgreSQL. I think it is a whole lot more than originally\nthought.\n\n(The Matrix: Do not try to implement the fork() that would be impossible,\ninstead only try to realize the truth, there is no fork())\n\nCygwin does a lot of trickery to implement fork(), but it is not an OS level\nthing. Without fork() we would have to have a way to start a postgres backend\nand send it information about what it should be doing, and what it should be\ndoing it with.\n\nWith no fork(), information that would normally be copied on fork() is not\ncopied. Therefore, we need to know what that information is and how to\npropagate it to the child process (under windows)\n\nFiles, Windows does not have a native open,close,read,write ,lseek support.\nYes, they have some notion of low I/O for compatibility, _open, _close, etc.\nbut the flags and permissions are not identical. 
The low file API for Windows\nis CreateFile.\n\nSemaphores and shared memory are easy enough (I have written them in different\nforms before), depending on the level of opaqueness to child processes.\n\n\nThe voice in the back of my head says we need to define what the portability\nissues are:\n\nprocess control (fork()/spawn() etc.)\nfile operations (read, write, open, close, seek)\nIPC constructs (shared memory, semaphores)\nSystem interface (sync() etc)\n\nAny others?\n\nWe should either use Apache's APR and augment it with semaphores, or come up\nwith a doc which defines how these various things are handled. Obviously, it\nwill grow as we find things that don't work.\n",
"msg_date": "Tue, 07 May 2002 10:03:07 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I did, and yes I was confused. Sorry. Your posix implementation assumes that\n> only a single process will have access to the semaphore list for deletion is\n> this correct? I guess I need to know how much access the child processes need\n> to have to the internal control structures, none? Some? All?\n\nNone. The postmaster creates the semas, and the postmaster deletes 'em.\nThe children only use them via the PGSemaphore structs they find in\nshared memory. (The SysV implementation assumes that too btw.)\n\n> With no fork(), information that would normally be copied on fork() is not\n> copied. Therefore, we need to know what that information is and how to\n> propagate it to the child process (under windows)\n\nThis will be a royal mess. Three or four years ago, when PG actually\ndid fork/exec to start a backend, we were careful to arrange to pass\neverything the backend needed to know as command-line parameters (or\nelse keep it in shared memory). We have been busily breaking that\nseparation ever since we went over to fork-no-exec, however. In the\ncurrent scheme of things there is no chance whatever of a backend\nworking unless it inherits the global/static variables of the\npostmaster.\n\nAnd no, I don't want to undo those changes. Especially not if the\nonly reason for it is to not have to use Cygwin on Windows. Most\nof these changes made the startup code substantially simpler,\nfaster, and more reliable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 May 2002 10:32:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability. "
},
{
"msg_contents": "Tom Lane wrote:\n> And no, I don't want to undo those changes. Especially not if the\n> only reason for it is to not have to use Cygwin on Windows. Most\n> of these changes made the startup code substantially simpler,\n> faster, and more reliable.\n\nThen I think the notion of a pure Windows version is dead in the water. Writing\na fork()-like API for Windows is, of course, doable as evidenced by cygwin, and\nfrom a general theory seems like a pretty straight forward thing to do (with a\nfew low level tricks of course) but the details are pretty scary.\n\nHas anyone done a profile of PostgreSQL running on a windows box and identified\ncygwin bottlenecks which we could augment with native code?\n",
"msg_date": "Tue, 07 May 2002 10:44:08 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On Tue, 7 May 2002, mlw wrote:\n\n> Tom Lane wrote:\n> > And no, I don't want to undo those changes. Especially not if the\n> > only reason for it is to not have to use Cygwin on Windows. Most\n> > of these changes made the startup code substantially simpler,\n> > faster, and more reliable.\n>\n> Then I think the notion of a pure Windows version is dead in the water.\n> Writing a fork()-like API for Windows is, of course, doable as evidenced\n> by cygwin, and from a general theory seems like a pretty straight\n> forward thing to do (with a few low level tricks of course) but the\n> details are pretty scary.\n\nHow is Apache doing this? I believe they do allow the pre-forked model to\nwork, so how are they getting around those limitations?\n\n\n",
"msg_date": "Tue, 7 May 2002 11:50:50 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> On Tue, 7 May 2002, mlw wrote:\n> \n> > Tom Lane wrote:\n> > > And no, I don't want to undo those changes. Especially not if the\n> > > only reason for it is to not have to use Cygwin on Windows. Most\n> > > of these changes made the startup code substantially simpler,\n> > > faster, and more reliable.\n> >\n> > Then I think the notion of a pure Windows version is dead in the water.\n> > Writing a fork()-like API for Windows is, of course, doable as evidenced\n> > by cygwin, and from a general theory seems like a pretty straight\n> > forward thing to do (with a few low level tricks of course) but the\n> > details are pretty scary.\n> \n> How is Apache doing this? I believe they do allow the pre-forked model to\n> work, so how are they getting around those limitations?\n\nApache and PostgreSQL are quite different in their requirements of shared\nmemory. Apache (2.x) simply uses CreateProcess and passes duplicate file\nhandles.\n",
"msg_date": "Tue, 07 May 2002 10:58:29 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On 7 May 2002, Hannu Krosing wrote:\n\n> On Tue, 2002-05-07 at 15:31, Tom Lane wrote:\n> > mlw <markw@mohawksoft.com> writes:\n> > > In the current CVS directory, there is pgsql/src/backend/port directory.\n> >\n> > > I propose that this become a separate subproject and library.\n> >\n> > Right offhand, that seems a pointless exercise in relabeling code that's\n> > going to be the same either way. What's the actual value?\n> >\n> > > The reason I want this is because the semaphore support, specifically\n> > > multiple semaphores identified by a single key, has to be implemented\n> > > with shared memory and multiple semaphores. (Under Windows)\n> >\n> > I think you are confusing issues that are now private to the SysV sema\n> > implementation with things that you really need to do for Windows.\n> > Take a look at port/posix_sema.c for a less cluttered view of the\n> > semantics you actually need to support. (I don't suppose there's any\n> > chance that Gates & Co support POSIX semas, leaving you with no work?)\n>\n> A quick google search came up with\n>\n> http://sources.redhat.com/pthreads-win32/announcement.html\n>\n>\n> Unfortunately it seems to be the \"wrong kind of free\" software:\n>\n> Pthreads-win32 is free software, distributed under the GNU Lesser\n> General Public License (LGPL).\n>\n> Or can we accept dependancies on LGPL libs for some ports.\n\nWhat someone installs on their Windows box is their problem ... doesn't\nmean we can't make use of it :) Its not something that will be part of\nthe distribution itself, only something that needs to be availble :)\n\n\n",
"msg_date": "Tue, 7 May 2002 13:10:13 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On Tue, 2002-05-07 at 15:31, Tom Lane wrote:\n> mlw <markw@mohawksoft.com> writes:\n> > In the current CVS directory, there is pgsql/src/backend/port directory.\n> \n> > I propose that this become a separate subproject and library.\n> \n> Right offhand, that seems a pointless exercise in relabeling code that's\n> going to be the same either way. What's the actual value?\n> \n> > The reason I want this is because the semaphore support, specifically\n> > multiple semaphores identified by a single key, has to be implemented\n> > with shared memory and multiple semaphores. (Under Windows)\n> \n> I think you are confusing issues that are now private to the SysV sema\n> implementation with things that you really need to do for Windows.\n> Take a look at port/posix_sema.c for a less cluttered view of the\n> semantics you actually need to support. (I don't suppose there's any\n> chance that Gates & Co support POSIX semas, leaving you with no work?)\n\nA quick google search acme up with\n\n http://sources.redhat.com/pthreads-win32/announcement.html\n\n\n--------------\nHannu\n\n",
"msg_date": "07 May 2002 18:57:42 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On Tue, 2002-05-07 at 15:31, Tom Lane wrote:\n> mlw <markw@mohawksoft.com> writes:\n> > In the current CVS directory, there is pgsql/src/backend/port directory.\n> \n> > I propose that this become a separate subproject and library.\n> \n> Right offhand, that seems a pointless exercise in relabeling code that's\n> going to be the same either way. What's the actual value?\n> \n> > The reason I want this is because the semaphore support, specifically\n> > multiple semaphores identified by a single key, has to be implemented\n> > with shared memory and multiple semaphores. (Under Windows)\n> \n> I think you are confusing issues that are now private to the SysV sema\n> implementation with things that you really need to do for Windows.\n> Take a look at port/posix_sema.c for a less cluttered view of the\n> semantics you actually need to support. (I don't suppose there's any\n> chance that Gates & Co support POSIX semas, leaving you with no work?)\n\nA quick google search came up with\n\n http://sources.redhat.com/pthreads-win32/announcement.html\n\n\nUnfortunately it seems to be the \"wrong kind of free\" software: \n\nPthreads-win32 is free software, distributed under the GNU Lesser\nGeneral Public License (LGPL).\n\nOr can we accept dependancies on LGPL libs for some ports.\n\n--------------\nHannu\n\n",
"msg_date": "07 May 2002 19:00:40 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On 7 May 2002, Hannu Krosing wrote:\n\n> On Tue, 2002-05-07 at 15:31, Tom Lane wrote:\n> > mlw <markw@mohawksoft.com> writes:\n> > > In the current CVS directory, there is pgsql/src/backend/port directory.\n> >\n> > > I propose that this become a separate subproject and library.\n> >\n> > Right offhand, that seems a pointless exercise in relabeling code that's\n> > going to be the same either way. What's the actual value?\n> >\n> > > The reason I want this is because the semaphore support, specifically\n> > > multiple semaphores identified by a single key, has to be implemented\n> > > with shared memory and multiple semaphores. (Under Windows)\n> >\n> > I think you are confusing issues that are now private to the SysV sema\n> > implementation with things that you really need to do for Windows.\n> > Take a look at port/posix_sema.c for a less cluttered view of the\n> > semantics you actually need to support. (I don't suppose there's any\n> > chance that Gates & Co support POSIX semas, leaving you with no work?)\n>\n> A quick google search acme up with\n>\n> http://sources.redhat.com/pthreads-win32/announcement.html\n\nDamn ... doesn't implement fork(), but does implement semaphores :)\nSooooo close :)\n\n\n Semaphores\n ---------------------------\n sem_init\n sem_destroy\n sem_post\n sem_wait\n sem_trywait\n sem_timedwait\n sem_open (returns an error ENOSYS)\n sem_close (returns an error ENOSYS)\n sem_unlink (returns an error ENOSYS)\n\n",
"msg_date": "Tue, 7 May 2002 14:45:52 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> On 7 May 2002, Hannu Krosing wrote:\n> \n> > On Tue, 2002-05-07 at 15:31, Tom Lane wrote:\n> > > mlw <markw@mohawksoft.com> writes:\n> > > > In the current CVS directory, there is pgsql/src/backend/port directory.\n> > >\n> > > > I propose that this become a separate subproject and library.\n> > >\n> > > Right offhand, that seems a pointless exercise in relabeling code that's\n> > > going to be the same either way. What's the actual value?\n> > >\n> > > > The reason I want this is because the semaphore support, specifically\n> > > > multiple semaphores identified by a single key, has to be implemented\n> > > > with shared memory and multiple semaphores. (Under Windows)\n> > >\n> > > I think you are confusing issues that are now private to the SysV sema\n> > > implementation with things that you really need to do for Windows.\n> > > Take a look at port/posix_sema.c for a less cluttered view of the\n> > > semantics you actually need to support. (I don't suppose there's any\n> > > chance that Gates & Co support POSIX semas, leaving you with no work?)\n> >\n> > A quick google search acme up with\n> >\n> > http://sources.redhat.com/pthreads-win32/announcement.html\n> \n> Damn ... doesn't implement fork(), but does implement semaphores :)\n> Sooooo close :)\n\nWindows has semaphores, and looking at Tom's API, this is the least of our\nproblems. \n\nIf we can come up with a fork() free PostgreSQL, the rest is easy.\n\nWe need to come up with a set of macros and function API that handle:\n\nNative file operations.\nProcess control \nShared memory, semaphores, and other IPC\nSockets (Some Windows specific crap will need to be written, but the berkeley\nAPI is fine)\nSystem interface, sync(), fdatasync() and such.\n",
"msg_date": "Tue, 07 May 2002 13:47:10 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On Tue, 2002-05-07 at 19:44, mlw wrote:\n> Tom Lane wrote:\n> > And no, I don't want to undo those changes. Especially not if the\n> > only reason for it is to not have to use Cygwin on Windows. Most\n> > of these changes made the startup code substantially simpler,\n> > faster, and more reliable.\n> \n> Then I think the notion of a pure Windows version is dead in the water. Writing\n> a fork()-like API for Windows is, of course, doable as evidenced by cygwin, and\n> from a general theory seems like a pretty straight forward thing to do (with a\n> few low level tricks of course) but the details are pretty scary.\n\nThere is still another way - use threads. \n\nThere you have of course the opposite problem - to determine what to\n_not_ share, but AFAIK this has been done already at least once.\n\nAnd there seems to be some consensus that doing things that would\neventually make it easier to use threaded model will probably increase\ncode quality in general.\n\n---------------\nHannu\n\n\n",
"msg_date": "07 May 2002 23:28:39 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "> mlw <markw@mohawksoft.com> writes:\n> > In the current CVS directory, there is pgsql/src/backend/port directory.\n>\n> BTW, I have been able to test the named-semas variant of posix_sema.c\n> on OS X, and it works. I don't have access to any platforms that\n> support unnamed POSIX semas, which is too bad because that seems much\n> the preferable variant. Can anyone check it out?\n>\n\nThey are supported on QNX, so I will check it. They are also faster than\nnamed ones.\n\n-- igor\n\n\n",
"msg_date": "Tue, 7 May 2002 15:55:50 -0500",
"msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability. "
}
] |
[
{
"msg_contents": "Doesn't appear that pg_sema is picking up the semaphore implementation\non FreeBSD.\n\n\nbash-2.05a$ uname -a\nFreeBSD knight.zort.ca 4.5-RELEASE FreeBSD 4.5-RELEASE #3: Sun Feb 3\n22:26:40 EST 2002 \nroot@knight.barchord.com:/usr/obj/usr/src/sys/KNIGHT i386\n\n\n\n\nIn file included from ../../../../src/include/storage/proc.h:20,\n from varsup.c:19:\n../../../../src/include/storage/pg_sema.h:60: syntax error before `*'\n../../../../src/include/storage/pg_sema.h:60: warning: type defaults to\n`int' in declaration of `PGSemaphore'\n../../../../src/include/storage/pg_sema.h:60: warning: data definition\nhas no type or storage class\n../../../../src/include/storage/pg_sema.h:66: syntax error before `sema'\n../../../../src/include/storage/pg_sema.h:68: syntax error before `sema'\n../../../../src/include/storage/pg_sema.h:70: syntax error before `sema'\n../../../../src/include/storage/pg_sema.h:72: syntax error before `sema'\n../../../../src/include/storage/pg_sema.h:74: syntax error before `sema'\nIn file included from varsup.c:19:\n../../../../src/include/storage/proc.h:36: syntax error before\n`PGSemaphoreData'\ngmake[4]: *** [varsup.o] Error 1\n\n\n\n",
"msg_date": "07 May 2002 09:41:02 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "pg_sema.h"
},
{
"msg_contents": "Figured this one out a while ago.\n\nConfigure was running with a --no-create (DOH!)\n--\nRod\n----- Original Message -----\nFrom: \"Rod Taylor\" <rbt@zort.ca>\nTo: <pgsql-hackers@postgresql.org>\nSent: Tuesday, May 07, 2002 9:41 AM\nSubject: [HACKERS] pg_sema.h\n\n\n> Doesn't appear that pg_sema is picking up the semaphore\nimplementation\n> on FreeBSD.\n>\n>\n> bash-2.05a$ uname -a\n> FreeBSD knight.zort.ca 4.5-RELEASE FreeBSD 4.5-RELEASE #3: Sun Feb\n3\n> 22:26:40 EST 2002\n> root@knight.barchord.com:/usr/obj/usr/src/sys/KNIGHT i386\n>\n>\n>\n>\n> In file included from ../../../../src/include/storage/proc.h:20,\n> from varsup.c:19:\n> ../../../../src/include/storage/pg_sema.h:60: syntax error before\n`*'\n> ../../../../src/include/storage/pg_sema.h:60: warning: type defaults\nto\n> `int' in declaration of `PGSemaphore'\n> ../../../../src/include/storage/pg_sema.h:60: warning: data\ndefinition\n> has no type or storage class\n> ../../../../src/include/storage/pg_sema.h:66: syntax error before\n`sema'\n> ../../../../src/include/storage/pg_sema.h:68: syntax error before\n`sema'\n> ../../../../src/include/storage/pg_sema.h:70: syntax error before\n`sema'\n> ../../../../src/include/storage/pg_sema.h:72: syntax error before\n`sema'\n> ../../../../src/include/storage/pg_sema.h:74: syntax error before\n`sema'\n> In file included from varsup.c:19:\n> ../../../../src/include/storage/proc.h:36: syntax error before\n> `PGSemaphoreData'\n> gmake[4]: *** [varsup.o] Error 1\n>\n>\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Tue, 7 May 2002 12:19:59 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_sema.h"
},
{
"msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Doesn't appear that pg_sema is picking up the semaphore implementation\n> on FreeBSD.\n\n> In file included from ../../../../src/include/storage/proc.h:20,\n> from varsup.c:19:\n> ../../../../src/include/storage/pg_sema.h:60: syntax error before `*'\n> ../../../../src/include/storage/pg_sema.h:60: warning: type defaults to\n> `int' in declaration of `PGSemaphore'\n\nDid you rerun configure?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 May 2002 12:41:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_sema.h "
}
] |
[
{
"msg_contents": "Michael Enke (michael.enke@wincor-nixdorf.com) reports a bug with a severity of 2\nThe lower the number the more severe it is.\n\nShort Description\nlower()/upper() bug on ->multibyte<- DB\n\nLong Description\nOS: Linux Kernel 2.4.4, PostgreSQL version 7.2.1\nlower() and upper() doesn't work like expected for multibyte\ndatabases. It is working fine for one-byte encoding.\nThe behaviour can be reproduced as follows:\nat initdb: LC_CTYPE was set to de_DE\ncreatedb -E UTF-8 name\nexport PGCLIENTENCODING=LATIN1\npsql -U name\n--------------------------------------------------\n=> select lower('�'); -- german umlaut A, capital\nERROR: Could not convert UTF-8 to ISO8859-1\n-- I expected to see: � german umlaut a, lower case\n--------------------------------------------------\n=> select lower('�'); -- german umlaut a, lower case\nERROR: Could not convert UTF-8 to ISO8859-1\n-- I expected to see: � german umlaut a, lower case\n--------------------------------------------------\n=> select upper('�'); -- it doesn't translate\n�\n-- I expected to see: �\n--------------------------------------------------\n=> select upper('�'); -- this works fine\n�\n--------------------------------------------------\n\nThe same happens to � and � (O umlaut, U umlaut)\n\nIf you want to reproduce this and don't have �/� on your keyboard,\nyou can create a table with one column, type varchar(1) (on a MB DB).\ncreate a file with following input:\nae is \\u00e4\nAE is \\u00c4\nfrom java use the command:\nnative2ascii -reverse -utf8 <this-file> <new-file>\nIn <new-file> you will see:\nin the first line 2 bytes: A(with tilde on top) and Euro Symbol,\nin the second line 2 byte: A(with tilde on top) and a dotted box\nunset PGCLIENTENCODING, call psql:\ninsert into table values('<copy and paste first two bytes>');\ninsert into table values('<copy and paste second two bytes>');\nexport PGCLIENTENCODING=LATIN1\npsql: select * from table; will show you the a-umlaut and 
A-umlaut.\n\nSample Code\n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Tue, 7 May 2002 10:51:12 -0400 (EDT)",
"msg_from": "pgsql-bugs@postgresql.org",
"msg_from_op": true,
"msg_subject": "Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "> Short Description\n> lower()/upper() bug on ->multibyte<- DB\n> \n> Long Description\n> OS: Linux Kernel 2.4.4, PostgreSQL version 7.2.1\n> lower() and upper() doesn't work like expected for multibyte\n> databases. It is working fine for one-byte encoding.\n> The behaviour can be reproduced as follows:\n> at initdb: LC_CTYPE was set to de_DE\n> createdb -E UTF-8 name\n> export PGCLIENTENCODING=LATIN1\n> psql -U name\n> --------------------------------------------------\n> => select lower('D'); -- german umlaut A, capital\n> ERROR: Could not convert UTF-8 to ISO8859-1\n> -- I expected to see: d german umlaut a, lower case\n\nThis is not a bug but an expected behavior. Locale support expects an\ninput string is encoded in ISO-8859-1 (because you set locale to\nde_DE) while you supply UTF-8. Try an explicit encoding converion\nfunction:\n\nselect lower(convert('D'), 'LATIN1');\n\nNote that '\\304' must be an actual german umlaut A, capital character,\nnot an octal espcaped notion.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 08 May 2002 12:09:47 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "Hello,\n\n> This is not a bug but an expected behavior. Locale support expects an\n> input string is encoded in ISO-8859-1 (because you set locale to\n> de_DE) while you supply UTF-8.\n\nWhat is the difference between an insert of string and a call to a function with a string argument?\nInsert works well, output also, only the functions lower(), upper() and initcap() make problems.\nThis is also ok: select a from a where a = 'X'; -- X is german umlaut a, lowercase / german umlaut A, capital\n\n> Try an explicit encoding converion function:\n> \n> select lower(convert('D'), 'LATIN1');\n\nI tried: select lower(convert('X'), 'LATIN1'); -- X is german umlaut A, capital\nbut the result was the same:\nERROR: Could not convert UTF-8 to ISO8859-1\n\nI than compiled postgres without locale support. I created a DB with -E UTF-8\nI created a table and inserted UTF-8 char \"0x00C4\" (german umlaut A, capital)\nI called \"select lower(a) from a;\"\nNow, without locale support, I didn't get the error but I also didn't get\nthe right result. The right result would be UTF-8 char \"0x00E4\" (german umlaut a, lower case)\n!independent of the locale!\n\nRegards,\nMichael Enke\n\nTatsuo Ishii wrote:\n> \n> > Short Description\n> > lower()/upper() bug on ->multibyte<- DB\n> >\n> > Long Description\n> > OS: Linux Kernel 2.4.4, PostgreSQL version 7.2.1\n> > lower() and upper() doesn't work like expected for multibyte\n> > databases. It is working fine for one-byte encoding.\n> > The behaviour can be reproduced as follows:\n> > at initdb: LC_CTYPE was set to de_DE\n> > createdb -E UTF-8 name\n> > export PGCLIENTENCODING=LATIN1\n> > psql -U name\n> > --------------------------------------------------\n> > => select lower('D'); -- german umlaut A, capital\n> > ERROR: Could not convert UTF-8 to ISO8859-1\n> > -- I expected to see: d german umlaut a, lower case\n> \n> This is not a bug but an expected behavior. 
Locale support expects the\n> input string to be encoded in ISO-8859-1 (because you set locale to\n> de_DE) while you supply UTF-8. Try an explicit encoding conversion\n> function:\n> \n> select lower(convert('Ä'), 'LATIN1');\n> \n> Note that '\\304' must be an actual german umlaut A, capital character,\n> not an octal escaped notation.\n> --\n> Tatsuo Ishii\n",
"msg_date": "Wed, 08 May 2002 11:15:07 +0200",
"msg_from": "\"Enke, Michael\" <michael.enke@wincor-nixdorf.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "> > This is not a bug but an expected behavior. Locale support expects an\n> > input string is encoded in ISO-8859-1 (because you set locale to\n> > de_DE) while you supply UTF-8.\n> \n> What is the difference between an insert of string and a call to a function with a string argument?\n\nYou input \"select lower('X')\" as ISO-8859-1 encoded, then it is sent\nto the backend. The backend convert it to UTF-8. Then lower() is\ncalled with an UTF-8 string input. lower() calls tolower() which\nexpects the input being ISO-8859-1 since you set locale to de_DE.\nThis is the source of the problem.\n\n> > select lower(convert('D'), 'LATIN1');\n> \n> I tried: select lower(convert('X'), 'LATIN1'); -- X is german umlaut A, capital\n> but the result was the same:\n> ERROR: Could not convert UTF-8 to ISO8859-1\n\nOops. That should be:\n\nselect convert(lower(convert('X', 'LATIN1')),'LATIN1','UNICODE');\n\nIt looks ugly, but works.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 08 May 2002 21:30:01 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> > What is the difference between an insert of string and a call to a function with a string argument?\n> \n> You input \"select lower('X')\" as ISO-8859-1 encoded, then it is sent\n> to the backend. The backend convert it to UTF-8. Then lower() is\n> called with an UTF-8 string input. lower() calls tolower() which\n> expects the input being ISO-8859-1 since you set locale to de_DE.\n> This is the source of the problem.\n\nExcuse me, this seems not the be the source of the problem.\nIf I call select lower(table_col) from table;\nthen I also don't get back the lower case character but the original case if it is a multibyte char.\nThere I have no input from the client to the backend.\nI did now also remove all below data directory, exported LC_CTYPE to de_DE.utf8, made an initdb.\nWith pg_controldata I see LC_CTYPE is de_DE.utf8\nNow I no longer get the ERROR: cannot convert UTF-8 to ISO8859-1, but the translation doesn't work:\nMB chars are not translated, I get back the original case.\nBTW: mbsrtowcs(), wctrans(), towctrans(), wcsrtombs() makes the job with de_DE.utf8.\n\n> Oops. That should be:\n> \n> select convert(lower(convert('X', 'LATIN1')),'LATIN1','UNICODE');\n> It looks ugly, but works.\n\nSorry, it doesn't work. The same here, I get back the case I put in at X, not the lower case.\n\nRegards,\nMichael\n",
"msg_date": "Wed, 08 May 2002 16:54:55 +0200",
"msg_from": "\"Enke, Michael\" <michael.enke@wincor-nixdorf.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "> > You input \"select lower('X')\" as ISO-8859-1 encoded, then it is sent\n> > to the backend. The backend convert it to UTF-8. Then lower() is\n> > called with an UTF-8 string input. lower() calls tolower() which\n> > expects the input being ISO-8859-1 since you set locale to de_DE.\n> > This is the source of the problem.\n> \n> Excuse me, this seems not the be the source of the problem.\n> If I call select lower(table_col) from table;\n> then I also don't get back the lower case character but the original case if it is a multibyte char.\n\nThis doesn't work by the same reason above. The backend extracts\ntable_col from the table which is encoded in UTF-8, while lower()\nexpects ISO-8859-1. Try:\n\nselect convert(lower(convert(table_col, 'LATIN1')),'LATIN1','UNICODE')\nfrom your_table;\n\n> I did now also remove all below data directory, exported LC_CTYPE to de_DE.utf8, made an initdb.\n> With pg_controldata I see LC_CTYPE is de_DE.utf8\n> Now I no longer get the ERROR: cannot convert UTF-8 to ISO8859-1, but the translation doesn't work:\n> MB chars are not translated, I get back the original case.\n\nI don't think using de_DE.utf8 helps. The locale support just calls\ntolower(), which is not be able to handle multibyte chars.\n\n> > Oops. That should be:\n> > \n> > select convert(lower(convert('X', 'LATIN1')),'LATIN1','UNICODE');\n> > It looks ugly, but works.\n> \n> Sorry, it doesn't work. 
The same here, I get back the case I put in at X, not the lower case.\n\nAre you sure to use de_DE locale (not de_DE.utf8)?\nIncluded are sample scripts that work for me using the de_DE locale.\nHere is also my pg_controldata output.\n\n$ pg_controldata\npg_control version number: 71\nCatalog version number: 200201121\nDatabase state: IN_PRODUCTION\npg_control last modified: Thu May 9 08:37:20 2002\nCurrent log file id: 0\nNext log file segment: 1\nLatest checkpoint location: 0/18C860\nPrior checkpoint location: 0/1503A0\nLatest checkpoint's REDO location: 0/172054\nLatest checkpoint's UNDO location: 0/0\nLatest checkpoint's StartUpID: 8\nLatest checkpoint's NextXID: 217\nLatest checkpoint's NextOID: 24748\nTime of latest checkpoint: Thu May 9 08:37:17 2002\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nLC_COLLATE: de_DE\nLC_CTYPE: de_DE\n--\nTatsuo Ishii",
"msg_date": "Thu, 09 May 2002 10:06:13 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "> Are you sure to use de_DE locale (not de_DE.utf8)?\n> Included are sample scripts being work with me using de_DE locale.\n> Here is also my pg_controldata output.\n\nSorry, forgot to include the execution results:\n--\nTatsuo Ishii",
"msg_date": "Thu, 09 May 2002 10:27:01 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> I don't think using de_DE.utf8 helps. The locale support just calls\n> tolower(), which is not be able to handle multibyte chars.\n> \n> > > Oops. That should be:\n> > >\n> > > select convert(lower(convert('X', 'LATIN1')),'LATIN1','UNICODE');\n> > > It looks ugly, but works.\n> >\n> > Sorry, it doesn't work. The same here, I get back the case I put in at X, not the lower case.\n> \n> Are you sure to use de_DE locale (not de_DE.utf8)?\n> Included are sample scripts being work with me using de_DE locale.\n\nOk, this is working now (I can't reproduce why it didn't work the first time).\nIs it planned to implement this so that I can write lower()/upper() for multibyte\naccording to the SQL standard (without convert)?\nI could do it if you tell me where the final tolower()/toupper() happens\n(but not before the middle of June).\n\nRegards,\nMichael\n",
"msg_date": "Fri, 10 May 2002 12:27:45 +0200",
"msg_from": "\"Enke, Michael\" <michael.enke@wincor-nixdorf.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "[Cc:ed to hackers]\n\n(trying select convert(lower(convert('X', 'LATIN1')),'LATIN1','UNICODE');)\n\n> Ok, this is working now (I cann't reproduce why not at the first time).\n\nGood.\n\n> Is it planned to implement it so that I can write lower()/ upper() for multibyte\n> according to SQL standard (without convert)?\n\nSQL standard? The SQL standard says nothing about locale. So making\nlower() (and others) \"locale aware\" is quite separate from the SQL\nstandard's point of view. Of course this does not mean \"locale\nsupport\" should not be a part of PostgreSQL's implementation of\nSQL. However, we should be aware of the limitations of \"locale support\"\n(as well as multibyte support). They are just a stopgap until CREATE\nCHARACTER SET etc. is implemented, IMO.\n\n> I could do it if you tell me where the final tolower()/toupper() happens.\n> (but not before middle of June).\n\nAs a short-term solution, hiding convert() from users might\nbe a good idea (what I mean here is a kind of automatic execution of\nconvert()). The hardest part is that there is no way to find the\nrelationship between a particular locale and its encoding. For example,\nyou know that for the de_DE locale the LATIN1 encoding is appropriate,\nbut PostgreSQL does not.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 11 May 2002 10:36:53 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> [Cc:ed to hackers]\n> \n> (trying select convert(lower(convert('X', 'LATIN1')),'LATIN1','UNICODE');)\n> \n> > Ok, this is working now (I cann't reproduce why not at the first time).\n> \n> Good.\n> \n> > Is it planned to implement it so that I can write lower()/ upper() for multibyte\n> > according to SQL standard (without convert)?\n> \n> SQL standard? The SQL standard says nothing about locale. So making\n> lower() (and others) \"locale aware\" is far different from the SQL\n> standard of point of view. Of course this does not mean \"locale\n> support\" is should not be a part of PostgreSQL's implementation of\n> SQL. However, we should be aware the limitation of \"locale support\"\n> (as well as multibyte support). They are just the stopgap util CREATE\n> CHARACTER SET etc. is implemnted IMO.\n> \n> > I could do it if you tell me where the final tolower()/toupper() happens.\n> > (but not before middle of June).\n> \n> For the short term solution making convert() hiding from users might\n> be a good idea (what I mean here is kind of auto execution of\n> convert()). The hardest part is there's no idea how we could find a\n> relationship bewteen particular locale and the encoding. For example,\n> you know that for de_DE locale using LATIN1 encoding is appropreate,\n> but PostgreSQL does not.\n\nI think it is really not hard to do this for UTF-8. I don't have to know the\nrelation between the locale and the encoding. Look at this:\nWe can use the LC_CTYPE from pg_controldata or alternatively the LC_CTYPE\nat server startup. For nearly every locale (de_DE, ja_JP, ...) there exists\nalso a locale *.utf8 (de_DE.utf8, ja_JP.utf8, ...) at least for the actual Linux glibc.\nWe don't need to know more than this. If we call\nsetlocale(LC_CTYPE, <value of LC_CTYPE extended with .utf8 if not already given>)\nthen glibc is aware of doing all the conversions. 
I attach a small demo program\nwhich sets the locale ja_JP.utf8 and is able to translate the German umlaut A (upper) to\nthe German umlaut a (lower).\nWhat I don't know (I have to ask a glibc developer) is:\nwhy do there exist dozens of *.utf8 locales, and what is the difference\nbetween all the /usr/lib/locale/*.utf8/LC_CTYPE files?\nBut for all existing *.utf8 locales, the conversion of German umlauts works properly.\n\nRegards,\nMichael\n\nPS: I'm not in my office for the next 3 weeks and therefore not able to read my mails.",
"msg_date": "Mon, 13 May 2002 11:57:21 +0200",
"msg_from": "\"Enke, Michael\" <michael.enke@wincor-nixdorf.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #659: lower()/upper() bug on ->multibyte<- DB"
},
{
"msg_contents": "> I think it is really not hard to do this for UTF-8. I don't have to know the\n> relation between the locale and the encoding. Look at this:\n> We can use the LC_CTYPE from pg_controldata or alternatively the LC_CTYPE\n> at server startup. For nearly every locale (de_DE, ja_JP, ...) there exists\n> also a locale *.utf8 (de_DE.utf8, ja_JP.utf8, ...) at least for the actual Linux glibc.\n\nMy Linux box does not have *.utf8 locales at all. Probably not so many\nplatforms have them up to now, I guess.\n\n> We don't need to know more than this. If we call\n> setlocale(LC_CTYPE, <value of LC_CTYPE extended with .utf8 if not already given>)\n> then glibc is aware of doing all the conversions. I attach a small demo program\n> which set the locale ja_JP.utf8 and is able to translate german umlaut A (upper) to\n> german umlaut a (lower).\n\nInteresting idea, but the problem is that we have to decide on exactly\none locale before initdb. In my understanding, users willing to use\nUnicode (UTF-8) tend to use multiple languages. This is natural since\nUnicode claims it can handle several languages. For example, a user\nmight want to have a table like this in a UTF-8 database:\n\ncreate table t1(\n english text,\t-- English message\n germany text,\t-- German message\n japanese text\t-- Japanese message\n);\n\nIf you have set the locale to, say, de_DE, then:\n\nselect lower(japanese) from t1;\n\nwould be executed in the de_DE.utf8 locale, and I doubt it produces any\nmeaningful results for Japanese.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 14 May 2002 10:29:54 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
},
{
"msg_contents": "On Tuesday 14 May 2002 03:29, Tatsuo Ishii wrote:\n> For example, user\n> might want to have a table like this in a UTF-8 database:\n>\n> create table t1(\n> english text, -- English message\n> germany text, -- Germany message\n> japanese text -- Japanese message\n> );\n\nOr just \nCREATE table t1(\n text_locale varchar, \n text_content text\n);\nwhich is my case.\nJust my 2 cents.\n/Jean-Michel POURE\n",
"msg_date": "Tue, 14 May 2002 08:35:12 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
},
{
"msg_contents": "> > My Linux box does not have *.utf8 locales at all. Probably not so many\n> > platforms have them up to now, I guess.\n> \n> What linux do you use ?\n\nA kind of variant of RH6.2.\n\n> At least newer Redhat Linuxen have them and I suspect that all newer\n> glibc's are capable of using them.\n\nI guess many RH6.2 or RH6.2-based systems are still surviving...\n\n> > If you have set the local to, say de_DE, then:\n> > \n> > select lower(japanese) from t1;\n> >\n> > would be executed in de_DE.utf8 locale, and I doubt it produces any\n> > meaningfull results for Japanese.\n> \n> IIRC it may, as I think that it will include full UTF8 upper/lower\n> tables, at least on Linux.\n> \n> For example en_US will produce right upper/lower results for Estonian,\n> though collation is off and some chars are missing if using iso-8859-1.\n\nAre you sure that, say, the de_DE.utf8 locale produces meaningful results\nfor any other language? If so, why are there so many *.utf8 locales?\n\n> btw, does Japanese language have distinct upper and lower case letters ?\n\nThere are \"full width alphabets\" in Japanese. Those include not only\nASCII letters but also some European characters.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 14 May 2002 16:52:55 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
},
{
"msg_contents": "On Tue, 2002-05-14 at 03:29, Tatsuo Ishii wrote:\n> > I think it is really not hard to do this for UTF-8. I don't have to know the\n> > relation between the locale and the encoding. Look at this:\n> > We can use the LC_CTYPE from pg_controldata or alternatively the LC_CTYPE\n> > at server startup. For nearly every locale (de_DE, ja_JP, ...) there exists\n> > also a locale *.utf8 (de_DE.utf8, ja_JP.utf8, ...) at least for the actual Linux glibc.\n> \n> My Linux box does not have *.utf8 locales at all. Probably not so many\n> platforms have them up to now, I guess.\n\nWhat linux do you use ?\n\nAt least newer Redhat Linuxen have them and I suspect that all newer\nglibc's are capable of using them.\n\n> \n> > We don't need to know more than this. If we call\n> > setlocale(LC_CTYPE, <value of LC_CTYPE extended with .utf8 if not already given>)\n> > then glibc is aware of doing all the conversions. I attach a small demo program\n> > which set the locale ja_JP.utf8 and is able to translate german umlaut A (upper) to\n> > german umlaut a (lower).\n> \n> Interesting idea, but the problem is we have to decide to use exactly\n> one locale before initdb. In my understanding, users willing to use\n> Unicode (UTF-8) tend to use multiple languages. This is natural since\n> Unicode claims it can handle several languages. 
For example, user\n> might want to have a table like this in a UTF-8 database:\n> \n> create table t1(\n> english text,\t-- English message\n> germany text,\t-- Germany message\n> japanese text\t-- Japanese message\n> );\n> \n> If you have set the local to, say de_DE, then:\n> \n> select lower(japanese) from t1;\n>\n> would be executed in de_DE.utf8 locale, and I doubt it produces any\n> meaningfull results for Japanese.\n\nIIRC it may, as I think that it will include full UTF8 upper/lower\ntables, at least on Linux.\n\nFor example en_US will produce right upper/lower results for Estonian,\nthough collation is off and some chars are missing if using iso-8859-1.\n\nbtw, does Japanese language have distinct upper and lower case letters ?\n\n--------------\nHannu\n\n\n",
"msg_date": "14 May 2002 10:35:44 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
},
{
"msg_contents": "> > Are you sure that say, de_DE.utf8 locale produce meaningful results\n> > for any other languages?\n> \n> there are often subtle differences, but upper() and lower() are much\n> more likely to produce right results than collation order or date/money\n> formats.\n> \n> in fact seem to be only 10 distinct LC_CTYPE files for ~110 locales with\n> most european-originated languages having the same and only \n> tr_TR, zh_??, fr_??,da_DK, de_??, ro_RO, sr_YU, ja_JP and ko_KR having\n> their own.\n\nI see. So the remaining problem would be how to detect the existence\nof *.utf8 collation at the configure time.\n\n> > If so, why are there so many *.utf8 locales?\n> \n> As I understand it, a locale should cover all locale-specific issues\n> \n> > > btw, does Japanese language have distinct upper and lower case letters ?\n> > \n> > There are \"full width alphabets\" in Japanese. Thoes include not only\n> > ASCII letters but also some European characters.\n> \n> Are these ASCII and European characters uppercased in some\n> Japanese-specific way ?\n\nProbably not, but I'm not sure since my Linux box does not have *.utf8\nlocales.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 14 May 2002 22:18:52 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
},
{
"msg_contents": "On Tue, 2002-05-14 at 09:52, Tatsuo Ishii wrote:\n> \n> Are you sure that say, de_DE.utf8 locale produce meaningful results\n> for any other languages?\n\nThere are often subtle differences, but upper() and lower() are much\nmore likely to produce right results than collation order or date/money\nformats.\n\nIn fact there seem to be only 10 distinct LC_CTYPE files for ~110 locales, with\nmost European-originated languages sharing the same one and only\ntr_TR, zh_??, fr_??, da_DK, de_??, ro_RO, sr_YU, ja_JP and ko_KR having\ntheir own.\n\n> If so, why are there so many *.utf8 locales?\n\nAs I understand it, a locale should cover all locale-specific issues.\n \n> > btw, does Japanese language have distinct upper and lower case letters ?\n> \n> There are \"full width alphabets\" in Japanese. Thoes include not only\n> ASCII letters but also some European characters.\n\nAre these ASCII and European characters uppercased in some\nJapanese-specific way ?\n\n--------------\nHannu\n\n",
"msg_date": "14 May 2002 15:40:38 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> > > Are you sure that say, de_DE.utf8 locale produce meaningful results\n> > > for any other languages?\n> >\n> > there are often subtle differences, but upper() and lower() are much\n> > more likely to produce right results than collation order or date/money\n> > formats.\n> >\n> > in fact seem to be only 10 distinct LC_CTYPE files for ~110 locales with\n> > most european-originated languages having the same and only\n> > tr_TR, zh_??, fr_??,da_DK, de_??, ro_RO, sr_YU, ja_JP and ko_KR having\n> > their own.\n> \n> I see. So the remaining problem would be how to detect the existence\n> of *.utf8 collation at the configure time.\n> \n> > > If so, why are there so many *.utf8 locales?\n> >\n> > As I understand it, a locale should cover all locale-specific issues\n> >\n> > > > btw, does Japanese language have distinct upper and lower case letters ?\n> > >\n> > > There are \"full width alphabets\" in Japanese. Thoes include not only\n> > > ASCII letters but also some European characters.\n> >\n> > Are these ASCII and European characters uppercased in some\n> > Japanese-specific way ?\n> \n> Probably not, but I'm not sure since my Linux box does not have *.utf8\n> locales.\n\nCould you give me the UTF-8 bytecode for one japanese upper case char and\nfor the same char the lower case?\nI will check in de_DE locale if this translations works.\n\nMichael\n",
"msg_date": "Fri, 17 May 2002 11:55:42 +0200",
"msg_from": "\"Enke, Michael\" <michael.enke@Wincor-Nixdorf.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
},
{
"msg_contents": "> > > > There are \"full width alphabets\" in Japanese. Thoes include not only\n> > > > ASCII letters but also some European characters.\n> > >\n> > > Are these ASCII and European characters uppercased in some\n> > > Japanese-specific way ?\n> > \n> > Probably not, but I'm not sure since my Linux box does not have *.utf8\n> > locales.\n> \n> Could you give me the UTF-8 bytecode for one japanese upper case char and\n> for the same char the lower case?\n> I will check in de_DE locale if this translations works.\n\nOk, here is the data you requested. The first three bytes (0xefbca1)\nrepresent full-width capital \"A\"; the other three bytes (0xefbd81)\nrepresent full-width lower case \"a\".",
"msg_date": "Mon, 20 May 2002 17:25:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> > > > > There are \"full width alphabets\" in Japanese. Thoes include not only\n> > > > > ASCII letters but also some European characters.\n> > > >\n> > > > Are these ASCII and European characters uppercased in some\n> > > > Japanese-specific way ?\n> > >\n> > > Probably not, but I'm not sure since my Linux box does not have *.utf8\n> > > locales.\n> >\n> > Could you give me the UTF-8 bytecode for one japanese upper case char and\n> > for the same char the lower case?\n> > I will check in de_DE locale if this translations works.\n> \n> Ok, here is the data you requested. The first three bytes (0xefbca1)\n> represents full-width capital \"A\", the rest three bytes (0xefbd81)\n> represents full-width lower case \"a\".\n\nThank you for the data, it is working in ja_JP.utf8 and in de_DE.utf8\nI send you my test program as attachment.\n\nRegards,\nMichael",
"msg_date": "Wed, 12 Jun 2002 09:48:18 +0200",
"msg_from": "\"Enke, Michael\" <michael.enke@wincor-nixdorf.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #659: lower()/upper() bug on"
}
] |
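The mechanics behind this thread — a byte-at-a-time tolower() applied to UTF-8 data — can be reproduced outside PostgreSQL. Below is a minimal Python sketch (Python is used only because its str type carries a Unicode-aware case mapping; this is not PostgreSQL code). It uses the exact byte sequences Tatsuo posted for the full-width letters:

```python
# A byte-wise case mapping (the effect of looping a single-byte tolower()
# over a char buffer) can only touch ASCII letters; multibyte UTF-8
# sequences pass through unchanged -- the bug reported in this thread.
utf8_umlaut = "\u00c4".encode("utf-8")         # umlaut A in UTF-8: b'\xc3\x84'
assert utf8_umlaut.lower() == utf8_umlaut      # bytes.lower() maps only A-Z

# A Unicode-aware lower() handles multibyte characters, including the
# Japanese full-width letters from the thread: U+FF21 -> U+FF41.
fullwidth_A = b"\xef\xbc\xa1".decode("utf-8")  # the bytes Tatsuo posted
fullwidth_a = fullwidth_A.lower()
assert fullwidth_a.encode("utf-8") == b"\xef\xbd\x81"

# The Unicode mapping also covers the umlaut case correctly.
assert "\u00c4".lower() == "\u00e4"            # umlaut A -> umlaut a
```

This is why the convert()-to-LATIN1 workaround in the thread helps: it hands tolower() data in the single-byte encoding the locale expects, then converts the result back.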
[
{
"msg_contents": "Hi, I experienced a problem using pg_dump with 7.1.3. I don't know whether\n7.2 also has the same behaviour. AFAIK everything is dumped out using the\norder of OIDs. But the following example shows that it works incorrectly\nif a PLPGSQL function is recreated, assuming this PLPGSQL function was\nused by an SQL function. Recreating the PLPGSQL function, the SQL function\ncan use it again, but their OID orders will be changed. This causes\npg_dump to dump the database into an incorrect hierarchy. A shell script\nexample can be seen below. Regards, Zoltan\n\n\n#!/bin/sh\nlibdir=/usr/lib/pgsql\ndatabase=pg_dump_test\n\ncreatedb $database\n\necho \"\ncreate function plpgsql_call_handler() returns opaque as\n'$libdir/plpgsql.so' language 'c';\ncreate trusted procedural language 'plpgsql' handler plpgsql_call_handler\nlancompiler 'PL/pgSQL';\n\ncreate function test_a (integer) returns integer as\n'\ndeclare\n a integer;\nbegin\n a := count(*) from pg_shadow;\n return a;\nend;\n' language 'plpgsql';\n\ncreate function test_b(integer) returns integer as\n'select test_a (\\$1);' language 'sql';\" | psql -U postgres $database\n\npg_dump $database > dump1\n\necho \"\ndrop function test_a(integer);\nselect test_b(5);\" | psql -U postgres $database\n\necho \"\ncreate function test_a (integer) returns integer as\n'\ndeclare\n a integer;\nbegin\n a := count(*) from pg_shadow;\n return a;\nend;\n' language 'plpgsql';\n\nselect test_b(5);\" | psql -U postgres $database\n\npg_dump $database > dump2\n\ndropdb $database\n\nmore dump?\n\n",
"msg_date": "Tue, 7 May 2002 17:57:05 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "7.1.3: pg_dump hierarchy problem"
}
] |
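The root cause above is that 7.1's pg_dump uses OID order as a proxy for dependency order, and that proxy breaks as soon as an object is dropped and recreated: the new object gets a higher OID than objects that depend on it. A hypothetical sketch (not pg_dump's code; the names echo the script above but the OIDs and helper are made up) of the broken ordering, and of a dependency-aware topological ordering that would fix it:

```python
# Objects as (name, oid, depends_on); recreating test_a gave it a new,
# higher OID, so plain OID order now emits test_b before test_a.
objects = [
    ("test_b", 16701, ["test_a"]),   # SQL function calling test_a
    ("test_a", 16710, []),           # PL/pgSQL function, recreated later
]

oid_order = [name for name, oid, _ in sorted(objects, key=lambda o: o[1])]
assert oid_order == ["test_b", "test_a"]        # broken: test_b needs test_a

def topo_order(objs):
    """Emit each object only after everything it depends on."""
    emitted, out = set(), []
    by_name = {name: deps for name, _, deps in objs}
    def visit(name):
        if name in emitted:
            return
        for dep in by_name[name]:
            visit(dep)
        emitted.add(name)
        out.append(name)
    for name, _, _ in sorted(objs, key=lambda o: o[1]):
        visit(name)
    return out

assert topo_order(objects) == ["test_a", "test_b"]  # dependency-safe order
```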
[
{
"msg_contents": "> -----Original Message-----\n> From: mlw [mailto:markw@mohawksoft.com]\n> Sent: Tuesday, May 07, 2002 7:03 AM\n> To: Tom Lane\n> Cc: Marc G. Fournier; PostgreSQL-development\n> Subject: Re: [HACKERS] OK, lets talk portability.\n> \n> \n> Tom Lane wrote:\n> > BTW, I have been able to test the named-semas variant of \n> posix_sema.c\n> > on OS X, and it works. I don't have access to any platforms that\n> > support unnamed POSIX semas, which is too bad because that \n> seems much\n> > the preferable variant. Can anyone check it out?\n> \n> I did, and yes I was confused. Sorry. Your posix \n> implementation assumes that\n> only a single process will have access to the semaphore list \n> for deletion is\n> this correct? I guess I need to know how much access the \n> child processes need\n> to have to the internal control structures, none? Some? All?\n> \n> As I embark on my journey back to the dark side, here are my \n> concerns for a\n> native Windows PostgreSQL. I think it is a whole lot more \n> than originally\n> thought.\n> \n> (The Matrix: Do not try to implement the fork() that would be \n> impossible,\n> instead only try to realize the truth, there is no fork())\n> \n> Cygwin does a lot of trickery to implement fork(), but it is \n> not an OS level\n> thing. Without fork() we would have to have a way to start a \n> postgres backend\n> and send it information about what it should be doing, and \n> what it should be\n> doing it with.\n> \n> With no fork(), information that would normally be copied on \n> fork() is not\n> copied. Therefore, we need to know what that information is and how to\n> propagate it to the child process (under windows)\n> \n> Files, Windows does not have a native open,close,read,write \n> ,lseek support.\n> Yes, they have some notion of low I/O for compatibility, \n> _open, _close, etc.\n> but the flags and permissions are not identical. 
The low file \n> API for Windows\n> is CreateFile.\n> \n> Semaphores, and shared memory are easy enough (If have \n> written it in different\n> forms before), depending on the level of opaqueness to child \n> processes.\n> \n> \n> The voice in the back of my head, says we need to define what \n> the portability\n> issues are:\n> \n> process control (fork()/spawn() etc.)\n> file operations (read, write, open, close, seek)\n> IPC constructs (shared memory, semaphores)\n> System interface (sync() etc)\n> \n> Any others?\n> \n> We should either use Apache's APR and augment it with \n> semaphores, or come up\n> with a doc which defines how these various things are \n> handled. Obviously, it\n> will grow as we find things that don't work.\n\nA native port to Win32 has already been accomplished by several groups\n(including CONNX Solutions Inc., where I work).\nThere is also one from Japan:\nhttp://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\nI saw some others when I looked around.\n\nTrying to implement fork() is a bad idea, I agree. We used\nCreateProcess instead to launch a new server.\n\nMy idea of how portability ought to be accomplished is to leverage what\nothers have done and use that.\n\nI have done a port of Pthreads to NT, which might be useful for a\nthreaded version of the server, but I think a better approach would be\nto use an OS compatibility layer like ACE. ACE might also be useful for\nweb servers and things of that nature. It may also be possible to\ncreate a server core from ACE that would outperform the PostgreSQL\nconnection engine. JAWS (for instance) is a freely available web server\nthat grotesquely outperforms Apache and the other free ones (which makes\nme wonder why JAWS is not more popular). Anyway, here is the ACE\nlink:\nhttp://www.cs.wustl.edu/~schmidt/ACE.html\n",
"msg_date": "Tue, 7 May 2002 10:36:19 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "Dann Corbit wrote:\n> A native port to Win32 has already been accomplished by several groups\n> (including CONNX Solutions Inc., where I work).\n> There is also one from Japan:\n> http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n> I saw some others when I looked around.\n\nIt is the license issue, I think. \n> \n> Trying to implement fork() is a bad idea, I agree. We used\n> CreateProcess instead to launch a new server.\n\nThat's my big issue, fork() is important to postgres because it assumes all the\nchild's global and static variables will be a copy of what was in the parent,\nas well as system resources being cleaned up.\n\nThink about file handles, they have to be tracked and handled on a process\nlevel. The entire dynamic and static data memory area would have to be copied.\nMemory pointers allocated with \"malloc()\" would also have to be valid in the\nchild, which means that the heap would have to be copied too. So, in short, the\nwhole data area.\n\nI think abandoning cygwin will require more work than is justified, we would\njust end up rewriting it. So, if we are going to require cygwin or something\nlike it, then I think we should spend our efforts to profile and optimize the\ncygwin version.\n\nI guess it comes down to the reason why we intend to get rid of the requirement\nof cygwin on Windows. If it is performance, we may be able to spot optimize the\ncode under cygwin, and improve performance. If it is a license issue, then that\nis not a technical discussion, it is a legal one. Actions we take to remove a\ncygwin requirement, in the license case, are probably of limited technical\nmerit, and amount to creating code (which probably already exists) for the PostgreSQL\nproject with a license we can live with.\n",
"msg_date": "Tue, 07 May 2002 16:41:57 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On Tuesday 7 May 2002 22:41, mlw wrote:\n> I think abandoning cygwin will require more work than is justified, we\n> would just end up rewriting it. So, if we are going to require cygwin or\n> something like it, then I think we should spend our efforts to profile and\n> optimize the cygwin version.\n>\n> I guess it comes down to the reason why we intend to get rid of the\n> requirement of cygwin on Windows. If it is performance, we may be able to\n> spot optimize the code under cygwin, and improve performance. If it is a\n> license issue, then that is not a technical discussion, it is a legal one.\n> Actions we take to remove a cygwin requirement, in the license case, are\n> probably of limited technical merit, but of creating code (which probably\n> already exists) for the PostgreSQL project with a license we can live with.\n\nThere are other issues:\n\n1) Cygwin installation.\n\nPresently, the Cygwin installer is a nice toy, but it is primarily designed for \nhackers. In order to install PostgreSQL, you need to install a minimum set of \npackages. As no real dependencies between packages exist, a newbie will not \nknow which packages should be downloaded and which should not. Also, the Cygwin \ninstaller does not allow automatic installation of PostgreSQL as a \nservice.\n\nThe result is that newbies either download ***all*** Cygwin packages or simply \nsay no. Furthermore, after installation, people face another issue, \nwhich is the Unix world. Users have a hard time understanding that PostgreSQL \nconfiguration is stored in /var/lib/pgsql/...\n\nSo my personal opinion is that if PostgreSQL relies on the present Cygwin \nversion, it has no chance of becoming a standard solution under Windows.\n\n2) Cygwin static implementation\n\nWhen I contacted the Cygwin team, they said there used to be a static version of \nCygwin which is no longer maintained. They also told me there was little work \nneeded to get it working again. 
Maybe you could fork a Cygwin static version into a \ndll. This may sound like a ***bizarre*** idea, don't flame me, but then \nPostgreSQL would only depend on this dll with static Cygwin built-in.\n\n3) Existing versions of PostgreSQL under Windows\nDid anyone test http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n\nCheers,\nJean-Michel POURE\n\n",
"msg_date": "Wed, 8 May 2002 10:37:08 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: mlw [mailto:markw@mohawksoft.com]\n> Sent: Tuesday, May 07, 2002 7:44 AM\n> To: Tom Lane\n> Cc: Marc G. Fournier; PostgreSQL-development\n> Subject: Re: [HACKERS] OK, lets talk portability.\n> \n> \n> Tom Lane wrote:\n> > And no, I don't want to undo those changes. Especially not if the\n> > only reason for it is to not have to use Cygwin on Windows. Most\n> > of these changes made the startup code substantially simpler,\n> > faster, and more reliable.\n> \n> Then I think the notion of a pure Windows version is dead in \n> the water. Writing\n> a fork()-like API for Windows is, of course, doable as \n> evidenced by cygwin, and\n> from a general theory seems like a pretty straight forward \n> thing to do (with a\n> few low level tricks of course) but the details are pretty scary.\n> \n> Has anyone done a profile of PostgreSQL running on a windows \n> box and identified\n> cygwin bottlenecks which we could augment with native code?\n\nPW32:\nhttp://pw32.sourceforge.net/\nhas a better fork() than Cygwin, but it is still awful. Much better to\nstart a new server with CreateProcess() on Win32 [absurdly faster].\n\nThe idea of fork() translation is to copy the heap and auto data. But\nthe heap can split, so it isn't even really guaranteed to work.\nQuite frankly, fork() in Win32 is a very, very bad idea. Not just for\nefficiency reasons but also for reliability reasons.\n",
"msg_date": "Tue, 7 May 2002 11:02:46 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: mlw [mailto:markw@mohawksoft.com]\n> Sent: Tuesday, May 07, 2002 7:58 AM\n> To: Marc G. Fournier\n> Cc: Tom Lane; PostgreSQL-development\n> Subject: Re: [HACKERS] OK, lets talk portability.\n> \n> \n> \"Marc G. Fournier\" wrote:\n> > \n> > On Tue, 7 May 2002, mlw wrote:\n> > \n> > > Tom Lane wrote:\n> > > > And no, I don't want to undo those changes. Especially \n> not if the\n> > > > only reason for it is to not have to use Cygwin on \n> Windows. Most\n> > > > of these changes made the startup code substantially simpler,\n> > > > faster, and more reliable.\n> > >\n> > > Then I think the notion of a pure Windows version is dead \n> in the water.\n> > > Writing a fork()-like API for Windows is, of course, \n> doable as evidenced\n> > > by cygwin, and from a general theory seems like a pretty straight\n> > > forward thing to do (with a few low level tricks of \n> course) but the\n> > > details are pretty scary.\n> > \n> > How is Apache doing this? I believe they do allow the \n> pre-forked model to\n> > work, so how are they getting around those limitations?\n> \n> Apache and PostgreSQL are quite different in their \n> requirements of shared\n> memory. Apache (2.x) simply uses CreateProcess and passes \n> duplicate file\n> handles.\n\nThe way to make CreateProcess() work for PostgreSQL is very simple.\n\nBy the time of the fork(), not much has been done. Some needed\ncalculations can simply be stored into shared memory (which is trivial\nto implement). Some other tasks can simply be executed by the cloned\nprocess, exactly as they were executed in the server.\n\nUsing fork() on Win32 is pointless, hopeless, awful. Don't even think\nabout it. It's a death warrant.\n",
"msg_date": "Tue, 7 May 2002 11:26:17 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "Dann Corbit wrote:\n> \n> > -----Original Message-----\n> > From: mlw [mailto:markw@mohawksoft.com]\n> > Sent: Tuesday, May 07, 2002 7:58 AM\n> > To: Marc G. Fournier\n> > Cc: Tom Lane; PostgreSQL-development\n> > Subject: Re: [HACKERS] OK, lets talk portability.\n> >\n> >\n> > \"Marc G. Fournier\" wrote:\n> > >\n> > > On Tue, 7 May 2002, mlw wrote:\n> > >\n> > > > Tom Lane wrote:\n> > > > > And no, I don't want to undo those changes. Especially\n> > not if the\n> > > > > only reason for it is to not have to use Cygwin on\n> > Windows. Most\n> > > > > of these changes made the startup code substantially simpler,\n> > > > > faster, and more reliable.\n> > > >\n> > > > Then I think the notion of a pure Windows version is dead\n> > in the water.\n> > > > Writing a fork()-like API for Windows is, of course,\n> > doable as evidenced\n> > > > by cygwin, and from a general theory seems like a pretty straight\n> > > > forward thing to do (with a few low level tricks of\n> > course) but the\n> > > > details are pretty scary.\n> > >\n> > > How is Apache doing this? I believe they do allow the\n> > pre-forked model to\n> > > work, so how are they getting around those limitations?\n> >\n> > Apache and PostgreSQL are quite different in their\n> > requirements of shared\n> > memory. Apache (2.x) simply uses CreateProcess and passes\n> > duplicate file\n> > handles.\n> \n> The way to make CreateProcess() work for PostgreSQL is very simple.\n> \n> By the time of the fork(), not much has been done. Some needed\n> calculations can simply be stored into shared memory (which is trivial\n> to implement). Some other tasks can simply be executed by the cloned\n> process, exactly as they were executed in the server.\n> \n> Using fork() on Win32 is pointless, hopless, awful. Don't even think\n> about it. It's a death warrant.\n\nPreaching to the choir my friend.\n",
"msg_date": "Tue, 07 May 2002 17:48:30 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "Just a note:\n\nApache 2.0.36 just released and this is in the release notes:\n\n *) Deprecated the apr_lock.h API. Please see the following files\n for the improved thread and process locking and signaling: \n apr_proc_mutex.h, apr_thread_mutex.h, apr_thread_rwlock.h,\n apr_thread_cond.h, and apr_global_mutex.h. [Aaron Bannert]\n\nChris\n\n> > By the time of the fork(), not much has been done. Some needed\n> > calculations can simply be stored into shared memory (which is trivial\n> > to implement). Some other tasks can simply be executed by the cloned\n> > process, exactly as they were executed in the server.\n> > \n> > Using fork() on Win32 is pointless, hopless, awful. Don't even think\n> > about it. It's a death warrant.\n> \n> Preaching to the choir my friend.\n\n",
"msg_date": "Wed, 8 May 2002 09:57:23 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
}
] |
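The CreateProcess() approach Dann describes in the thread above -- precompute what the child needs before spawning, hand it over explicitly, and let the child rebuild the rest -- can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern only (the `launch_backend` name and JSON-over-stdin transport are invented here), not PostgreSQL's actual startup code:

```python
import json
import subprocess
import sys

# A CreateProcess-style launch: unlike fork(), the child inherits no memory
# image, so the parent serializes whatever state the backend needs and the
# child reconstructs itself from that handed-over state.
CHILD = """
import json, sys
state = json.load(sys.stdin)           # recover the parent's precomputed state
print(state["port"] + state["offset"])
"""

def launch_backend(state):
    # Spawn a fresh interpreter (analogous to CreateProcess) and feed it
    # the serialized state on stdin instead of relying on inherited memory.
    proc = subprocess.run([sys.executable, "-c", CHILD],
                          input=json.dumps(state),
                          capture_output=True, text=True, check=True)
    return int(proc.stdout)

if __name__ == "__main__":
    print(launch_backend({"port": 5432, "offset": 1}))  # prints 5433
```

The same division of labor applies whatever the transport is: anything cheap to recompute is simply redone in the child, and only the expensive or process-wide state gets passed across.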
[
{
"msg_contents": "Hi,\n\nI'm currently updating pgAdmin to support schemas and have a couple of\n(well three) questions:\n\n1) How can I specify the schema in CREATE OPERATOR?\n'OPERATOR(pg_catalog.+)' notation gives an error :-(\n\n2) There are default comments for the system schemas in pg_description,\nbut COMMENT ON SCHEMA isn't implemented. Is it safe to assume it will be\nbefore 7.3?\n\n3) Likewise, Tom stated that DROP SCHEMA isn't yet implemented - is it\nsafe for me to assume that it will be before release?\n\nAnd whilst I'm writing, many thanks for the hard work you've put into\nthis Tom.\n\nRegards, Dave.\n\n",
"msg_date": "Tue, 7 May 2002 20:05:00 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Couple of schema queries..."
},
{
"msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> 1) How can I specify the schema in CREATE OPERATOR?\n> 'OPERATOR(pg_catalog.+)' notation gives an error :-(\n\nYou don't need the OPERATOR() decoration there, only in expressions.\n(Yes, the documentation is still skimpy.)\n\n> 2) There are default comments for the system schemas in pg_description,\n> but COMMENT ON SCHEMA isn't implemented. Is it safe to assume it will be\n> before 7.3?\n\nIt should be.\n\n> 3) Likewise, Tom stated that DROP SCHEMA isn't yet implemented - is it\n> safe for me to assume that it will be before release?\n\nIt will be.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 May 2002 23:30:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Couple of schema queries... "
}
] |
[
{
"msg_contents": "Translation from UNIX to Win32:\nhttp://www.byte.com/art/9410/sec14/art3.htm\n",
"msg_date": "Tue, 7 May 2002 12:46:30 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "> Translation from UNIX to Win32:\n> http://www.byte.com/art/9410/sec14/art3.htm\n\nThe problem here is that the strategies getting sold aren't \"How Do We Make it \nPortable?\" ones, but rather \"How do we port it to Windows, and throw away the \nUnix version?\"\n\nThe \"neat technologies\" are _not_ portability ventures, but rather porting \nventures, one way trips down the \"Richmond Highway.\"\n\nTo have something that will truly be portable, it can't directly use fork(). \nIt has to use [something else] which gets translated at compile time to \nfork(), on Unix, and presumably to CreateProcess() on Windows.\n\nIt's probably possible; I doubt it's simple or much of an improvement.\n\nAnd it begs the question: Is it really greatly advantageous to invoke a lot of \n\"breakage\" on existing code just to get the engine to work on Windows, when \n\"Windows 2004\" will probably have some built-in DBMS technology that tries to \nkill off _all_ Windows-based DBMSes that don't come from Microsoft?\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www.cbbrowne.com/info/rdbms.html\nIt is better to be a smart ass than a dumb ass. \n\n-- \n(concatenate 'string \"chris\" \"@cbbrowne.com\")\nhttp://www.cbbrowne.com/info/linuxdistributions.html\n\"The primary difference between computer salesmen and used car\nsalesmen is that used car salesmen know when they're lying to you.\"",
"msg_date": "Tue, 07 May 2002 18:06:51 -0400",
"msg_from": "cbbrowne@cbbrowne.com",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability. "
},
{
"msg_contents": "> -----Original Message-----\n> From: Lamar Owen [mailto:lamar.owen@wgcr.org] \n> Sent: Monday, August 26, 2002 10:50 AM\n> To: Bruce Momjian; Tom Lane\n> Cc: Sir Mordred The Traitor; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] @(#)Mordred Labs advisory 0x0007: \n> Remove DoS in PostgreSQL\n> \n> \n> On Monday 26 August 2002 12:59 pm, Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > > It may indeed make sense to put a range check here, but \n> I'm getting \n> > > tired of hearing the words \"dos attack\" applied to \n> conditions that \n> > > cannot be exploited to cause any real problem. All you are \n> > > accomplishing is to spread FUD among people who aren't \n> sufficiently \n> > > familiar with the code to evaluate the seriousness of problems...\n> \n> > It isn't fun to have our code nit-picked apart, and Sir-* is \n> > over-hyping the vulnerability, but it is a valid concern. \n> The length \n> > should probably be clipped to a reasonable length and a \n> comment put in \n> > the code describing why.\n> \n> The pseudo-security-alert format used isn't terribly \n> palatable here, IMHO. On \n> BugTraq it might fly -- but not here. \n\nAn alarmist style when posting a serious error is a good idea.\n\"Hey guys, I found a possible problem...\"\nDoes not seem to generate the needed level of excitement.\nDOS attacks means that business stops. I think that should generate a\nfurrowed brow, to say the least.\n\n> A simple 'Hey guys, I \n> found a possible \n> problem when.....' without the big-sounding fluff would sit \n> better with me, \n> at least. The substance of the message is perhaps valuable \n> -- but the \n> wrapper distracts from the substance.\n\nAs long as the needed data is included (here is how to reproduce the\nproblem...) I don't see any problem.\n \n> And dealing with a real name would be nice, IMHO. Otherwise \n> we may end up \n> with 'SMtT' as the nickname -- Hmmm, 'SMitTy' perhaps? 
:-) \n> Reminds me of \n> 'Uncle George' who did quite a bit for the Alpha port and \n> then disappeared.\n\nIf he wants to call himself 'Sir Modred' or 'Donald Duck' or 'Jack the\nRipper' or whatever, I don't see how it matters. He is providing a\nvaluable service by locating serious problems. These are the sort of\nthing that must be addressed. This is the *EXACT* sort of information\nthat is needed to make PostgreSQL become as robust as Oracle,\nSQL*Server, DB/2, etc.\n\nEvery free database engine project should be so lucky as to have a 'Sir\nModred'.\n\nIMO-YMMV.\n",
"msg_date": "Mon, 26 Aug 2002 11:23:49 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: @(#)Mordred Labs advisory 0x0007: Remove DoS in PostgreSQL"
},
{
"msg_contents": "> An alarmist style when posting a serious error is a good idea. \"Hey\n> guys, I found a possible problem...\" Does not seem to generate the\n> needed level of excitement. DOS attacks means that business stops. I\n> think that should generate a furrowed brow, to say the least.\n\nObviously people have forgotten past history. The Symbolics guys had\n_great_ techniques for this that were well documented:\n\nIt is considered artful to append many messages on a subject, leaving\nonly the most inflammatory lines from each, and reply to all in one\nswift blow. The choice of lines to support your argument can make or\nbreak your case.\n-- from the Symbolics Guidelines for Sending Mail\n%\nState opinions in the syntax of fact: \"...as well as the bug in LMFS\nwhere you have to expunge directories to get rid of files.....\"\n-- from the Symbolics Guidelines for Sending Mail\n%\nPeople can be set wondering by loading obscure personal patchable\nsystems, and sending bug reports. Who would not stop and wonder upon\nseeing \"Experimental TD80-TAPE 1.17, MegaDeath 2.5...\"? The same for\nprovocatively-named functions and variables in stack traces.\n-- from the Symbolics Guidelines for Sending Mail\n%\nKnow the list of \"large, chronic problems\". If there is any problem\nwith the window system, blame it on the activity system. Any lack of\nuser functionality should be attributed to the lack of a command\nprocessor. A suprisingly large number of people will believe that you\nhave thought in depth about the issue to which you are alluding when you\ndo.\n-- from the Symbolics Guidelines for Sending Mail\n%\nKnow how to blow any problem up into insolubility. Know how to use the\nphrase \"The new ~A system\" to insult its argument, e.g., \"I guess this\ndestructuring LET thing is fixed in the new Lisp system\", or better yet,\nPROLOG.\n-- from the Symbolics Guidelines for Sending Mail\n%\nNever hit someone head on, always sideswipe. 
Never say, \"Foo's last\npatch was brain-damaged\", but rather, \"While fixing the miscellaneous\nbugs in 243.xyz [foo's patch], I found....\"\n-- from the Symbolics Guidelines for Sending Mail\n%\nIdiosyncratic indentations, double-spacing, capitalization, etc., while\nstamps of individuality, leave one an easy target for parody.\n-- from the Symbolics Guidelines for Sending Mail\n%\nStrong language gets results. \"The reloader is completely broken in\n242\" will open a lot more eyes than \"The reloader doesn't load files\nwith intermixed spaces, asterisks, and <'s in their names that are\nbigger than 64K\". You can always say the latter in a later paragraph.\n-- from the Symbolics Guidelines for Sending Mail\n%\nIncluding a destination in the CC list that will cause the recipients'\nmailer to blow out is a good way to stifle dissent.\n-- from the Symbolics Guidelines for Sending Mail\n%\nWhen replying, it is often possible to cleverly edit the original\nmessage in such a way as to subtly alter its meaning or tone to your\nadvantage while appearing that you are taking pains to preserve the\nauthor's intent. As a bonus, it will seem that your superior\nintellect is cutting through all the excess verbiage to the very heart\nof the matter. -- from the Symbolics Guidelines for Sending Mail\n%\nReferring to undocumented private communications allows one to claim\nvirtually anything: \"we discussed this idea in our working group last\nyear, and concluded that it was totally brain-damaged\".\n-- from the Symbolics Guidelines for Sending Mail\n%\nPoints are awarded for getting the last word in. Drawing the\nconversation out so long that the original message disappears due to\nbeing indented off the right hand edge of the screen is one way to do\nthis. 
Another is to imply that anyone replying further is a hopeless\ncretin and is wasting everyone's valuable time.\n-- from the Symbolics Guidelines for Sending Mail\n%\nKeeping a secret \"Hall Of Flame\" file of people's mail indiscretions,\nor copying messages to private mailing lists for subsequent derision,\nis good fun and also a worthwhile investment in case you need to\nblackmail the senders later. -- from the Symbolics Guidelines for\nSending Mail\n%\nUsers should cultivate an ability to make the simplest molehill into a\nmountain by finding controversial interpretations of innocuous\nsounding statements that the sender never intended or imagined.\n-- from the Symbolics Guidelines for Sending Mail\n%\nObversely, a lot of verbal mileage can also be gotten by sending out\nincomprehensible, cryptic, confusing or unintelligible messages, and\nthen iteratively \"correcting\" the \"mistaken interpretations\" in the\nreplys. -- from the Symbolics Guidelines for Sending Mail\n%\nTrivialize a user's bug report by pointing out that it was fixed\nindependently long ago in a system that hasn't been released yet.\n-- from the Symbolics Guidelines for Sending Mail\n%\nSend messages calling for fonts not available to the recipient(s).\nThis can (in the case of Zmail) totally disable the user's machine and\nmail system for up to a whole day in some circumstances.\n-- from the Symbolics Guidelines for Sending Mail\n--\n(concatenate 'string \"aa454\" \"@freenet.carleton.ca\")\nhttp://cbbrowne.com/info/emacs.html\nFrisbeetarianism: The belief that when you die, your soul goes up on\nthe roof and gets stuck...\n\n\n",
"msg_date": "Mon, 26 Aug 2002 14:34:02 -0400",
"msg_from": "cbbrowne@cbbrowne.com",
"msg_from_op": false,
"msg_subject": "How To Make Things Appear More Dramatic"
},
{
"msg_contents": "On Monday 26 August 2002 02:23 pm, Dann Corbit wrote:\n> An alarmist style when posting a serious error is a good idea.\n> \"Hey guys, I found a possible problem...\"\n> Does not seem to generate the needed level of excitement.\n> DOS attacks means that business stops. I think that should generate a\n> furrowed brow, to say the least.\n\nThe historical style on this list has avoided histrionics -- although I have \nmyself been guilty of the hyperbole problem. Making a big stink in no wise \nguarantees it being heard, and may very well cause some to bristle, as Tom \nhas done. It just doesn't fit the style of this list, that's all.\n\n> As long as the needed data is included (here is how to reproduce the\n> problem...) I don't see any problem.\n\nWhen you have to read and process nearly 1,000 e-mails a day (as I have had to \ndo, although my average is a mere 400 or so per day), the subject line and \nthe first screenful of the message will be looked at, and no more. The \nsubstance needs to be early in the message, and the subject needs to be short \nand descriptive. These are just simply traditions, protocols, and etiquette \nfor Internet mailing lists, as well as other fora such as Usenet.\n\nIf someone wants me to pay attention to a message, the subject needs to be on \nthe point, and the point needs to be early in the message. Otherwise I may \nsimply be so rushed when it arrives in my mailboxen (more than one, as I have \nautorouting mail filters in place) that it gets ignored. I know I am not \nalone in processing mail this way.\n\n> > And dealing with a real name would be nice, IMHO. Otherwise\n> > we may end up\n\n> If he wants to call himself 'Sir Modred' or 'Donald Duck' or 'Jack the\n> Ripper' or whatever, I don't see how it matters. He is providing a\n> valuable service by location of serious problems. These are the sort of\n> thing that must be addressed. 
This is the *EXACT* sort of information\n> that is needed to make PostgreSQL become as robust as Oracle,\n> SQL*Server, DB/2, etc.\n\nI'm sorry, but I have more respect for someone who isn't afraid to use their \nreal name. I've been on both sides of that fence. Even in the security \nbusiness, where people routinely use pseudonyms, I personally prefer to know \ntheir real name. If I _know_ Aleph One is Elias Levy, then that's easy \nenough. If the information is easily available, then that's enough. \n\nSo, it makes a difference to me, like it, lump it, or think it's insane.\n\nAnd, yes, I agree he IS providing a valuable service -- with that I have no \ncomplaints. But there is a distinct civility and culture to this list, and \nI'd like to see it stay that way.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 26 Aug 2002 14:40:54 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: @(#)Mordred Labs advisory 0x0007: Remove DoS in PostgreSQL"
},
{
"msg_contents": "Lamar Owen wrote:\n> And, yes, I agree he IS providing a valuable service -- with that I have no \n> complaints. But there is a distinct civility and culture to this list, and \n> I'd like to see it stay that way.\n\nWell, when someone is a \"Sir\", we do give them a little more latitude. \n\n(Oh, hold one, that isn't his real name. I see now.) ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 26 Aug 2002 14:49:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: @(#)Mordred Labs advisory 0x0007: Remove DoS in PostgreSQL"
},
{
"msg_contents": "> -----Original Message-----\n> From: Jeff Hoffmann [mailto:jeff@propertykey.com] \n> Sent: Monday, April 14, 2003 8:54 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Are we losing momentum?\n> \n> \n> Mike Mascari wrote:\n> > cbbrowne@cbbrowne.com wrote:\n> >>I wouldn't be too sanguine about that, from two perspectives:\n> >>\n> >> a) There's a moving target, here, in that Microsoft seems to be\n> >> looking for the next \"new thing\" to be the elimination of\n> >> the use of \"files\" in favor of the filesystem being treated\n> >> as a database.\n\nThis is a very, very good idea. In fact IBM has been doing it for\nyears. For that matter, so has OpenVMS. What's that -- 30 year old\ntechnology?\n\nI have always thought that a native file system should be a hierarchy\nlike Adabas(IBM Mainframe), DBMS(OpenVMS) or Raima(PC's & UNIX) for a\nmodel. It is a very natural fit. The OS contains disk devices which\ncontain directories, subdirectories, and files. Set ownership model\nseems to fit perfectly.\n\n> > They ought to get their database up to speed first, it \n> seems to me. I \n> > agree Microsoft's view of data management is a moving target.\n> \n> Not to mention the fact that there's a significant number of NT 4 \n> servers still out there -- what is that, 7 years old? A lot \n> of places \n> aren't upgrading because they don't need to & don't want to shell out \n> the cash. (And it should go without saying that Microsoft is \n> none too \n> happy with it.) With Windows 2K3 just coming out and who \n> knows how much \n> longer until the next version (or ther version after that, who knows \n> when these \"features\" will actually show up), there's still a \n> significant window in there for conventional database servers, \n> especially for the price conscious out there.\n\nSQL*Server is a very good database. 
The optimizer is outstanding for\ncomplex queries.\n\nThere are clearly places where PostgreSQL does have a distinct\nadvantage. Price a 1000 user system for SQL*Server and PostgreSQL and\nyou will see that we can hire a couple of DBA's just for the price\ndifference. Since you can purchase PostgreSQL support, that is no\nlonger a significant advantage for MS.\n\nAnd about MySQL:\nIt's also commercial. You are not supposed to use it except for a\nsingle machine for personal use unless you are a non-profit organization\nor unless absolutely everything you do is GPL[1]. Hence, you have to\nlicense it to deploy applications. In order to have transactions, you\nhave to use another commercial product that they bolt into MySQL --\nSleepycat software's database. Now you have two license systems to\nworry about. \n\nCompared to PostgreSQL, both of these tools cost an arm and a leg.\nSQL*Server is closed. You have to rely on MS to fix any problems that\ncrop up. MySQL has a very restrictive license [for those who might\nhappen to bother to read such things] for both modifications to the code\nand also redistribution of applications.\n\n[1] I realize that people cheat on this all the time. In theory, they\ncould all go to jail for it. It is certainly not a risk I would be\nwilling to take. I have also bumped into people who had no idea that\ncommercial use requires a commercial license for MySQL. There are\nprobably lots of people in that boat too.\n\n",
"msg_date": "Mon, 14 Apr 2003 21:30:23 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Are we losing momentum?"
},
{
"msg_contents": "> And about MySQL:\n> It's also commercial. You are not supposed to use it except for a\n> single machine for personal use unless you are a non-profit organization\n> or unless absolutely everything you do is GPL[1]. Hence, you have to\n> license it to deploy applications. In order to have transactions, you\n> have to use another commercial product that they bolt into MySQL --\n> Sleepycat software's database. Now you have two license systems to\n> worry about.\n\nJust a correction - you need to use the InnoDB database engine, which is\nfree and GPL and bundled with MySQL.\n\nChris\n\n",
"msg_date": "Tue, 15 Apr 2003 12:42:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Are we losing momentum?"
},
{
"msg_contents": "Dann Corbit wrote:\n> There are clearly places where PostgreSQL does have a distinct\n> advantage. Price a 1000 user system for SQL*Server and PostgreSQL and\n> you will see that we can hire a couple of DBA's just for the price\n> difference. Since you can purchase PostgreSQL support, that is no\n> longer a significant advantage for MS.\n\nStart looking at \"Enterprise\" licenses for any of the big guys and the\npricing does get pretty scary.\n\n> And about MySQL: It's also commercial. You are not supposed to use it\n> except for a single machine for personal use unless you are a\n> non-profit organization or unless absolutely everything you do is\n> GPL[1].\n\nOn the one hand, if they are calling it \"Open Source\", then this is NOT\na fair statement.\n\nOn the other hand, if you look at their web site, they certainly do tip\ntheir hat to the FUD/Paranoia about the \"infectiveness\" of the GPL.\n\nThey don't expressly say:\n \"Yes, you should be paranoid about the GPL because it infects anything\n it ever touches with some frightening license virus worse than SARS.\"\n\nInstead, they loudly use the line \"... for users who prefer not to be\nrestricted by the terms of the GPL\", of course, neither confirming or\ndenying any particular paranoia about what the impact of those terms may\nor may not be.\n\n> Hence, you have to license it to deploy applications. In order to\n> have transactions, you have to use another commercial product that\n> they bolt into MySQL -- Sleepycat software's database. Now you have\n> two license systems to worry about.\n\nIncorrect. 
You have to use another commercial product that they bolt\nonto MySQL -- InnoDB, from the Finnish company, Innobase.\n<http://www.innodb.com/>\n\nSleepycat DB was used to prototype the notion of having transactions,\nbut since that introduces Yet Another Continent to the set of licensing\ncomplications, and probably wasn't sufficiently 'in their interests,'\nit's not the Preferred Transactional Engine...\n--\n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/oses.html\nRules of the Evil Overlord #217. \"If I'm wearing the key to the hero's\nshackles around my neck and his former girlfriend now volunteers to\nbecome my mistress and we are all alone in my bedchamber on my bed and\nshe offers me a goblet of wine, I will politely decline the offer.\"\n<http://www.eviloverlord.com/>\n\n",
"msg_date": "Tue, 15 Apr 2003 00:56:12 -0400",
"msg_from": "cbbrowne@cbbrowne.com",
"msg_from_op": false,
"msg_subject": "Re: Are we losing momentum? "
},
{
"msg_contents": "On Monday 14 April 2003 09:30 pm, Dann Corbit wrote:\n>\n> And about MySQL:\n> It's also commercial. You are not supposed to use it except for a\n> single machine for personal use unless you are a non-profit organization\n> or unless absolutely everything you do is GPL[1]. Hence, you have to\n> license it to deploy applications. In order to have transactions, you\n> have to use another commercial product that they bolt into MySQL --\n> Sleepycat software's database. Now you have two license systems to\n> worry about.\n>\n> Compared to PostgreSQL, both of these tools cost an arm and a leg.\n> SQL*Server is closed. You have to rely on MS to fix any problems that\n> crop up. MySQL has a very restrictive license [for those who might\n> happen to bother to read such things] for both modifications to the code\n> and also redistribution of applications.\n>\n> [1] I realize that people cheat on this all the time. In theory, they\n> could all go to jail for it. It is certainly not a risk I would be\n> willing to take. I have also bumped into people who had no idea that\n> commercial use requires a commercial license for MySQL. There are\n> probably lots of people in that boat too.\n\nCan you point me to the relevent portions of the license?\n\nI tried to go through the license, but it basically seemed free (as in GPL) to \nme. My impression is that you can't statically link Sleepycat's Berkeley DB \nwith software unless it is released under a free license (reasonable, kind of \nlike the GPL, if you consider that reasonable). They sell a commercial \nversion, which allows you to statically link it. I sort of get the same idea \nfrom MySQL: as long as you aren't trying to distribute it, you're fine (even \nin-house changes).\n\nAlso, aren't mysql and sleepycat in the standard distribution of Debian? I \nwould think the debian developers would be interested to know if the likes of \nsleepycat and mysql don't abide by the DFSG. 
That's actually one of the \nthings I've always liked about Debian: read one set of guidelines, and trust \nthe developers to ensure compliance across the entire OS (as long as you stay \nout of non-free). At least, so I thought...\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 14 Apr 2003 22:07:27 -0700",
"msg_from": "Jeff Davis <jdavis-pgsql@empires.org>",
"msg_from_op": false,
"msg_subject": "Re: Are we losing momentum?"
},
{
"msg_contents": "On Tue, 2003-04-15 at 06:07, Jeff Davis wrote:\n> Also, aren't mysql and sleepycat in the standard distribution of Debian? I \n> would think the debian developers would be interested to know if the likes of \n> sleepycat and mysql don't abide by the DFSG. That's actually one of the \n> things I've always liked about Debian: read one set of guidelines, and trust \n> the developers to ensure compliance across the entire OS (as long as you stay \n> our of non-free). At least, so I thought...\n\nI looked at the copyright information on the mysql-server package in\nDebian and also at http://www.mysql.com/doc/en/MySQL_licenses.html:\n\nThe MySQL documentation is in non-free (and is not therefore officially\npart of Debian). MySQL itself is GPL, so you can do what you like with\nit, whatever FUD their website puts out, so long as you make your source\ncode available. If you want to fork MySQL, you can!\n\nSleepycat is free so long as source code is released.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And they questioned Him, saying \"...Is it lawful for \n us to pay taxes to Caesar, or not? ...And He said to \n them \"...render to Caesar the things that are \n Caesar's, and to God the things that are God's.\" \n Luke 20:21,22,25 \n\n",
"msg_date": "15 Apr 2003 07:09:26 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Are we losing momentum?"
}
] |
[
{
"msg_contents": "A nice paper on one alternative to consider:\nhttp://www.infy.com/knowledge_capital/thought-papers/porting.pdf\n",
"msg_date": "Tue, 7 May 2002 12:55:07 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Hannu Krosing [mailto:hannu@tm.ee]\n> Sent: Tuesday, May 07, 2002 11:29 AM\n> To: mlw\n> Cc: Tom Lane; Marc G. Fournier; PostgreSQL-development\n> Subject: Re: [HACKERS] OK, lets talk portability.\n> \n> \n> On Tue, 2002-05-07 at 19:44, mlw wrote:\n> > Tom Lane wrote:\n> > > And no, I don't want to undo those changes. Especially not if the\n> > > only reason for it is to not have to use Cygwin on Windows. Most\n> > > of these changes made the startup code substantially simpler,\n> > > faster, and more reliable.\n> > \n> > Then I think the notion of a pure Windows version is dead \n> in the water. Writing\n> > a fork()-like API for Windows is, of course, doable as \n> evidenced by cygwin, and\n> > from a general theory seems like a pretty straight forward \n> thing to do (with a\n> > few low level tricks of course) but the details are pretty scary.\n> \n> There is still another way - use threads. \n> \n> There you have of course the opposite problem - to determine what to\n> _not_ share, but AFAIK this has been done already at least once.\n> \n> And there seems to be some consensus that doing things that would\n> eventually make it easier to use threaded model will probably increase\n> code quality in general.\n\nUnfortunately, it opens up another can of worms.\n\nWith a fork() [or CreateProcess()] model, the newly spun binary can\ninherit a set of rights compatible to the user who attaches. With a\nthreading model, everyone has the same set of rights. Easier to sleep\nat night if you know it is impossible for them to do damage by\nperforming an action they should not be able to do. Especially\nimportant if extension functions are added that spawn operating system\ntasks.\n",
"msg_date": "Tue, 7 May 2002 15:01:52 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
}
] |
[
{
"msg_contents": "The requirement for Cygwin is a non-issue as I see it.\n\nSeveral groups have already done Win32 ports that do not require Cygwin.\n\nThe company I work for is one example.\n\nThe group in Japan:\nhttp://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\nis another.\n\nI saw some others besides these.\n\nIn other words, you don't have to have Cygwin to run PostgreSQL under\nWin32 environments.\n",
"msg_date": "Tue, 7 May 2002 15:04:30 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
}
] |
[
{
"msg_contents": "\nHi,\n\n\nI am running postgresql - 7.1 on Redhat 2.2.16-22 kernel.\nSometimes the system becomes totally unusable with a\n\"VFS: file-max limit 4096 reached\" error. The system is set up to have\n4096 file descriptors as default. I was wondering how many file descriptors\ndoes postgres open per backend process?\n\nThank you,\n--thanh\n\n\n",
"msg_date": "Tue, 7 May 2002 15:36:30 -0700 (PDT)",
"msg_from": "jade <jade@vanzoest.com>",
"msg_from_op": true,
"msg_subject": "postgresql 7.1 file descriptor"
},
{
"msg_contents": "jade wrote:\n> \n> Hi,\n> \n> I am running postgresql - 7.1 on Redhat 2.2.16-22 kernel.\n> Sometimes this the system becomes totally unusable with\n> \"VFS: file-max limit 4096 reached\" error? The system is setup to have\n> 4096 file descriptor as default. I was wondering how many file descriptor\n> does postgres open per backend process?\n\nOne for each file in your database, one for the communications socket, etc.\nYour best bet is to raise the system's file-max limit.\n",
"msg_date": "Wed, 08 May 2002 12:00:18 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: postgresql 7.1 file descriptor"
},
{
"msg_contents": "jade <jade@vanzoest.com> writes:\n> I am running postgresql - 7.1 on Redhat 2.2.16-22 kernel.\n> Sometimes this the system becomes totally unusable with\n> \"VFS: file-max limit 4096 reached\" error? The system is setup to have\n> 4096 file descriptor as default. I was wondering how many file descriptor\n> does postgres open per backend process?\n\nPotentially lots. Increase your system's NFILE limit, or reduce the max\nnumber of backends you will allow PG to start, or update to 7.2 which\nwill let you set a per-backend file limit (see MAX_FILES_PER_PROCESS;\nAFAIR that parameter did not exist in 7.1).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 May 2002 12:31:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgresql 7.1 file descriptor "
},
{
"msg_contents": "On Tue, 7 May 2002, jade wrote:\n\n> \n> Hi,\n> \n> \n> I am running postgresql - 7.1 on Redhat 2.2.16-22 kernel.\n> Sometimes this the system becomes totally unusable with\n> \"VFS: file-max limit 4096 reached\" error? The system is setup to have\n> 4096 file descriptor as default. I was wondering how many file descriptor\n> does postgres open per backend process?\n\nI think it depends on what those backends are doing. I just ran a pgbench \n-c 100 -t 10 on my machine, and went from \"2253 712 8192\" in \n/proc/sys/fs/file-nr to \"5316 3825 8192\" in the same file.\n\nAccording to /usr/src/linux-2.4/Documentation/sysctl/fs.txt on my machine, \nthe first number is the number of file handles allocated, the second is \nthe number in use, and the third is the max.\n\nSo, running 100 simos on my personal workstation used about 3000 file \nhandles allocated, with about the same number showing use.\n\nThe easiest way to change these settings is through the use of sysctl. If \nyour machine doesn't have sysctl installed, find it and install it, it's a \nVERY easy way to change system / kernel settings.\n\nAfter install, look at man sysctl for more info.\n\nEasy explanation: sysctl -a shows all settings, sysctl -p processes and sets \nall settings found in /etc/sysctl.conf\n\nOn production servers, the number for file-max is often set to 32768 or \nhigher (on our big boxes we have it set to 65536 and often see usage of \nwell over 20000 under load.)\n\n",
"msg_date": "Wed, 8 May 2002 11:21:49 -0600 (MDT)",
"msg_from": "Scott Marlowe <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: postgresql 7.1 file descriptor"
},
{
"msg_contents": "On Wed, 8 May 2002, Tom Lane wrote:\n\n> jade <jade@vanzoest.com> writes:\n> > I am running postgresql - 7.1 on Redhat 2.2.16-22 kernel.\n> > Sometimes this the system becomes totally unusable with\n> > \"VFS: file-max limit 4096 reached\" error? The system is setup to have\n> > 4096 file descriptor as default. I was wondering how many file descriptor\n> > does postgres open per backend process?\n>\n> Potentially lots. Increase your system's NFILE limit, or reduce the max\n> number of backends you will allow PG to start, or update to 7.2 which\n> will let you set a per-backend file limit (see MAX_FILES_PER_PROCESS;\n> AFAIR that parameter did not exist in 7.1).\n\nHi All,\n\nThanks for all the responses -- they were all very helpful! I don't want to limit\nmy backend processes because each response is considered \"important\", so\nrunning any backend limitation is considered unacceptable. So, I've increased\nthe system NFILE limit.. so far so good :-)\n\nIs there any way to pool connections through a single backend process?\n\nthanks,\nThanh\n\n",
"msg_date": "Wed, 8 May 2002 23:18:17 -0700 (PDT)",
"msg_from": "jade <jade@vanzoest.com>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] postgresql 7.1 file descriptor "
}
] |
[
{
"msg_contents": "Does there exist a checklist for adding a new global system catalog table to\nPostgreSQL?\n",
"msg_date": "Tue, 7 May 2002 22:11:15 -0500 ",
"msg_from": "\"Lee, Shawn\" <lee13@uillinois.edu>",
"msg_from_op": true,
"msg_subject": "Creating new system catalog"
},
{
"msg_contents": "Try:\n\nsrc/backend/catalog/README\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Lee, Shawn\n> Sent: Wednesday, 8 May 2002 11:11 AM\n> To: 'pgsql-hackers@postgresql.org'\n> Subject: [HACKERS] Creating new system catalog\n> \n> \n> Does there exist a checklist for adding a new global system \n> catalog table to\n> PostgreSQL?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n",
"msg_date": "Wed, 8 May 2002 11:26:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Creating new system catalog"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Try:\n> src/backend/catalog/README\n>> \n>> Does there exist a checklist for adding a new global system \n>> catalog table to PostgreSQL?\n\nIt's not much of a checklist. You could try looking at Rod Taylor's\nrecent pg_depend patches (I think he's done one with a bootstrapped\ncatalog and one without).\n\nBut a more interesting question is what you're doing that you think\nneeds a new catalog. As a general rule I'd say that you'd be best\noff consulting the hackers list before embarking on such a project,\nnot after...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 May 2002 01:16:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Creating new system catalog "
}
] |
[
{
"msg_contents": "Dear all,\n\nI would like to try to compile Debian DPKG and APT-GET under Windows.\n\nThis would allow us to create a modern Windows installer for Cygwin, KDE2/3 \nand PostgreSQL. There are already existing projects at \nhttp://debian-cygwin.sourceforge.net/ and http://kde-cygwin.sourceforge.net/. \n\nThe example of http://fink.sourceforge.net/ for MacOSX is quite interesting \nbecause it includes PostgreSQL. The installation method is so much better \nthan Cygwin!!! There is no GUI installer yet, but it will come for sure.\n\nIMHO, this would be the best solution complying with the open-source \nphilosophy and goals. Basically, this would provide the framework for :\n\n- creating a PostgreSQL + Cygwin modern GUI installer. All required .DEB \npackages would be downloaded and installed from Debian mirrors, with little \nuser intervention. PostgreSQL would be installed as a service.\n\n- a cross-platform GUI environment for (future) pgAdmin. Please note I am not \ndealing with pgAdmin future. Dave Page thinks Mono is the solution. I don't \nagree and think we should use Cygwin-KDE or native Windows libraries.\n\nIs it safe to compile Cygwin using Dev-C++? Is anyone interested in the \nproject?\n\nMark : as you said you were a Windows code-storm, how much time do you think \nit would take for someone like you to port DPKG + APT-GET to Windows and \ncreate a debian GUI package installer for Cygwin?\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Wed, 8 May 2002 12:12:42 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Cygwin / Debian dpkg / PostgreSQL / KDE2 and 3"
},
{
"msg_contents": "On Wed, 8 May 2002, Jean-Michel POURE wrote:\n\n> This would allow us to create a modern Windows installer for Cygwin, KDE2/3\n> and PostgreSQL. There are already existing project at\n> http://debian-cygwin.sourceforge.net/\nI'm subscribed to debian-win32@lists.debian.org mailing list but do no\nactive development there. (I think it would be sane to ask this question\nthere but I do not like to start cross-posting so I leave it in your\nhands to post your question there.)\n\nIf I'm not completely wrong this project had some success in porting dpkg\nand apt might be available in the not so far future - but I might be\nwrong.\n\n> and http://kde-cygwin.sourceforge.net/.\nI do not know this project.\n\n> IMHO, this would be the best solution complying with the open-source\n> philosophy and goals.\nI like this project and I hope it will evolve - but my Win knowledge is\nnearly zero.\n\n> - creating a PostgreSQL + Cygwin modern GUI installer. All required .DEB\n> packages would be downloaded and installed from Debian mirrors, with little\n> user intervention. PostgreSQL would be installed as a service.\nFor sure this is the goal.\n\n> - a cross-platform GUI environment for (future) pgAdmin. Please note I am not\n> dealing with pgAdmin future. Dave Page thinks Mono is the solution. I don't\n> agree and think we should use Cygwin-KDE or native Windows libraries.\nOr alternatively wxGtk/wxWindows which has more options to choose the\nprogramming language if I'm not completely wrong.\n\n> Mark : as you said you were a Windows code-storm, how much time do you think\n> it would take for someone like you to port DPKD + APT-GET to Windows and\n> create a debian GUI package installer for Cygwin?\nPlease make sure to ask on debian-win32@lists.debian.org before starting an\nadditional project. 
You can subscribe to this list on the general Debian\nsubscription page\n http://www.debian.org/MailingLists/subscribe\n\nKind regards and good luck for the project\n\n Andreas.\n",
"msg_date": "Wed, 8 May 2002 12:42:33 +0200 (CEST)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Cygwin / Debian dpkg / PostgreSQL / KDE2 and 3"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> - creating a PostgreSQL + Cygwin modern GUI installer. All required .DEB \n> packages would be downloaded and installed from Debian mirrors, with little \n> user intervention. PostgreSQL would be installed as a service.\n> \n> - a cross-platform GUI environment for (future) pgAdmin. Please note I am not \n> dealing with pgAdmin future. Dave Page thinks Mono is the solution. I don't \n> agree and think we should use Cygwin-KDE or native Windows libraries.\n\nThis may be a stupid question, but could the new Mozilla be used as a\ncross-platform GUI? Seems like a natural because it is supposed to be\nmore of a platform with a browser built on top.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 4 Jun 2002 01:08:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cygwin / Debian dpkg / PostgreSQL / KDE2 and 3"
},
{
"msg_contents": "Java is another possibility, since it is already cross platform.\n\nDave\nOn Tue, 2002-06-04 at 01:08, Bruce Momjian wrote:\n> Jean-Michel POURE wrote:\n> > - creating a PostgreSQL + Cygwin modern GUI installer. All required .DEB \n> > packages would be downloaded and installed from Debian mirrors, with little \n> > user intervention. PostgreSQL would be installed as a service.\n> > \n> > - a cross-platform GUI environment for (future) pgAdmin. Please note I am not \n> > dealing with pgAdmin future. Dave Page thinks Mono is the solution. I don't \n> > agree and think we should use Cygwin-KDE or native Windows libraries.\n> \n> This may be a stupid question, but could the new Mozilla be used as a\n> cross-platform GUI? Seems like a natural because it is supposed to be\n> more of a platform with a browser built on top.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n\n",
"msg_date": "04 Jun 2002 05:29:39 -0400",
"msg_from": "Dave Cramer <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Cygwin / Debian dpkg / PostgreSQL / KDE2 and 3"
},
{
"msg_contents": "On 4 Jun 2002, Dave Cramer wrote:\n\n> Java is another possibility, since it is already cross platform.\n... with certain licensing issues ...\n\nJust a remark\n\n Andreas.\n",
"msg_date": "Tue, 4 Jun 2002 15:09:35 +0200 (CEST)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Cygwin / Debian dpkg / PostgreSQL / KDE2 and 3"
},
{
"msg_contents": "The jdbc driver has a method to return the exported keys from a table,\nwe require essentially what the following select returns, but it is\nslow, can anyone suggest ways to optimize it?\n\nRegards,\nDave\n\n SELECT\n c.relname as primary,\n c2.relname as foreign,\n t.tgconstrname,\n ic.relname as fkeyname,\n af.attnum as fkeyseq,\n ipc.relname as pkeyname,\n ap.attnum as pkeyseq,\n t.tgdeferrable,\n t.tginitdeferred,\n t.tgnargs,t.tgargs,\n p1.proname as updaterule,\n p2.proname as deleterule\nFROM\n pg_trigger t,\n pg_trigger t1,\n pg_class c,\n pg_class c2,\n pg_class ic,\n pg_class ipc,\n pg_proc p1,\n pg_proc p2,\n pg_index if,\n pg_index ip,\n pg_attribute af,\n pg_attribute ap\nWHERE\n (t.tgrelid=c.oid\n AND t.tgisconstraint\n AND t.tgconstrrelid=c2.oid\n AND t.tgfoid=p1.oid\n and p1.proname like '%%upd')\n\n and\n (t1.tgrelid=c.oid\n and t1.tgisconstraint\n and t1.tgconstrrelid=c2.oid\n AND t1.tgfoid=p2.oid\n and p2.proname like '%%del')\n\n AND c2.relname='users'\n\n AND\n (if.indrelid=c.oid\n AND if.indexrelid=ic.oid\n and ic.oid=af.attrelid\n AND if.indisprimary)\n\n and\n (ip.indrelid=c2.oid\n and ip.indexrelid=ipc.oid\n and ipc.oid=ap.attrelid\n and ip.indisprimary)\n;\n\n\n",
"msg_date": "04 Jun 2002 09:48:56 -0400",
"msg_from": "Dave Cramer <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "looking for optimum select for exported keys"
}
] |
[
{
"msg_contents": "Do we want a Win32 native version of PostgreSQL?\n\nThe only reasons *not* to use Cygwin are licensing, installation hassles, and\nmaybe stability or performance. Therefore, there is no strong technical reason\nto defend its removal, only a philosophical one.\n\nThe debates on licensing on this list go on for weeks and people feel\npassionately about the subject. It seems odd that no one speaks out about the\nGNU requirement of cygwin.\n\nIf there is a desire to create a PostgreSQL that is \"fork\" free, then we should\ndo it now. If no strong desire exists, then we should make an entry in the FAQ\nand move on.\n\nIf we want to be \"portable\" (and this should help us with a threading model\nlater on) we need to clean up all of the global variables.\n\nPostgreSQL's postmaster should not touch any global variables that are defined\noutside something like a pg_global structure and should not touch any static\nvariables at all. If postmaster initializes a variable that will get cloned on\na fork(), conceptually it is a shared global variable and belongs in\npg_globals. Going all the way and replacing all globals and statics with a\nstruct should allow threading with TLS. (Thread Local Storage)\n\nPort lib. Regardless where it comes from, the porting code should be a self\ncontained library, not a list of objects. On Windows, a .DLL can do some things\neasier than an application. Also, having a library allows more flexibility as\nto how a port is designed.\n\nWe should spec out our port interface. This includes file, semaphores, shared\nmemory, signals/events, process control, IPC, system resources, etc. This will\ngrow as we re-port to other environments like Windows.\n\nany comments?\n",
"msg_date": "Wed, 08 May 2002 08:35:20 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Path to PostgreSQL portabiliy"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Port lib. Regardless where it comes from, the porting code should be a\n> self contained library, not a list of objects. On Windows, a .DLL can\n> do some things easier than an application. Also, having a library\n> allows more flexibility as to how a port is designed.\n\nThat may be necessary on Windoze, but on any other platform breaking out\nan essential part of the backend as a library strikes me as a dead loss.\nYou create extra risk of installation mistakes, can't-find-library\nstartup failures, version mismatch problems, etc, etc --- for zero gain\nthat I can see.\n\nFor comparison you may want to observe the opinion expressed some time\nago by Peter E. that we should fold plpgsql and the other PL's into\nthe backend, instead of having them as dynamic-linked libraries.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 May 2002 10:03:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Port lib. Regardless where it comes from, the porting code should be a\n> > self contained library, not a list of objects. On Windows, a .DLL can\n> > do some things easier than an application. Also, having a library\n> > allows more flexibility as to how a port is designed.\n> \n> That may be necessary on Windoze, but on any other platform breaking out\n> an essential part of the backend as a library strikes me as a dead loss.\n> You create extra risk of installation mistakes, can't-find-library\n> startup failures, version mismatch problems, etc, etc --- for zero gain\n> that I can see.\n\nIt does not need, and probably should not be by default, a shared library under\nUNIX. A static library is fine. The issue is whether or not it makes sense to\ntry and design all porting layers the same, or allow the port engineer the\nflexibility to create what they need the way they need to do it. \n\nA side note:\nThe \"Windoze\" comment says a lot Tom. Believe me, I am currently no fan of\nWindows, but there is something to be said about doing a good job supporting\nsuch a popular platform, regardless of our personal opinions. When I was\nworking at DMN, I had to make sure we could find country music and Brittany\nSpears. Distasteful, but certainly something that needed to be done.\n\nIMHO, I think a great PostgreSQL implementation for Win32 is a nail in the\ncoffin for Windows. If we give them a great database, which runs well under\nWindows, for free, MSSQL will now have a serious competitor for the medium to\nsmall marketplace.\n\nOnce MSSQL has viable cross-platform competition in this space, one less\nrequirement for Windows will exist. Right now, if you implement on Windows, you\nare most likely going to use MSSQL and be stuck there. With a good Win32\nPostgreSQL, an engineer can implement on PostgreSQL for Windows, and easily\nmove it to a \"real\" environment for stability. 
\n\nI see it as an important step.\n",
"msg_date": "Wed, 08 May 2002 10:16:16 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "mlw writes:\n > [ snip ]\n > Port lib. Regardless where it comes from, the porting code should be a self\n > contained library, not a list of objects. On Windows, a .DLL can do some things\n > easier than an application. Also, having a library allows more flexibility as\n > to how a port is designed.\n > \n > We should spec out our port interface. This includes file, semaphores, shared\n > memory, signals/events, process control, IPC, system resources, etc. This will\n > grow as we re-port to other environments like Windows.\n\nIn other words ignore the POSIX capabilities/features of the largely\ncompatible Unix systems and invent a layer over them to aid porting to\nmore POSIXly challenged systems (i.e. Windows)...\n\nSeems like the wrong way of doing things - change the majority to aid\nthe minority! Doesn't the current method of relying on POSIX\ncompatability layers on Windows make more sense?\n\nEven if such a 'port library' was the way forward, it should be just\nusing an existing one, i.e. Apache [A]PR. No use replicating all the\neffort!\n\nLooking into APR got me back to thinking about a PostgreSQL and mmap -\nwhat's the stance on it? Useable? In the archives someone was looking\ninto mmap use for WAL, but this hasn't reappeared for 7.3... I'm\nthinking about using mmap for COPY FROM...\n\nLee.\n",
"msg_date": "Wed, 8 May 2002 16:36:15 +0100",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Path to PostgreSQL portabiliy"
},
{
"msg_contents": "> The debates on licensing on this list go on for weeks and people feel\n> passionately about the subject. It seems odd that no one speaks out about the\n> GNU requirement of cygwin.\n\nWe respect the licensing requirements for that product. And certainly\nthe licensing requirements for cygwin are no less onerous than for other\nproducts installed on a Windoze platform, or for Windoze itself.\n\nMy impression on the licensing requirement is that there is an\ninconvenience factor in installing cygwin separately, and a cost factor\nin trying to deliver an integrated build.\n\nBut I'm actually not certain about *any* onerous requirements for\ncygwin, now that I look at it. \n\n<disclaimer>\nIf we've already covered this, just remind me what Truth is, no need to\ngo over old territory.\n</disclaimer>\n\nHere are some points and questions:\n\n1) cygwin is licensed under GPL. So is GNU/Linux, which provides the\nsame APIs as cygwin does. Linux does not pollute application licenses,\npresumably because Linux itself is not *required* to run the\napplication; it could be run on another system just as well. That is\ntrue for PostgreSQL's relationship to cygwin on Windows, right? Or has\nGNU managed to carefully sort out all GPL vs LGPL issues for\napplications and libraries to solve it that way?\n\n2) If (1) does not exempt the PostgreSQL app from GPL pollution, then why\nnot distribute PostgreSQL on Windows using a GPL license? It would be a\nlicense fork, but there is no expectation that the GPL licensed code\nwould be anything other than a strict copy of the BSD code. And the\nlatter does not preclude anyone from taking the code and distributing it\nunder another license, as long as the BSD license is distributed also.\nThere is no problem distributing the PostgreSQL sources with the cygwin\npackage, so the requirements for the cygwin license can be fully met. 
I\nthink that this would be supported by the rest of the community, as long\nas it was not an excuse to discuss GPL vs BSD for the main code base.\n\n3) If (2) is the case, then development could continue under the BSD\nlicense, since developers could use the BSD-original code for their\ndevelopment work. So there is no risk of \"backflow pollution\".\n\nThoughts (specific to PostgreSQL on cygwin/windoze, which is not a\nhappening thing at the moment)?\n\n - Thomas\n",
"msg_date": "Wed, 08 May 2002 08:37:02 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of mlw\n> Sent: Wednesday, May 08, 2002 10:16 AM\n> To: Tom Lane\n> Cc: PostgreSQL-development; Jan Wieck; Marc G. Fournier; Dann Corbit\n> Subject: Re: [HACKERS] Path to PostgreSQL portabiliy\n>\n> IMHO, I think a great PostgreSQL implementation for Win32 is a nail in the\n> coffin for Windows. If we give them a great database, which runs\n> well under\n> Windows, for free, MSSQL will now have a serious competitor for\n> the medium to\n> small marketplace.\n>\n> Once MSSQL has viable cross-platform competition in this space, one less\n> requirement for Windows will exist. Right now, if you implement\n> on Windows, you\n> are most likely going to use MSSQL and be stuck there. With a good Win32\n> PostgreSQL, an engineer can implement on PostgreSQL for Windows,\n> and easily\n> move it to a \"real\" environment for stability.\n>\n> I see it as an important step.\n\n... and for IT staff who do their play-work on the Windows laptops, and to\nhelp compete against MySQL, which has a strong, out-of-the-box Windows\nbinary, and for people who think it's easier to install and play with things\non Windows first, and ...\n\nIt seems like there are a lot of open path discussions, though:\n\n. make cygwin perform better (does it perform badly? is it unstable?)\n\n. make cygwin easier to install\n\n. make windows native (req's semaphore, fork, some shell utils, etc.)\n\nI've installed PG+Cygwin on a few dozen machines, but always to let people\nplay before the real *nix install. Can anyone speak to _really_ using PG +\nCygwin?\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Wed, 8 May 2002 11:37:32 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Joel Burton wrote:\n> ... and for IT staff who do their play-work on the Windows laptops, and to\n> help compete against MySQL, which has a strong, out-of-the-box Windows\n> binary, and for people who think it's easier to install and play with things\n> on Windows first, and ...\n> \n> It seems like there are lot of open paths discussions, though:\n> \n> . make cygwin perform better (does it perform badly? is it unstable?)\n\nI don't know if a native Win32 binary will perform better, I do know that Linux\nrunning PostgreSQL performs better than Windows running cygwin and PostgreSQL\non the same machine. The extent of what that means is unclear.\n\n> \n> . make cygwin easier to install\n\nOr just have a stripped down cygwin runtime.\n\n> \n> . make windows native (req's semaphore, fork, some shell utils, etc.)\n\nHence this whole conversation.\n\n> \n> I've installed PG+Cygwin on a few dozen machines, but always to let people\n> play before the real *nix install. Can anyone speak to _really_ using PG +\n> Cygwin?\n\nAs I think of it, I don't think a cygwin PostgreSQL will *ever* be taken\nseriously by the Windows crowd, just as a Wine/CorelDraw wasn't taken seriously\nby the Linux crowd.\n\nIf we want to support Windows, we should support Windows. Cygwin will not be\naccepted by any serious IT team.\n",
"msg_date": "Wed, 08 May 2002 11:46:39 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Lee Kindness wrote:\n> \n> mlw writes:\n> > [ snip ]\n> > Port lib. Regardless where it comes from, the porting code should be a self\n> > contained library, not a list of objects. On Windows, a .DLL can do some things\n> > easier than an application. Also, having a library allows more flexibility as\n> > to how a port is designed.\n> >\n> > We should spec out our port interface. This includes file, semaphores, shared\n> > memory, signals/events, process control, IPC, system resources, etc. This will\n> > grow as we re-port to other environments like Windows.\n> \n> In other words ignore the POSIX capabilities/features of the largely\n> compatible Unix systems and invent a layer over them to aid porting to\n> more POSIXly challenged systems (i.e. Windows)...\n> \n> Seems like the wrong way of doing things - change the majority to aid\n> the minority! Doesn't the current method of relying on POSIX\n> compatability layers on Windows make more sense?\n\nDepends, do you want the Windows version to actually be used? I have been\nwriting software for over 20 years now, and sometimes you just have to hold\nyour nose. It would be nice if we could code what we want, the way we want, in\nthe language we want, on the platforms we want.\n\nWindows represents a HUGE user base, it also represents a platform for which a\nreal good native PostgreSQL should do well. There are, to my knowledge, no good\nand free databases available for Windows. \n\nPostgreSQL on Windows could be very cool as a serious poster child for why\nopen-source is the way to go.\n\n> \n> Even if such a 'port library' was the way forward, it should be just\n> using an existing one, i.e. Apache [A]PR. No use replicating all the\n> effort!\n\nAbsolutely, I think Apache's APR is pretty good.\n",
"msg_date": "Wed, 08 May 2002 11:53:30 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "> -----Original Message-----\n> From: markw@snoopy.mohawksoft.com [mailto:markw@snoopy.mohawksoft.com]On\n> Behalf Of mlw\n> Sent: Wednesday, May 08, 2002 11:47 AM\n> To: Joel Burton\n> Cc: Tom Lane; PostgreSQL-development; Jan Wieck; Marc G. Fournier; Dann\n> Corbit\n> Subject: Re: [HACKERS] Path to PostgreSQL portabiliy\n>\n>\n> As I think of it, I don't think a cygwin PostgreSQL will *ever* be taken\n> seriously by the Windows crowd, just as a Wine/CorelDraw wasn't\n> taken seriously\n> by the Linux crowd.\n>\n> If we want to support Windows, we should support Windows. Cygwin\n> will not be\n> accepted by any serious IT team.\n\nWell, I think it's a bit different than Wine, a _huge_ binary trying to\nemulate every call of an operating system (and making things more than a bit\nslower).\n\nIf there is a stripped down, out-of-the-box install that includes cygwin, do\nyou think that will turn people off? It would be essentially invisible.\n\nThere was a native PG (7.0.3, IIRC) floating around on the web, so _someone_\nhas done this before.\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Wed, 8 May 2002 11:55:36 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> 2) If (1) does not exempt the PostgreSQL app from GPL polution, then why\n> not distribute PostgreSQL on Windows using a GPL license?\n\nGiven the cygwin licensing terms stated at\n\thttp://cygwin.com/licensing.html\nit appears to me that we need not open that can of worms (and I'd much\nrather not muddy the licensing waters that way, regardless of any\narguments about whether it would hurt or not...)\n\nAs near as I can tell, we *could* develop a self-contained installation\npackage for PG+cygwin without any licensing problem. So that set of\nproblems could be solved with a reasonable amount of work. I'm still\nunclear on whether there are serious technical problems (performance,\nstability) with using cygwin.\n\n(Actually, even if there are performance or stability problems, an\neasily-installable package would still address the needs of people who\nwant to \"try it out\" or \"get their feet wet\". And maybe that's all we\nneed to do. We always have said that we recommend a Unix platform for\nproduction-grade PG installations, and IMNSHO that recommendation would\nnot change one iota if there were a native rather than cygwin-based\nWindows port. So I'm unconvinced that we have a problem to solve\nanyway...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 May 2002 11:57:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy "
},
{
"msg_contents": "Joel Burton wrote:\n> \n> > -----Original Message-----\n> > From: markw@snoopy.mohawksoft.com [mailto:markw@snoopy.mohawksoft.com]On\n> > Behalf Of mlw\n> > Sent: Wednesday, May 08, 2002 11:47 AM\n> > To: Joel Burton\n> > Cc: Tom Lane; PostgreSQL-development; Jan Wieck; Marc G. Fournier; Dann\n> > Corbit\n> > Subject: Re: [HACKERS] Path to PostgreSQL portabiliy\n> >\n> >\n> > As I think of it, I don't think a cygwin PostgreSQL will *ever* be taken\n> > seriously by the Windows crowd, just as a Wine/CorelDraw wasn't\n> > taken seriously\n> > by the Linux crowd.\n> >\n> > If we want to support Windows, we should support Windows. Cygwin\n> > will not be\n> > accepted by any serious IT team.\n> \n> Well, I think it's a bit different than Wine, a _huge_ binary trying to\n> emulate every call of an operating system (and making things more than a bit\n> slower).\n> \n> If there is a stripped down, out-of-the-box install that includes cygwin, do\n> you think that will turn people off? It would be essentially invisible.\n\nI was thinking about this earlier: one problem with cygwin is that it doesn't act\nlike a Windows program; it requires its own file layout.\n\nCygwin's purpose in life is to allow basically UNIX centric people to be\ncomfortable on Windows. It, by no means, is anything that a Windows centric\nperson wants to deal with, or would deal with if there were an alternative.\n",
"msg_date": "Wed, 08 May 2002 11:58:34 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "...\n> As near as I can tell, we *could* develop a self-contained installation\n> package for PG+cygwin without any licensing problem.\n\nRight. That was my opinion also. But istm that however the discussion\nsettles out, there is a path to success.\n\n - Thomas\n",
"msg_date": "Wed, 08 May 2002 09:10:44 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> ...\n> > As near as I can tell, we *could* develop a self-contained installation\n> > package for PG+cygwin without any licensing problem.\n> \n> Right. That was my opinion also. But istm that however the discussion\n> settles out, there is a path to success.\n\nThese last couple days have really started me thinking about Windows again. I\ndeveloped Windows software for over a decade, geez much longer than that, I\nwrote my first Windows program using the Windows 1.03 SDK. (I am in a 12 step\nprogram now, but you guys are causing a relapse!)\n\nListen, here is purely my opinion on the matter, I am speaking from my\nexperience as a Windows user, developer, and author (Tricks of the Windows 3.1\nMasters).\n\nIt is useless to spend serious time on a cygwin version. Yea, it is cool and\nall, but it won't be used. From the eyes of a Windows user cygwin is a hack and\na mess. An IT guy that only knows Windows will never use it, and if presented\nwith a program that forces a UNIX like directory tree on their hard drive and\nUNIX like tools to manage it, they will delete the program and curse the time\nspent installing it.\n\nPerformance may also be an issue, I don't know for sure, but it is suspected.\nThe cygwin fork troubles me as well. It may work, but I would not call it a\n\"production\" technique, how about you? Would you bet your business on cygwin\nand a hacked fork()?\n\nNo matter what steps you take, cygwin will not be seen by Windows users as\nanything but a sloppy/messy/horrible hack. It is a fact of life. You are\nwelcome to disagree, but I assure you it is true.\n\nFrom a usefulness perspective, a cygwin version of PostgreSQL will be nothing\nmore than a proof of concept, a test bed, or a demo. It will never be used as a\nserious database. How much work does that warrant?\n",
"msg_date": "Wed, 08 May 2002 12:29:07 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "On Wed, 8 May 2002, Tom Lane wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > 2) If (1) does not exempt the PostgreSQL app from GPL polution, then why\n> > not distribute PostgreSQL on Windows using a GPL license?\n>\n> Given the cygwin licensing terms stated at\n> \thttp://cygwin.com/licensing.html\n> it appears to me that we need not open that can of worms (and I'd much\n> rather not muddy the licensing waters that way, regardless of any\n> arguments about whether it would hurt or not...)\n>\n> As near as I can tell, we *could* develop a self-contained installation\n> package for PG+cygwin without any licensing problem. So that set of\n> problems could be solved with a reasonable amount of work. I'm still\n> unclear on whether there are serious technical problems (performance,\n> stability) with using cygwin.\n\nThe last time I tried to play with it, any sort of load tended to blow\naway the whole IPC side of things ... it was stable to \"play with\", but\nfor any *serious* work ... this may have changed though, as it has been\nawhile since I played with it last ...\n\n\n",
"msg_date": "Wed, 8 May 2002 13:41:18 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy "
},
{
"msg_contents": "On Wednesday 08 May 2002 11:37 am, Thomas Lockhart wrote:\n> 1) cygwin is licensed under GPL. So is GNU/Linux, which provides the\n> same APIs as cygwin does. Linux does not pollute application licenses,\n> presumably because Linux itself is not *required* to run the\n\nThe Linux kernel is not under a pure GPL. \n\nCOPYING in the kernel source says this, prepended to the GPL:\n NOTE! This copyright does *not* cover user programs that use kernel\n services by normal system calls - this is merely considered normal use\n of the kernel, and does *not* fall under the heading of \"derived work\".\n Also note that the GPL below is copyrighted by the Free Software\n Foundation, but the instance of code that it refers to (the Linux\n kernel) is copyrighted by me and others who actually wrote it.\n\n Also note that the only valid version of the GPL as far as the kernel\n is concerned is _this_ particular version of the license (ie v2, not\n v2.2 or v3.x or whatever), unless explicitly otherwise stated.\n\n Linus Torvalds\n\n--------------------------------------------------------------------------\n\nDoes cygwin make the same statement?\n\n> 2) If (1) does not exempt the PostgreSQL app from GPL polution, then why\n> not distribute PostgreSQL on Windows using a GPL license? \n\n[snip]\n\n> 3) If (2) is the case, then development could continue under the BSD\n> license, since developers could use the BSD-original code for their\n> development work. So there is no risk of \"backflow polution\".\n\nCan PostgreSQL, Inc be the GPL distributor for these purposes, being a \nseparate entity from the PostgreSQL Global Development Group?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 8 May 2002 14:49:39 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "mlw wrote:\n>\n> No matter what steps you take, cygwin will not be seen by Windows users as\n> anything but a sloppy/messy/horrible hack. It is a fact of life. You are\n> welcome to disagree, but I assure you it is true.\n\nJust to clarify here: is it confirmed that having the complete cygwin\ndistribution is a necessary condition to having a running PostgreSQL on\nwindows? Is it not possible that, having built postgresql with the full\ncygwin, it would be possible to make a nice clean setup.exe package\nwhich bundles the postgresql executables, the required cygwin dlls and\nother niceties into an easy install package? Given that, I do not think\nyour putative windows user would care at all about what was going on\nunder the covers. As long as the install was clean, there were utilities\n(pgadmin?) to start working with the database right away, and things\n\"just worked\", the ugliness (or exquisite symmetry... I am not an\nexpert) of the fork() implementation really would not be an issue :)\n\nOf course, an imaginary beautiful packaging regime hinges on the\npossibility of bundling the cygwin api libraries cleanly without\nbundling all the rest of the cygwin cruft (unix directory hierarchy,\netc etc). Anyone have any light to shed on cygwin's \"packagability\"?\n\nP.\n",
"msg_date": "Wed, 08 May 2002 12:53:57 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Paul Ramsey wrote:\n> \n> mlw wrote:\n> >\n> > No matter what steps you take, cygwin will not be seen by Windows users as\n> > anything but a sloppy/messy/horrible hack. It is a fact of life. You are\n> > welcome to disagree, but I assure you it is true.\n> \n> Just to clarify here: is it confirmed that having the complete cygwin\n> distribution is a necessary condition to having a running PostgreSQL on\n> windows? Is it not possible that, having built postgresql with the full\n> cygwin, it would be possible to make a nice clean setup.exe package\n> which bundles the postgresql executables, the required cygwin dlls and\n> other niceties into an easy install package? Given that, I do not think\n> your putative windows user would care at all about what was going on\n> under the covers. As long as the install was clean, there were utilities\n> (pgadmin?) to start working with the database right away, and things\n> \"just worked\", the ugliness (or exquisite symmetry... I am not an\n> expert) of the fork() implementation really would not be an issue :)\n\nWindows users expect to have C:\\my programs\\postgres as the install location. A\nperson who has used or looked at MSSQL would expect to deal with the real file\nsystem. The cygwin environment shields the UNIX program from Windows, the\nWindows user would expect the program to deal with the system as is.\n\nThe Windows user that would install PostgreSQL would expect it to be a real\nwindows program, but would be savvy enough (and prejudiced enough) to know if\nit weren't.\n",
"msg_date": "Wed, 08 May 2002 16:12:17 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Paul Ramsey\n> Sent: Wednesday, May 08, 2002 3:54 PM\n> To: mlw\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Path to PostgreSQL portabiliy\n>\n>\n> mlw wrote:\n> >\n> > No matter what steps you take, cygwin will not be seen by\n> Windows users as\n> > anything but a sloppy/messy/horrible hack. It is a fact of life. You are\n> > welcome to disagree, but I assure you it is true.\n>\n> Just to clarify here: is it confirmed that having the complete cygwin\n> distribution is a necessary condition to having a running PostgreSQL on\n> windows? Is it not possible that, having built postgresql with the full\n> cygwin, it would be possible to make a nice clean setup.exe package\n> which bundles the postgresql executables, the required cygwin dlls and\n> other niceties into an easy install package? Given that, I do not think\n> your putative windows user would care at all about what was going on\n> under the covers. As long as the install was clean, there were utilities\n> (pgadmin?) to start working with the database right away, and things\n> \"just worked\", the ugliness (or exquisite symmetry... I am not an\n> expert) of the fork() implementation really would not be an issue :)\n>\n> Of course, an imaginary beautiful packaging regime hinges on the\n> possibility of bundling the cygwin api libraries cleanly without\n> bundling all the rest of the cygwin scruft (unix directory heirarchy,\n> etc etc). Anyone have any light to shed on cygwin's \"packagability\"?\n\nCertainly, we don't need all of cygwin (eg bison, gcc, perl, et al). We'd\nneed the dll, sh, rm, and few other things. I'm not sure if it would need to\nbe in the standard cygwin file structure; I know that you can reconfigure\nthis when you use cygwin (I used to). 
In any event, instead of having to\nhave a novice pick & guess which of >100 packages they need, we could put\ntogether the 5 or 6 they need.\n\nI'm not sure I agree entirely with mlw: some Windows admins will be afraid\nof cygwin, but I'll bet more than a few won't even notice that it's being\nused (especially if we can change the dir names, provide windows shortcuts\nto the commands like initdb, create database, pg_ctl, etc., which would be\ntrivial to do).\n\nStill unanswered is real data on whether cygwin would be good for serious\nproduction use by real people. However, for the test/play/try-out model, I\nthink cygwin would be a fine solution, and wouldn't (shouldn't?) require too\nmuch work.\n\n- J.\n\n",
"msg_date": "Wed, 8 May 2002 18:09:41 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of mlw\n> Sent: Wednesday, May 08, 2002 4:12 PM\n> To: Paul Ramsey\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Path to PostgreSQL portabiliy\n>\n> Windows users expect to have C:\\my programs\\postgres as the\n> install location. A\n> person who has used or looked at MSSQL would expect to deal with\n> the real file\n> system. The cygwin environment shields the UNIX program from Windows, the\n> Windows user would expect the program to deal with the system as is.\n>\n> The Windows user that would install PostgreSQL would expect it to\n> be a real\n> windows program, but would be savvy enough (and prejudiced\n> enough) to know if\n> it weren't.\n\nIt's not the nicest thing, but the root for the mini-cygwin/PG could be\nc:\\program files\\postgresql. Then PG itself could be something like\nc:\\program files\\postgresql\\bin. Java, for instance, comes packed in\nc:\\program files\\javasoft\\_version_number_\\bin.\n\nIn any event, for people that want to play around, test it out, do some PG\nwork on their laptop at night, etc., I don't think they'd really care that\nit's not a \"real\" windows program. I'm a dedicated unix weenie, and I have\nPG + cygwin on the windows partition of my machines. It's very convenient at\ntimes.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Wed, 8 May 2002 18:24:51 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "I think that cygwin is just a hack that allows you to distribute software\ndeveloped for UNIX under Windows. I see the point of using it in a company\nthat wants to have a bigger market and doesn't really care about the speed\nof its application (because cygwin slows down your application quite a bit).\nBut an open source project should try to have the best implementation, most\nof all the main site. If you want a port to Windows with cygwin, let other\nsatellite projects do it as they already exist, and add links to these\nprojects to the home page.\nMaybe it's time again to dig up this old subject of thread support. Even the\nUnix implementation could have a mixed multi-process multi-thread\narchitecture. The postmaster could start a certain number of processes, each\nprocess being able to be multi-threaded if the platform supports it. If you\nreally don't want threads, it is possible to implement an event\ndemultiplexer loop, and each process can benefit from any delay to do something\nelse. Then you benefit from idle time in the connection, and even, after further\noptimization, you can benefit from idle time during I/O.\nOne good starting point is the GNU Pth library, which implements cooperative\nthreads. It's a good start because it is supported on many platforms and\nit's easier to implement thread support step by step: cooperative in an\nevent demultiplexing way and then preemptive. Pth has a POSIX layer to keep\nthings more standard.\n\nRegards,\nNicolas\n\n----- Original Message -----\nFrom: \"Paul Ramsey\" <pramsey@refractions.net>\nTo: \"mlw\" <markw@mohawksoft.com>\nCc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Thursday, May 09, 2002 5:53 AM\nSubject: Re: [HACKERS] Path to PostgreSQL portabiliy\n\n\n> mlw wrote:\n> >\n> > No matter what steps you take, cygwin will not be seen by Windows users\nas\n> > anything but a sloppy/messy/horrible hack. It is a fact of life. You are\n> > welcome to disagree, but I assure you it is true.\n>\n> Just to clarify here: is it confirmed that having the complete cygwin\n> distribution is a necessary condition to having a running PostgreSQL on\n> windows? Is it not possible that, having built postgresql with the full\n> cygwin, it would be possible to make a nice clean setup.exe package\n> which bundles the postgresql executables, the required cygwin dlls and\n> other niceties into an easy install package? Given that, I do not think\n> your putative windows user would care at all about what was going on\n> under the covers. As long as the install was clean, there were utilities\n> (pgadmin?) to start working with the database right away, and things\n> \"just worked\", the ugliness (or exquisite symmetry... I am not an\n> expert) of the fork() implementation really would not be an issue :)\n>\n> Of course, an imaginary beautiful packaging regime hinges on the\n> possibility of bundling the cygwin api libraries cleanly without\n> bundling all the rest of the cygwin scruft (unix directory heirarchy,\n> etc etc). Anyone have any light to shed on cygwin's \"packagability\"?\n>\n> P.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n\n\n",
"msg_date": "Thu, 9 May 2002 10:12:52 +1000",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Who really is your target \"market\" on the windows platform? Microsoft \nAccess users (many)? MySQL users(insignificant?)? MSSQL (many)?\n\nAssuming that the postgresql team isn't getting lots of money or resources \nto do it. I don't see why you would want to invest a lot to support windows \nfrom a long term point of view. Windows can be a costly platform to support.\n\nBecause if you become a serious threat, Microsoft can rip the rug from \nbeneath you any chance they get. Also Microsoft WILL always change their \nAPIs. They're not stupid. If Microsoft freezes their APIs they will end up \nlike \"yet another BIOS manufacturer\", and bye bye profit margins. Microsoft \nwill strive to keep it a proprietary AND changing API.\n\nWindows is rather different operationally. Automating vacuum etc on windows \nis going to be different. Starting postgresql as a service is going to be \ndifferent as well. Same for uninstalling. So support requests are going to \nbe different.\n\nIf your target market is consumer - Windows consumer users also have \ndifferent expectations. Most will want nicer GUIs (those that don't care \nwon't mind running Postgresql elsewhere).\n\nBTW if your target market is a bit higher end - typically those that \"must \nuse\" windows also \"must use\" MSSQL/Oracle/etc. You will thus have to build \nbrand recognition for Postgresql on Windows.\n\nAll this will cost you.\n\nThat said, is it easier to support only Windows NT/2000 and forget about \nWin9x? The bigger dbs don't support win9x either (how does Oracle/DB2 \nsupport NT? They seem to work ok). Leave MySQL to the Win9x people ;). BTW \ndoes MySQL really perform OK on Win9x?\n\nForget the Cygwin approach. Is there really a market for that? Unless \nthings have got a lot easier, installing Cygwin is like installing a new \nO/S just to install your app. 
And installing and learning a new system has \ngot to be one of the major barriers, otherwise people will either buy a new \nUSD500 1.5+ GHz pc or use VMware+BSD/Linux+Postgresql ;).\n\nCheerio,\nLink.\n\nAt 11:53 AM 5/8/02 -0400, mlw wrote:\n>writing software for over 20 years now, and sometimes you just have to hold\n>your nose. It would be nice if we could code what we want, the way we want, in\n>the language we want, on the platforms we want.\n>\n>Windows represents a HUGE user base, it also represents a platform for which a\n>real good native PostgreSQL should do well. There are, to my knowledge, no \n>good\n>and free databases available for Windows.\n>\n>PostgreSQL on Windows could be very cool as a serious poster child for why\n>open-source is the way to go.\n\n\n",
"msg_date": "Thu, 09 May 2002 15:34:06 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "On Wed, 8 May 2002, Lamar Owen wrote:\n\n> > 3) If (2) is the case, then development could continue under the BSD\n> > license, since developers could use the BSD-original code for their\n> > development work. So there is no risk of \"backflow polution\".\n>\n> Can PostgreSQL, Inc be the GPL distributor for these purposes, being a\n> separate entity from the PostgreSQL Global Development Group?\n\nUmmmm ... no? We tend to be anti-GPL over here, since it's anti-business\n... gborg would be a good place for any of this ...\n\n\n",
"msg_date": "Thu, 9 May 2002 08:51:19 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n\n> Ummmm ... no? We tend to be anti-GPL over here, since its anti-business\n\nLet's not use loaded terms here. \n\n-Doug\n",
"msg_date": "09 May 2002 09:51:42 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Paul Ramsey wrote:\n> mlw wrote:\n> >\n> > No matter what steps you take, cygwin will not be seen by Windows users as\n> > anything but a sloppy/messy/horrible hack. It is a fact of life. You are\n> > welcome to disagree, but I assure you it is true.\n>\n> Just to clarify here: is it confirmed that having the complete cygwin\n> distribution is a necessary condition to having a running PostgreSQL on\n> windows? Is it not possible that, having built postgresql with the full\n> cygwin, it would be possible to make a nice clean setup.exe package\n\n Well, PostgreSQL goes as far as using system(3) to do \"cp -r\"\n and stuff. Dunno what you call it, but I'd say it's making\n assumptions that one shouldn't make :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Thu, 9 May 2002 13:05:58 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Joel Burton wrote:\n> Certainly, we don't need all of cygwin (eg bison, gcc, perl, et al). We'd\n> need the dll, sh, rm, and few other things. I'm not sure if it would need to\n> be in the standard cygwin file structure; I know that you can reconfigure\n> this when you use cygwin (I used to). In any event, instead of having to\n> have a novice pick & guess which of >100 packages they need, we could put\n> together the 5 or 6 they need.\n\n Oh, BTW, had anyone luck with loading of user defined C\n functions or the PL handlers under CygWin? I remember having\n had trouble with that. When the C function uses global\n variables in the backend, things get a bit messy.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Thu, 9 May 2002 13:09:24 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "\nFrom: \"Jan Wieck\" <janwieck@yahoo.com>\n> Joel Burton wrote:\n> > Certainly, we don't need all of cygwin (eg bison, gcc, perl, et al).\nWe'd\n> > need the dll, sh, rm, and few other things. I'm not sure if it would\nneed to\n> > be in the standard cygwin file structure; I know that you can\nreconfigure\n> > this when you use cygwin (I used to). In any event, instead of having to\n> > have a novice pick & guess which of >100 packages they need, we could\nput\n> > together the 5 or 6 they need.\n>\n> Oh, BTW, had anyone luck with loading of user defined C\n> functions or the PL handlers under CygWin? I remember having\n> had trouble with that. When the C function uses global\n> variables in the backend, things get a bit messy.\n\n I have a 7.1.3 install with cygwin and I'm using pl/pgsql and a custom\ntype (with accessors written in a C dll). It never caused me any problems.\nMy custom dll does not use global variables in the backend though.\n\n cyril\n\n",
"msg_date": "Thu, 9 May 2002 20:00:03 +0200",
"msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "On Thursday 09 May 2002 07:51 am, Marc G. Fournier wrote:\n> On Wed, 8 May 2002, Lamar Owen wrote:\n> > > 3) If (2) is the case, then development could continue under the BSD\n> > > license, since developers could use the BSD-original code for their\n> > > development work. So there is no risk of \"backflow polution\".\n\n> > Can PostgreSQL, Inc be the GPL distributor for these purposes, being a\n> > separate entity from the PostgreSQL Global Development Group?\n\n> Ummmm ... no? We tend to be anti-GPL over here, since its anti-business\n> ... gborg would be a good place for any of this ...\n\nI can see my feeble attempt at humor fell flat. Sorry.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 9 May 2002 14:12:16 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "Hi,\n\n Win32 & threads support are both going to be a lot of work, and maybe\nwe'll need one or both in the future - is there any chance Postgres\ndevelopers could look at the Apache experience? Briefly, Apache 2 had the same\nproblems as are discussed here (need to support Win, problems with Win32\nfork, questionable cygwin etc) and they decided to solve it once and for\nall with their Apache Portable Runtime and Multi-Processing Modules. APR\nwas already mentioned here - now how about MPMs?\n\n- Robert\n\nPS Relevant links:\n\nhttp://httpd.apache.org/docs-2.0/mpm.html\nhttp://httpd.apache.org/docs-2.0/new_features_2_0.html\n\nPS2 It took them some three years to release Apache 2 so it's probably\nnot that easy - but you knew that already.\n\nPS3 And when talking about Win32 Postgres uses, don't forget there might\nbe a large number of people who would use Postgres embedded in accounting\nand many other packages - it can be single-user Win98, but it might\nstill need a decent SQL backend (subqueries, user functions for all kinds\nof CDROM catalogs etc). So when doing major rearchitecture of Postgres,\nit might be useful to plan for a bit of modularity, maybe like in\nMozilla, where you can drop the UI and use just the layout engine or just the\nJavaScript etc.\n",
"msg_date": "Fri, 10 May 2002 08:02:26 +0200",
"msg_from": "Robert <robert@robert.cz>",
"msg_from_op": false,
"msg_subject": "Threads vs processes - The Apache Way (Re: Path to PostgreSQL\n\tportabiliy)"
},
{
"msg_contents": "Robert wrote:\n> \n> Hi,\n> \n> Win32 & threads support are both going to be a lot of work and maybe\n> we'll need in the future one or both - is there any chance Postgres\n> developers look at the Apache experience? Briefly, Apache 2 had the some\n> problems as are discussed here (need to support Win, problems with Win32\n> fork, questionable cygwin etc) and they decided to solve it once and for\n> all with their Apache Portable Runtime and Multi-Processing Modules. APR\n> was already mentioned here - now how about MPMs?\n\nI am starting to come to the conclusion that the PostgreSQL group is satisfied\nwith cygwin, and the will to create a native Win32 version does not exist\noutside of a few organizations that are paying developers to create one.\n\nWithout some buy-in from the core team, I'm not sure I am willing to spend my\ntime on it. If someone would be willing to fund the 100 or so man-hours\nrequired to do it, then that would be a different story.\n",
"msg_date": "Fri, 10 May 2002 07:13:04 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Threads vs processes - The Apache Way (Re: Path to PostgreSQL "
},
{
"msg_contents": "On Friday 10 May 2002 13:13, mlw wrote:\n> I am starting to come to the conclusion that the PostgreSQL group is\n> satisfied with cygwin, and the will to create a native Win32 version does\n> not exist outside of a few organizations that are paying developers to\n> create one.\n\nThe most important thing is to get a Windows version on the way. pgAdmin2, the \nPostgreSQL Windows GUI, will soon be included in the Dev-C++ development \nenvironment, as per discussion with Colin Laplace.\n\nNative tools for Windows can be a huge success. Dev-C++ had 1.200.000 hits \nover the last years.\n\n> Without some buy-in from the core team, I'm not sure I am willing to spend\n> my time on it. If someone would be willing to fund the 100 or so man-hours\n> required to do it, then that would be a different story.\n\nI suggest we focus on providing a minimal PostgreSQL + Cygwin layer at first. \nThis will give you the required user base to transform PostgreSQL into a \nmulti-platform RDBMS.\n\nIf we add together direct downloads on http://www.postgresql.org and from \npartner sites (Dev-C++ on http://www.bloodshed.net), we could well reach the \nnumber of 1.000.000 downloads a year under the Windows platform.\n\nCheers,\nJean-Michel\n",
"msg_date": "Fri, 10 May 2002 13:57:11 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> > Without some buy-in from the core team, I'm not sure I am willing to spend\n> > my time on it. If someone would be willing to fund the 100 or so man-hours\n> > required to do it, then that would be a different story.\n> \n> I suggest we focuss on providing a minimal PostgreSQL + Cygwin layer at first.\n> This will give you the required user base to transform PostgreSQL into a\n> multi-platform RDBMS.\n\nSorry, I'm not interested in a cygwin version of PostgreSQL. I think it will do\nmore harm than good. If we make it something that people want to try, and then\nthey TRY it, they will find that it sucks, then we lose. It is very hard to\nremove the bad taste in one's mouth of a poor product. Think Yugo.\n\nI have no patience with designed-to-fail projects, certainly not with my time.\nPostgreSQL+cygwin is a loser. If I am going to invest my time and effort, I\nwant it to be great.\n\nPut it this way. The run-of-the-mill Windows developer will be using MSDN. With\nMSDN comes MSSQL. To the developer, it is largely free to set up MSSQL to do\ndevelopment work.\n\nOK, a conscientious developer will explore options. They will install various\nsystems and try them. Given a cygwin+PostgreSQL system, MSSQL, MySQL, Oracle,\nDB2, etc. MSSQL will win. MSSQL will win over Oracle for cost and ease of\nsetup. DB2 will lose, similarly to Oracle. MySQL will lose because it sucks.\nPostgreSQL+cygwin will lose because it will also suck.\n\nThe idea is to \"sway\" Microsoft developers to open source, not give them\nammunition for why they think it sucks.\n",
"msg_date": "Fri, 10 May 2002 08:06:56 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "On Friday 10 May 2002 14:06, mlw wrote:\n> Sorry, I'm not interested in a cygwin version of PostgreSQL. I think it\n> will do more harm than good. If we make it something that people want to\n> try, and then they TRY it, they will find that is sucks, then we lose. It\n> is very hard to remove the bad taste in ones mouth of a poor product. Think\n> Yugo.\n\nCygwin is very stable. Its community is relatively small but very active. We \ncould well provide a single installer to \"hide\" Cygwin from the user. This \ncan be done by compiling Cygwin.dll in a separate user space, as per discussion \nwith Dave Page.\n\n> I have no patience with designed to fail projects, certainly not with my\n> time. PostgreSQL+cygwin is a loser. If I am going to invest my time and\n> effort, I want it to be great.\n\nI agree a native Windows PostgreSQL would be better.\n\n> OK, a conscientious developer will explore options. They will install\n> various systems and try them. Given a cygwin+PostgreSQL system, MSSQL,\n> MySQL, Oracle, DB2, etc. MSSQL will win. MSSQL will win over Oracle for\n> cost and ease of setup. DB2 will lose, similarly to Oracle. MySQL will lose\n> because it sucks. PostgreSQL+cygwin will lose because it will also suck.\n\nMySQL under Windows is based on Cygwin.\nMySQL sucks and has a \"huge\" success.\n\nSo let's do it in three moves:\n\n- first move: gain a large audience providing a stable release of Cygwin + \nPostgreSQL. This could be done within days ... not weeks. This will be much \nbetter than MySQL.\n\n- second move: release a bundle of pgAdmin2 + PostgreSQL on \nhttp://www.postgresql.org, Bloodshed and other sites.\n\n- third move: based on 1.000.000 downloads and 100.000 users, feed the \ncommunity with more developers, more ideas and more Windows native \nsource-code. So you won't say \"I am alone\".\n\n\"Rome wasn't built in a day\".\nCheers,\nJean-Michel\n",
"msg_date": "Fri, 10 May 2002 14:37:03 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "Hi all\n\nDo you know that there are a huge number of PRODUCTION(!!!) Windows NT\nservers running Apache, PHP and MySQL? Just because the administrators\nare lame, or because some companies can't afford more than one server,\nand before they decided to use Windows NT (because they did not know\nabout UNIX/Linux/FreeBSD). This is the reality.\n\nOne more thing - development for PgSQL is quite difficult just because it\ndoes not run on Windows 9X. There are a huge number of development\nservers running Win 9X, IIS, PHP and MySQL.\n\nA Windows port for Postgres would be great. There was a Postgres\ninstallation for Windows before, but it did not work properly.\n\nIt is not important HOW stable the Windows version will be. For example MySQL\nfor Win is quite UNSTABLE too. What is more important is for it to be VERY\nEASY to install, that's all.\n\nNikolay.\n\n\n-----------------------------------------------------------\nThe Reboots are for hardware upgrades,\nFound more here: http://www.nmmm.nu\nNikolay Mihaylov nmmm@nmmm.nu\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Jean-Michel\nPOURE\nSent: Friday, May 10, 2002 3:37 PM\nTo: mlw\nCc: Robert; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] pgAdmin2 to be included in Dev-C++\n\n\nLe Vendredi 10 Mai 2002 14:06, mlw a écrit :\n> Sorry, I'm not interested in a cygwin version of PostgreSQL. I think \n> it will do more harm than good. If we make it something that people \n> want to try, and then they TRY it, they will find that is sucks, then \n> we lose. It is very hard to remove the bad taste in ones mouth of a \n> poor product. Think Yugo.\n\nCygwin is very stable. Its community is relatively small but very\nactuve. 
We \ncould well provide a unique installer to \"hide\" Cygwin from the user.\nThis \ncan be done compiling Cygwin.dll in a separate user space, as per\ndiscussion \nwith Dave Page.\n\n> I have no patience with designed to fail projects, certainly not with \n> my time. PostgreSQL+cygwin is a loser. If I am going to invest my time\n\n> and effort, I want it to be great.\n\nI agree a native Windows PostgreSQL would be better.\n\n> OK, a conscientious developer will explore options. They will install \n> various systems and try them. Given a cygwin+PostgreSQL system, MSSQL,\n\n> MySQL, Oracle, DB2, etc. MSSQL will win. MSSQL will win over Oracle \n> for cost and ease of setup. DB2 will lose, similarly to Oracle. MySQL \n> will lose because it sucks. PostgreSQL+cygwin will lose because it \n> will also suck.\n\nMySQL under Windows is based on Cygwin.\nMySQL sucks and has a 'huge\" success.\n\nSo let's do it in three moves :\n\n- first move : gain a large audience providing a stable release of\nCygwin + \nPostgreSQL. This could be done within days ... not weeks. This will be\nmuch \nbetter than MySQL.\n\n- second move : release a bundle of pgAdmin2 + PostgreSQL on \nhttp://www.postgresql.org, Bloodshed and other sites.\n\n- third move : based on 1.000.000 downloads and 100.000 users, feed the \ncommunity with more developpers, more ideas and more Windows native \nsource-code. So you wron't say \"I am alone\".\n\n\"Rome ne s'est pas faite en une nuit\".\nCheers,\nJean-michel\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Fri, 10 May 2002 15:53:57 +0300",
"msg_from": "\"Nikolay Mihaylov\" <pg@nmmm.nu>",
"msg_from_op": false,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> \n> Le Vendredi 10 Mai 2002 14:06, mlw a écrit :\n> > Sorry, I'm not interested in a cygwin version of PostgreSQL. I think it\n> > will do more harm than good. If we make it something that people want to\n> > try, and then they TRY it, they will find that is sucks, then we lose. It\n> > is very hard to remove the bad taste in ones mouth of a poor product. Think\n> > Yugo.\n> \n> Cygwin is very stable. Its community is relatively small but very actuve. We\n> could well provide a unique installer to \"hide\" Cygwin from the user. This\n> can be done compiling Cygwin.dll in a separate user space, as per discussion\n> with Dave Page.\n\nHere are the problems with cygwin:\n\n(1) GNU license issues.\n(2) Does not work well with anti-virus software\n(3) Since OS level copy-on-write is negated, process creation is much slower.\n(4) Since OS level copy-on-write is negated, memory that otherwise would not be\nallocated to the process is forced to be allocated when the parent process data\nis copied.\n\nAs a product manager, I would not commit to using a cygwin application in\nproduction. Do you know of any long-uptime systems using cygwin? PostgreSQL\nwould need to run for months. I would view it as a risk.\n\nLastly, a Windows program is expected to be a Windows program. Native paths\nneed to be used, like C:\\My Database, D:\\My Postgres, or something like that.\nNative tools must be used to manage it.\n\n\n> \n> > I have no patience with designed to fail projects, certainly not with my\n> MySQL under Windows is based on Cygwin.\n> MySQL sucks and has a 'huge\" success.\n\nDefine \"Success\"\n\n> \n> So let's do it in three moves :\n> \n> - first move : gain a large audience providing a stable release of Cygwin +\n> PostgreSQL. This could be done within days ... not weeks. This will be much\n> better than MySQL.\n\nNo interest in cygwin, sorry.\n",
"msg_date": "Fri, 10 May 2002 08:57:19 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "Hi everyone,\n\nJean-Michel POURE wrote:\n> \n<snip>\n> - second move : release a bundle of pgAdmin2 + PostgreSQL on\n> http://www.postgresql.org, Bloodshed and other sites.\n\nDon't know if it's useful to know, but a PostgreSQL project got setup on\nSourceforge recently (no CVS), pretty much just so PostgreSQL could be\nincluded in the \"Database Foundry\" on the Sourceforge site. :)\n\nhttp://www.sf.net/projects/pgsql\n\nAnd then I started a new contract and haven't had time to do anything\nwith it (oh well).\n\nRegards and best wishes,\n\nJustin Clift\n\n\n<snip>\n> \n> \"Rome ne s'est pas faite en une nuit\".\n> Cheers,\n> Jean-michel\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 10 May 2002 23:20:43 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "Dear Mark,\n\nAgreed except for paths (see below). But now that we agree, why not move to \nWindows in three steps:\n1) Release a minimal Cygwin + PostgreSQL installer,\n2) Have 100.000 downloads or more Windows developers,\n3) Work as a team on a Windows port.\n\nBy the way : Cygwin accepts both Windows AND Unix paths depending on \ninstallation options. Cygwin is able to understand C:\\program \nfiles\\postgresql\\var\\lib\\pgsql, /cygdrive/../var/lib/pgsql or simply \n/var/lib/pgsql.\n\nCheers,\nJean-Michel\n\n> Here are the problems with cygwin:\n> (1) GNU license issues.\n> (2) Does not work well with anti-virus software\n> (3) Since OS level copy-on-write is negated, process creation is much\n> slower. (4) Since OS level copy-on-write is negated, memory that otherwise\n> would not be allocated to the process is forced to be allocated when the\n> parent process data is copied.\n> As a product manager, I would not commit to using a cygwin application in\n> production. Do you know of any long-uptime systems using cygwin? PostgreSQL\n> would need to run for months. I would view it as a risk.\n> Lastly, a Windows program is expected to be a Windows program. Native paths\n> need to be used, like C:\\My Database, D:\\My Postgres, or something like\n> that. Native tools must be used to manage it.\n\n",
"msg_date": "Fri, 10 May 2002 15:26:02 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> \n> Dear Mark,\n> \n> Agreed except for paths (see below). But now that we agree, why not move to\n> Windows in three steps:\n> 1) Release a minimal Cygwin + PostgreSQL installer,\n> 2) Have 100.000 downloads or more Windows developpers,\n> 3) Work as a team on a Windows port.\n> \n> By the way : Cygwin accepts both Windows AND Unix paths depending on\n> installation options. Cygwin is able to understand C:\\program\n> files\\postgresql\\var\\lib\\pgsql, /cygdrive/../var/lib/pgsql or simply\n> /var/lib/pgsql.\n\nThe point you are missing is that a cygwin version of postgres is unacceptable.\nDoing an installer BEFORE committing to making the system excellent is putting\nthe cart before the horse.\n\nThe LAST thing we want is 100,000+ Windows users downloading PostgreSQL and\ngetting a cygwin version. \n\nThe first time it doesn't work because of anti-virus software, they'll call it\njunk. When they test performance and see that it sucks, they'll remove the\nsoftware.\n",
"msg_date": "Fri, 10 May 2002 09:33:37 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "Le Vendredi 10 Mai 2002 15:33, mlw a écrit :\n> The first time it doesn't work because of anti-virus software, they'll call\n> it junk. When they test performance and see that it sucks, they'll remove\n> the software.\n\nDear Mark,\n\nPostgreSQL will work well if cygwin.dll is compiled in a separate workspace \nand installed under C:/program files/postgresql and hidden from users. I \nagree it will not be able to serve a 50 TPS system.\n\nFurthermore : MySQL sucks, Windoze sucks and Microsoft is violating our \nprivate rights everyday. So if you care for freedom, we are going to release \nthis f****** Cygwin minimal installer.\n\nDon't you think, my friend? No one will complain about it. Do you see \ndemonstrations in the street against Microsoft? The answer is no.\n\nTherefore, I believe no one will complain about a minimal Cygwin + PostgreSQL \ninstaller. This will only be the beginning of a complete Windows port.\n\nWhich can also be expressed as :\n\"Il faut laisser le temps au temps\" (\"give time time\")\n\"Il n'y a pas le feu au lac\" (\"there is no rush\")\n\nCheers,\nJean-Michel\n\n",
"msg_date": "Fri, 10 May 2002 15:55:13 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> \n> Le Vendredi 10 Mai 2002 15:33, mlw a écrit :\n> > The first time it doesn't work because of anti-virus software, they'll call\n> > it junk. When they test performance and see that it sucks, they'll remove\n> > the software.\n> \n> Dear Mark,\n> \n> PostgreSQL will work well if cygwin.dll is compiled in a separate workspace\n> and installed under C:/program files/postgresql and hidden from users. I\n> agree it will not be able to serve a 50 TPS system.\n\nThen what is the point? \n\n> \n> Furthermore : MySQL sucks, Windoze sucks and Microsoft is violating our\n> private rights everyday. So if you care for freedom, we are going to release\n> this f****** Cygwin minimal installer.\n\nDon't get me wrong, I would love it if Windows were no longer around. I think a\ncygwin version of PostgreSQL will not further your objective. Windows users\nwill not be seeing the cream of the crop, they will be seeing a quick and dirty\nhack. In the words of Martin Luther King, Excellence is the best revenge.\n\nThe risk you are taking is this: If you rush out a cygwin version of PostgreSQL\nthere may be a lasting impression that PostgreSQL is of poor quality. \n\nHow will Windows developers create C language function extensions? Using cygwin\nand gcc as well? These guys can't do crap without VisualStudio.\n\nSeriously, don't do it. Please don't do it. If we want to make a serious\npresence in the Windows market, it is better to take our time and do it well or\nnot at all.\n\n> \n> Don't you think my friend? Noone will complain about it. Do you see\n> demonstrations in the street against Microsoft? The answer is no.\n> \n> Therefore, I believe noone will complain about a minimal Cygwin + PostgreSQL\n> installer. This will only be the beginning of a complete Windows port.\n\nI completely disagree. Let me ask you. Have you ever used Windows? I mean as\nyour primary system? 
Have you ever thrilled at getting something new for your\nWindows system? (Like you do with your current system.)\n\nI'm not ashamed to admit I used to love Windows. Before Linux was usable, and\nbefore FreeBSD was unencumbered, it was the best system a user could get for\nthe money. Windows was fun, especially if you had the SDK/DDK and knew how to\nuse it.\n\nThink about Linux and Wine. Linux users do not like Wine applications, no\nmatter how hidden they are. Franken-wine they are called, and fail quickly.\nLook at CorelDraw, a miserable failure. Cygwin on Windows is analogous to Wine\non Linux.\n\nA native PostgreSQL on Windows would rock the Windows world. It would kick\nMSSQL's butt for many applications. I think you underestimate Windows and\nWindows users if you think a cygwin version will satisfy them. The mistake is\nthinking that they are the ignorant unwashed masses that so many UNIX people\nseem to think they are.\n",
"msg_date": "Fri, 10 May 2002 10:22:57 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "> Then what is the point?\n\nWe need more information about Cygwin. See Jason Tishler's mail forwarded by \nDave Page.\n\n> Don't get me wrong, I would love it if Windows were no longer around. I\n> think a cygwin version of PostgreSQL will not further your objective.\n> Windows users will not be seeing the cream of the crop, they will be seeing\n> a quick and dirty hack. In the words of Martin Luther King, Excellence is\n> the best revenge.\n\nMicrosoft and MySQL conquered the world with converse ideas : releasing crap. \nBesides, Cygwin is a very good POSIX emulation. So PostgreSQL + Cygwin won't \nbe crap.\n\n> How will Windows developers create C language function extensions? Using\n> cygwin and gcc as well? These guys can't do crap without VisualStudio.\n\nDev-C++, because MinGW and Cygwin can coexist.\n\n> Seriously, don't do it. Please don't do it. If we want to make a serious\n> presence in the Windows market, it is better to take our time and do it\n> well or not at all.\n\nWell... This is where we disagree. We can do BOTH :\n- release a fast Cygwin + PostgreSQL installer,\n- port PostgreSQL to native Windows.\n\n> I completely disagree. Let me ask you. Have you ever used Windows? I mean\n> as your primary system? Have you ever thrilled at getting something new for\n> your Windows system? (Like you do with you current system.)\n\nI stopped using Windows a year ago for political reasons. I stopped using \nWindows three years ago in production for non-political reasons. I simply \ncould not work with such a bad system.\n\n> I'm not ashamed to admit I used to love Windows. Before Linux was usable,\n> and before FreeBSD was unencumbered, it was the best system a user could\n> get for the money. Windows was fun, especially if you had the SDK/DDK and\n> knew how to use it.\n\nSorry my friend, at that time I was sleeping with an Apple IIx and later a \nMacintosh. 
You must have had a hard time debugging Windows 1.0 or 2.0 (which \nI read on the list previously). You must have been very lucky to survive. Now \nI understand your point of view better ...\n\n> A native PostgreSQL on Windows would rock the Windows world. It would kick\n> MSSQL's butt for many applications. I think you underestimate Windows and\n> Windows users if you think a cygwin version will satisfy them. The mistake\n> is thinking that they are the ignorant unwashed masses that so many UNIX\n> people seem to think they are.\n\nAgreed. But it won't be ready before 6 months. Meanwhile I would like to come \nup with a suitable solution to release on http://www.bloodshed.net and feed \nhungry developers who are getting bored with MySQL.\n\nIn short, my opinion is \n\"Le mieux est l'ennemi du bien\" (\"the best is the enemy of the good\")\n\nCheers,\nJean-Michel\n\n",
"msg_date": "Fri, 10 May 2002 16:51:14 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgAdmin2 to be included in Dev-C++"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Without some buy-in from the core team, I'm not sure I am willing to spend my\n> time on it. If someone would be willing to fund the 100 or so man-hours\n> required to do it, then that would be a different story.\n\nYou are not going to get any buy-in with such ridiculous claims as that.\nIf the total cost of a native Windows port were O(100 hours), it'd have\nbeen done long since. Add a couple zeroes on the end and I'd start to\nbelieve that you might have some grasp of the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 12:00:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Threads vs processes - The Apache Way (Re: Path to PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Without some buy-in from the core team, I'm not sure I am willing to spend my\n> > time on it. If someone would be willing to fund the 100 or so man-hours\n> > required to do it, then that would be a different story.\n> \n> You are not going to get any buy-in with such ridiculous claims as that.\n> If the total cost of a native Windows port were O(100 hours), it'd have\n> been done long since. Add a couple zeroes on the end and I'd start to\n> believe that you might have some grasp of the problem.\n\nI was basing my estimates on a couple things. Please feel free to correct me\nwhere I'm wrong. Dann Corbit mentioned a number of, I think I recall, a couple\nhundred man-hours for their port.\n\nMy approach would be to find all the global variables setup by postmaster, not\nall the globals, mind you. Just the ones initialized by postmaster. Move them\nto a structure. That structure would be capable of being copied to the child\nprocess.\n\nIn the area where forking the postgres process happens, I would ifdef that area\nwith an \"HAS_FORK\" The Windows portion would use CreateProcess. The Windows\nversion of postgres would contact the postmaster and get its copy of the\nglobals struct. The code to transfer ownership of sockets, files, and memory\nwould have to be written also.\n\nI would only minimally change the back-end code, it would still be built with\ncygwin tools only directed not to link against the cygwin.dll. (The same goes\nfor the utilities as well.)\n\nA thin port layer could then be constructed by either implementing sysv/UNIX\nreplacements, or a more simple API as needed in the code, like your shared\nmemory and semaphore APIs.\n\nDoes that sound like an unworkable plan?\n",
"msg_date": "Fri, 10 May 2002 12:55:42 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Threads vs processes - The Apache Way (Re: Path to PostgreSQL"
}
] |
[
{
"msg_contents": "\n> > This also poses the biggest problem in terms of legacy compatibility.\n> > Perhaps the answer is to add a runtime config option (and default it\n> > to ANSI) and possibly deprecate the C escaping.\n> \n> While I wouldn't necessarily object to a runtime option, I do object\n> to both the other parts of your proposal ;-). Backslash escaping is\n> not broken; we aren't going to remove it or deprecate it, and I would\n> vote against making it non-default.\n\nWhen we are talking about the places where you need double escaping \n(once for parser, once for input function) to make it work, I would also \nsay that that is very cumbersome (not broken, since it is thus documented) :-) \nI would also default to strict ANSI, but not deprecate the escaping when set.\nAll imho of course.\n\nAndreas\n",
"msg_date": "Wed, 8 May 2002 18:47:46 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: non-standard escapes in string literals "
},
{
"msg_contents": "On Wed, May 08, 2002 at 06:47:46PM +0200, Zeugswetter Andreas SB SD wrote:\n> When we are talking about the places where you need double escaping \n> (once for parser, once for input function) to make it work, I would also \n> say that that is very cumbersome (not broken, since it is thus documented) :-) \n> I would also default to strict ANSI, but not depricate the escaping when set.\n> All imho of course.\n\nAs the original reporter of this issue, I am gratified to hear it\nacknowledged by the developers. Thanks! (I also apologize if I\nexaggerated the pain caused, as apparently not many other people\nhave been bitten by this specific problem. Well, it was painful for\nme. ;-) )\n\nI must say, though, that I remain bothered by the \"not broken\"\nattitude. There is an obvious standard for PostgreSQL to follow,\nyet it is non-compliant in utterly trivial ways, which provide\nmarginal or no benefits. Granted, changing long-standing defaults\nmay not be acceptable; but there is a big difference between, \"it is\nbroken but we just can't change it for compatibility reasons\", and,\n\"it is not broken\".\n\nIt is my experience that most other free software projects take\nstandards compliance more seriously than PostgreSQL, and my strong\nopinion that both the project and its users (not to mention the\nwhole SQL database industry, eventually) would benefit from better\nsupport for the SQL standard.\n\nOk, I've said my piece.\n\nAndrew\n",
"msg_date": "Wed, 8 May 2002 13:35:12 -0400",
"msg_from": "pimlott@idiomtech.com (Andrew Pimlott)",
"msg_from_op": false,
"msg_subject": "Re: non-standard escapes in string literals"
},
{
"msg_contents": "> It is my experience that most other free software projects take\n> standards compliance more seriously than PostgreSQL, and my strong\n> opinion that both the project and its users (not to mention the\n> whole SQL database industry, eventually) would benefit from better\n> support for the SQL standard.\n\nUmmm - I think you'd be hard pressed to find an open source db team more\ncommitted to standards compliance.\n\nChris\n\n",
"msg_date": "Fri, 10 May 2002 09:57:06 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: non-standard escapes in string literals"
},
{
"msg_contents": "Andrew Pimlott wrote:\n> On Wed, May 08, 2002 at 06:47:46PM +0200, Zeugswetter Andreas SB SD wrote:\n> > When we are talking about the places where you need double escaping \n> > (once for parser, once for input function) to make it work, I would also \n> > say that that is very cumbersome (not broken, since it is thus documented) :-) \n> > I would also default to strict ANSI, but not depricate the escaping when set.\n> > All imho of course.\n> \n> As the original reporter of this issue, I am gratified to hear it\n> acknowledged by the developers. Thanks! (I also apologize if I\n> exaggerated the pain caused, as apparently not many other people\n> have been bitten by this specific problem. Well, it was painful for\n> me. ;-) )\n> \n> I must say, though, that I remain bothered by the \"not broken\"\n> attitude. There is an obvious standard for PostgreSQL to follow,\n> yet it is non-compliant in utterly trivial ways, which provide\n> marginal or no benefits. Granted, changing long-standing defaults\n> may not be acceptible; but there is a big difference between, \"it is\n> broken but we just can't change it for compatibility reasons\", and,\n> \"it is not broken\".\n> \n> It is my experience that most other free software projects take\n> standards compliance more seriously than PostgreSQL, and my strong\n> opinion that both the project and its users (not to mention the\n> whole SQL database industry, eventually) would benefit from better\n> support for the SQL standard.\n> \n> Ok, I've said my peace.\n\nYes, these are good points. Our big problem is that we use backslash\nfor two things, one for escaping single quotes and for escaping standard\nC characters, like \\n. While we can use the standard-supported '' to\ninsert single quotes, what should we do with \\n? 
The problem is that\nswitching to the standard ANSI solution reduces our functionality.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 2 Jun 2002 23:25:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: non-standard escapes in string literals"
}
] |
[
{
"msg_contents": "In 7.2 and before it would work to do EXPLAIN in a plpgsql function:\n\nregression=# create function foo(int) returns int as '\nregression'# begin\nregression'# explain select * from tenk1 where unique1 = $1;\nregression'# return 1;\nregression'# end;' language plpgsql;\nCREATE\nregression=# select foo(1);\nNOTICE: QUERY PLAN:\n\nIndex Scan using tenk1_unique1 on tenk1 (cost=0.00..6.00 rows=1 width=148)\n\n foo\n-----\n 1\n(1 row)\n\nwhich was useful for examining the behavior of the planner with\nparameterized queries.\n\nIn current CVS tip this doesn't work anymore --- the EXPLAIN executes\njust fine, but plpgsql discards the result, and you never get to see it.\n\nNot sure what to do about this. Probably plpgsql should be tweaked to\ndo something with EXPLAIN, but what? Should it treat it like a SELECT?\nOr just issue the output as a NOTICE (seems like a step backwards\nthough).\n\nI'm also strongly tempted to try to make the SQL-language equivalent work:\n\nregression=# create function foo(int) returns setof text as\nregression-# 'explain select * from tenk1 where unique1 = $1;'\nregression-# language sql;\nERROR: function declared to return text, but final statement is not a SELECT\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 May 2002 13:00:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bad side-effect from making EXPLAIN return a select result"
},
{
"msg_contents": "Tom Lane wrote:\n > which was useful for examining the behavior of the planner with\n > parameterized queries.\n >\n > In current CVS tip this doesn't work anymore --- the EXPLAIN\n > executes just fine, but plpgsql discards the result, and you never\n > get to see it.\n >\n > Not sure what to do about this. Probably plpgsql should be tweaked\n > to do something with EXPLAIN, but what? Should it treat it like a\n > SELECT? Or just issue the output as a NOTICE (seems like a step\n > backwards though).\n >\n > I'm also strongly tempted to try to make the SQL-language equivalent\n > work:\n >\n > regression=# create function foo(int) returns setof text as regression-#\n > 'explain select * from tenk1 where unique1 = $1;' regression-#\n > language sql; ERROR: function declared to return text, but final\n > statement is not a SELECT\n\nIf EXPLAIN was treated as a select, and modified to use the\nExprMultipleResult API, then the SRF stuff would allow you to get output\nfrom a SQL function (for that matter a SQL function could do it today\nsince it's only one result column).\n\nPLpgSQL currently doesn't seem to have a way to return setof anything\n(although it can be defined to), but I was planning to look at that\nafter finishing SRFs. E.g.\n\nCREATE TABLE foo (fooid int, foosubid int, fooname text, primary\nkey(fooid,foosubid));\nINSERT INTO foo VALUES(1,1,'Joe');\nINSERT INTO foo VALUES(1,2,'Ed');\nINSERT INTO foo VALUES(2,1,'Mary');\nCREATE OR REPLACE FUNCTION testplpgsql() RETURNS setof int AS 'DECLARE\nfooint int; BEGIN SELECT fooid into fooint FROM foo; RETURN fooint;\nEND;' LANGUAGE 'plpgsql';\n\ntest=# select testplpgsql();\t<== old style API\nCancel request sent\t<== seems to hang, never returns anything, ctl-c\nWARNING: Error occurred while executing PL/pgSQL function testplpgsql\nWARNING: line 1 at select into variables\nERROR: Query was cancelled.\ntest=#\n\nThis never even returns the first row. 
Am I missing something on this,\nor did plpgsql never support setof results? If so, how?\n\nJoe\n\n\n",
"msg_date": "Wed, 08 May 2002 18:27:06 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad side-effect from making EXPLAIN return a select"
}
] |
[
{
"msg_contents": "What happened to the -v option on pg_ctl? We use it, and I cannot find it\ndocumented anywhere that this option went away. What is the\nalternative now?\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nWhere's my....bus?\n\n",
"msg_date": "Wed, 8 May 2002 14:17:53 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "pg_ctl -v"
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> What happened to the -v option on pg_ctl?\n\nThere is not and never has been any such option.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 01:18:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl -v "
},
{
"msg_contents": "Sorry. I think it was a hack here and that person is now gone. My\napologies.\n\nL.\nOn Thu, 9 May 2002, Tom Lane wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> > What happened to the -v option on pg_ctl?\n> \n> There is not and never has been any such option.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nWhere's my....bus?\n\n",
"msg_date": "Thu, 9 May 2002 08:56:15 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ctl -v "
}
] |
[
{
"msg_contents": "I'm using 7.2.1 on a Debian system.\n\nIf I do an insert or update or delete on a table, postgres tells me\nhow many rows were affected.\n\nUsing the following input to psql, I got the results:\n\nINSERT 0 0\nUPDATE 0\nDELETE 0\n\nIs this expected? The principle of least surprise suggests to me that\nregardless of the query being rewritten, there is some number of\ntuples being affected, and it would thus still be appropriate to\nreturn that number.\n\nI realize it's not technically a \"bug\", since there's no particular\nguarantee that someone specified existing records or whatnot, but as\nan additional fourth-string check in some web code I put together, I\nwas checking to see if stuff was returned or updated (since the system\nshould only be allowing changes to things that exist) as a\nheuristic to guard against 1) bugs, and 2) attempts to maliciously\nsubvert the public interface.\n\nI can find no mention of this issue in the documentation regarding the\nrule system. 
Anyone have any guidance?\n\nMike.\n\n-----8<-----\ndrop sequence member_id_seq;\ncreate sequence member_id_seq;\n\ndrop table member;\ncreate table member (\n id integer not null constraint member_id primary key default nextval('member_id_seq'),\n created timestamp not null default now (),\n modified timestamp not null default now (),\n deleted timestamp default null,\n email character varying (128) not null constraint member_email unique,\n password character varying (128) not null\n);\n\ndrop view members;\ncreate view members as select * from member m1 where m1.deleted is null;\n\ndrop rule members_delete;\ncreate rule members_delete as on delete to members do instead update member set deleted = current_timestamp;\n\ndrop rule members_insert;\ncreate rule members_insert as on insert to members do instead insert into member (email, password) values (new.email, new.password);\n\ndrop rule members_update;\ncreate rule members_update as on update to members do instead update member set email = new.email, password = new.password;\n\ninsert into members (email, password) values ('mdorman@wombat.org','pinochle');\n\nupdate members set email='mdorman@lemur.org', password='wombat' where id = 1;\n\ndelete from members where id = 1;\n----->8-----\n",
"msg_date": "08 May 2002 19:08:01 -0400",
"msg_from": "Michael Alan Dorman <mdorman@debian.org>",
"msg_from_op": true,
"msg_subject": "Queries using rules show no rows modified?"
},
{
"msg_contents": "Michael Alan Dorman wrote:\n> \n> I'm using 7.2.1 on a Debian system.\n> \n> If I do an insert or update or delete on a table, postgres tells me\n> how many rows were affected.\n> \n> Using the following input to psql, I got the results:\n> \n> INSERT 0 0\n> UPDATE 0\n> DELETE 0\n> \n> Is this expected? The principle of least suprise suggests to me that\n> regardless of the query being rewritten, there is some number of\n> tuples being affected, and it would thus still be appropriate to\n> return that number.\n\nYou are right. It's a bug introduced in 7.2.\nPlease check the thread [GENERAL]([HACKERS]) \nUsing views and MS access via odbc.\nIf there's no objection, I would commit the patch\nin the thread to both 7.2-stable and the current.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Thu, 09 May 2002 09:16:12 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> If there's no objection, I would commit the patch\n> in the thread to both 7.2-stable and the current.\n\nLast I checked, I objected to your solution and you objected to mine\n... so I think it's on hold until we get some more votes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 01:24:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Last I checked, I objected to your solution and you objected to mine\n> ... so I think it's on hold until we get some more votes.\n\nWell, If I'm reading this code from DBD::Pg's dbdimp.c correctly, I\nthink that the perl module, at least, feels that the number is much\nmore important than the actual command that is returned:\n\n if (PGRES_TUPLES_OK == status) {\n [...]\n } else if (PGRES_COMMAND_OK == status) {\n /* non-select statement */\n if (! strncmp(cmdStatus, \"DELETE\", 6) || ! strncmp(cmdStatus, \"INSERT\", 6) || ! strncmp(cmdStatus, \"UPDATE\", 6)) {\n ret = atoi(cmdTuples);\n } else {\n ret = -1;\n }\n\nIt appears that while the implementation does look to make sure the\nreturn string is recognizable, it doesn't care too much beyond that\nwhich one it is---not suprising as that string is, as far as the DBI\ninterface is concerned, just \"extra information\" that has no defined\ninterface to get back out to the user. More important, at least from\nthe standpoint of a user of the module seems to be that the cmdTuples\n(gotten from PQcmdTuples) represents number affected so it can be\nreturned.\n\nIn fact, now that I look at it, this change has in fact broken the\nDBD::Pg interface with respect to the DBI when used in the presence of\nrules, because the DBI spec states that it will either return the\nnumber of tuples affected or -1 if that is unknown, rather than 0,\nwhich breaks as a result of this change.\n\nI guess there's an argument to be made as to whether PostgreSQL\nprovides any guarantees about this number being correct or even valid,\nbut the fact that the library interface makes it available, and I see\nnothing in the documentation of the function that suggests that that\nnumber is unreliable suggests that it is not an error to depend on it.\n\nSo, If I understood the proposals correctly, I think that means that\nthis implementation argues for, or at least would work well with,\nHiroshi's 
solution, since yours, Tom, would return a false zero in\ncertain (perhaps rare) situations, arguably losing information that\nthe perl module, at least, could use, and the library purports to make\navailable, in order to preserve information it does not.\n\nI guess there is one other possibility, though I don't know how\nradical it would be in either implementation or effects: return the\nempty string from PQcmdTuples in this situation. It serves as\nsomething of an acknowledgement that what went on was not necessarily\nfish or fowl, while still being, from my reading of the docs, a valid\nreturn. The perl module certainly regards it as one, albeit one that\ntransmits precious little information. Well-written interfaces should\nalready be able to cope with it, given that it is documented as a\npossibility in the docs, right?\n\nMike.\n",
"msg_date": "09 May 2002 09:55:48 -0400",
"msg_from": "Michael Alan Dorman <mdorman@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Michael Alan Dorman <mdorman@debian.org> writes:\n> So, If I understood the proposals correctly, I think that means that\n> this implementation argues for, or at least would work well with,\n> Hiroshi's solution, since yours, Tom, would return a false zero in\n> certain (perhaps rare) situations,\n\nIMHO Hiroshi's solution would return false information in more cases\nthan mine.\n\nThe basic argument in favor of a patch like this is that if a rule\nreplaces (DO INSTEAD) a command with another command of the same general\ntype, it is useful to return the tag for the replacement command not the\noriginal. I agree with that. I do not agree with the claim that we\nshould return a tag from the underlying implementation when a rule\nrewrites a query into a form totally unrecognizable to the client.\nConsider again the example of transforming an UPDATE on a view into\nan INSERT on some underlying table --- but let's reverse it now and\nsuppose it's the other way, the client sends INSERT and the rule\nreplaces it with an UPDATE. If the client is expecting to get back\n\"INSERT m n\" and actually gets back \"UPDATE n\", isn't that client\nlikely to break?\n\nAnother issue is that the whole thing falls down if the rewriting\ngenerates more than one query; both Hiroshi's proposal and mine will\nnot return any substitute tag then. This seems rather restrictive.\nMaybe we could have behavior like this: if the original command is\nreplaced, then use the tag from the last substituted command of the\nsame class (eg, if you rewrite an UPDATE into an INSERT and an UPDATE,\nyou get the tag from the UPDATE). If there is *no* substitute command\nof the same class, I still believe that returning \"UPDATE 0\" is correct\nbehavior. 
You sent an update, zero tuples were updated, end of story.\nThere is no scope in this API to tell you about how many tuples might\nhave been inserted or deleted.\n\nNote that as of CVS tip, the firing order of rules is predictable,\nso the rule author can control which substituted command is \"the last\none\". Without this I don't think that the above would work, but with\nit, it seems like a moderately clean answer. Moreover it's at least\nsomewhat compatible with the pre-7.2.1 behavior --- where you got the\ntag from the last command *executed* regardless of any other\nconsiderations. That was definitely broken.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 10:43:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> The basic argument in favor of a patch like this is that if a rule\n> replaces (DO INSTEAD) a command with another command of the same\n> general type, it is useful to return the tag for the replacement\n> command not the original. I agree with that.\n\nI would argue that the argument in favor of a patch is that there's no\ndocumentation anywhere that behavior changed, or that PQcmdTuples will\nnot return the expected result in the presence of rules. :-)\n\nIs the change behaviorou propose implementable as a patch to 7.2.1?\n\n> If the client is expecting to get back \"INSERT m n\" and actually\n> gets back \"UPDATE n\", isn't that client likely to break?\n\nPerhaps. How many clients are checking that the string returned\nmatches the query it sent?\n\nI've checked DBD::Pg, it doesn't. I've checked psycopg, it doesn't,\nthough it looks like its handling of the value might be a bit bogus.\necpg doesn't, though it looks like it might choke on an empty string.\nPHP doesn't. QT3 doesn't. PoPY (another Python interface) doesn't.\nThe TCL library doesn't even look at the return, it just passes it\nback, so I suppose there might be applications doing a direct look.\nThe python lib included with postgresql doesn't. In fact, the idiom\nis either (in pseudocode):\n\n if (temp = PQcmdTuples (result)) {\n numTuples = atoi (temp);\n } else {\n numTuples = some other arbitrary value;\n }\n\nor:\n\n numTuples = atoi (PQcmdTuples (result));\n\nSo, no, my *very* unscientific and non-comprehensive survey suggests\nthat your fears are mostly groundless. 
But I haven't seen a single\ninterface that *is* depending on that being correct, but many of them\nreturn misleading results if PQcmdTuples does.\n\nWhich is, if I haven't hammered this enough, not mentioned anywhere in\nthe documentation.\n \n> Another issue is that the whole thing falls down if the rewriting\n> generates more than one query; both Hiroshi's proposal and mine will\n> not return any substitute tag then. This seems rather restrictive.\n\nIf, when you say, \"will not return any substitute tag then.\", you mean\nthat, as an end result, PQcmdTuples would return an empty string, well,\nthat seems reasonable---it keeps the DB from returning bogus info, and\nan empty string returned from PQcmdTuples _is_ documented as a valid\nresponse, and it looks like most interfaces would handle it just fine\n(except maybe for ecpg, which I would argue either has a bug or I'm\nnot reading right).\n\nI guess there's the argument to be made that any overly-zealous\ninterface that might choke on getting a different tag back might also\nchoke on getting no tag back. 
You sent an update, zero tuples\n> were updated, end of story.\n\nAs long as you document that PQcmdTuples cannot be relied on when\nusing rules, since the rules might change the query sufficiently to\nmake it unrecognizable, that's probably OK, though it'll require\nsignificant changes to just about all interface libraries.\n\n> Note that as of CVS tip, the firing order of rules is predictable,\n> so the rule author can control which substituted command is \"the\n> last one\". Without this I don't think that the above would work,\n> but with it, it seems like a moderately clean answer. Moreover it's\n> at least somewhat compatible with the pre-7.2.1 behavior --- where\n> you got the tag from the last command *executed* regardless of any\n> other considerations. That was definitely broken.\n\nSo should I interpret these references to CVS tip as suggesting that\nthe fix for this change in behavior is not going to be seen until 7.3,\nor just that a most-complete fix that tries to deal with multi-rule\ninvocations would have to wait for 7.3, but that a fix for the simpler\n'do instead' case could show up in a 7.2.X release?\n\nBecause it seems to me that if we're not going to see a release with a\nfix for this change in behavior, we need to make sure that maintainers\nof all interfaces know that all bets are off regarding PQcmdTuples in\nthe (I believe undetectable) presence of rules so they'll make no\neffort to use it.\n\nMike.\n",
"msg_date": "09 May 2002 12:13:22 -0400",
"msg_from": "Michael Alan Dorman <mdorman@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Michael Alan Dorman <mdorman@debian.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> If the client is expecting to get back \"INSERT m n\" and actually\n>> gets back \"UPDATE n\", isn't that client likely to break?\n\n> Perhaps. How many clients are checking that the string returned\n> matches the query it sent?\n\n> I've checked DBD::Pg, it doesn't.\n\nYou are confusing client behavior (by which I meant application)\nwith library behavior. In libpq terms, an application that's sent\nan INSERT command might expect to be able to retrieve an OID with\nPQoidValue(). Whether the library avoids core-dumping doesn't mean\nthat the calling app will behave sanely.\n\n> I would argue that the argument in favor of a patch is that there's no\n> documentation anywhere that behavior changed, or that PQcmdTuples will\n> not return the expected result in the presence of rules. :-)\n\nThe motivation for making a change was to try to *preserve* pre-7.2\nbehavior in the case of INSERTs, where formerly you got back an INSERT\ntag even in the presence of ON INSERT DO not-INSTEAD rules. 7.2 broke\nthat; 7.2.1 fixed that case but changed the behavior for INSTEAD cases.\nWhat we're realizing now is that we need an actually designed behavior,\nrather than the implementation artifact that happened to yield pleasant\nresults most of the time before 7.2.\n\nI'm arguing that the \"designed behavior\" ought to include the\nstipulation that the tag you get back will match the command you sent.\nI think that anything else is more likely to confuse clients than help\nthem.\n\n> Which is, if I haven't hammered this enough, not mentioned anywhere in\n> the documentation.\n\nMainly because no one ever designed the behavior; the pre-7.2\nimplementation didn't really think about what should happen.\n\n> I guess there's the argument to be made that any overly-zealous\n> interface that might choke on getting a different tag back might also\n> choke on getting no tag back. 
But, again, I don't see any doing any\n> of this. And they *all* seem to expect PQcmdTuples to either return\n> legitimate data or nothing at all.\n\nNo, you're still missing the point. PQcmdTuples isn't going to dump\ncore, because it has no context about what was expected: it sees a tag\nand interprets it as best it can, without any idea about what the\ncalling app might be expecting. What we need to think about here is\nwhat linkage an *application* can reasonably expect between the command\nit sends and the tag it gets back (and, hence, the info it can expect to\nretrieve from the tag).\n\n> As long as you document that PQcmdTuples cannot be relied on when\n> using rules, since the rules might change the query sufficiently to\n> make it unrecognizable, that's probably OK, though it'll require\n> significant changes to just about all interface libraries.\n\nOne more time: there will be zero change in any interface library,\nno matter what we do here. The libraries operate at too low a level\nto be affected; they have no idea what command you sent. I'm not even\nconvinced that PQcmdTuples is where to document the issue --- it seems\nto me to be a rule question, instead.\n\n> So should I interpret these references to CVS tip as suggesting that\n> the fix for this change in behavior is not going to be seen until 7.3,\n> or just that a most-complete fix that tries to deal with multi-rule\n> invocations would have to wait for 7.3, but that a fix for the simpler\n> 'do instead' case could show up in a 7.2.X release?\n\nUntil we've decided what *should* happen, it's premature to discuss\nwhether we can fix it correctly in 7.2.X or should install a quick-hack\nchange instead. I'd prefer to fix it correctly but we must not let\nourselves be seduced by a quick hack into not thinking about what the\nbehavior really ideally ought to be. We've done that once too often\nalready ;-)\n\nFWIW, I'm not at all sure that there will *be* any 7.2.2 release\nbefore 7.3. 
There hasn't so far been enough volume of fixes to\njustify one (no, this problem doesn't justify one IMHO...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 13:35:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> You are confusing client behavior (by which I meant application)\n> with library behavior. In libpq terms, an application that's sent\n> an INSERT command might expect to be able to retrieve an OID with\n> PQoidValue(). Whether the library avoids core-dumping doesn't mean\n> that the calling app will behave sanely.\n\nNo, Tom, I'm not confusing them. I'm in no way concerned with\nPQcmdTuple coredumping because the published interface specifies that\nit can return a null string if it finds it necessary, which implies\nthat somewhere down there it's doing some decent error handling to\nfigure out if it's gotten something back it can make sense of and\nacting appropriately.\n\nYou brought up core dumps. My concern has been exclusively with the\npotential change in behavior this can cause in applications.\n\nSo I've been doing is going and downloading the source to, and looking\nat the behavior of, some of the libraries that some---probably many,\nmaybe even most---clients are using, those for perl and python and\nphp, and I am finding that most of them do not even expose the\ninformation whose (mis-)interpretation concerns you.\n\nSo, for those interfaces, at least, there was no problem to be fixed\nin the first place.\n\nStill, you don't have to have something actively breaking to warrant\nfixing a bug, so there's no reason to have not made the change that\nwas made.\n\nThe problem is that, at the same time, I am finding that the change to\npostgresql 7.2 may make application code using those interfaces begin\nto operate in new and different ways because, although they aren't\npaying attention to the string, which you are concerned with, they\n*are* paying attention to the numbers.\n\nMany of those interfaces, where they used to return 1 or 10 or 5000 or\n6432456, will now be returning 0, which thanks to the great C\ntradition, is often interpreted to mean \"false\", which may lead an\napplication to question 
why \"nothing happened.\" As mine did.\n\nAnd this isn't necessarily application programmers making bad choices;\nthe Perl interface, at least, documents the fact that it returns the\nnumber of rows affected or -1 if that is unknowable---but the change\nin behavior leads the perl interface to think it knows, when in fact\nit doesn't.\n\nIf I knew java better, I'd check the JDBC driver. I mean, imagine:\nPerl, python, php and java, all with undocumented unpredictable\nbehavior in the presence of 'update do instead' rules. Break all four\nand you've just created a potential problem for everyone who does web\ndevelopment.\n\nThat, I think, is one of the more egregious changes in behavior I've\nseen in the few years I've been following PostgreSQL, and yet not only\nis there any documentation, I feel like I'm having to fight to even\nget it acknowledge that it is the bigger problem than the blasted\nstrings not matching because it affects a heck of a lot more stuff in\na much more direct manner.\n\nStill, handle this however you want. I'll go fix the Perl driver to\npretend PQcmdTuples doesn't exist, since it can't be trusted to\ndeliver reliable information, and just have it return -1, and *my*\napps will be OK. Maybe some months down the road when 7.3 finally\nstraggles into view there will be a solution. Hopefully no one will\nhave been burned.\n\nAnyway, I'm done beating this dead horse, since the display is\nobviously bothering people.\n\nMike.\n",
"msg_date": "09 May 2002 14:37:47 -0400",
"msg_from": "Michael Alan Dorman <mdorman@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Michael Alan Dorman <mdorman@debian.org> writes:\n> > So, If I understood the proposals correctly, I think that means that\n> > this implementation argues for, or at least would work well with,\n> > Hiroshi's solution, since yours, Tom, would return a false zero in\n> > certain (perhaps rare) situations,\n> \n> IMHO Hiroshi's solution would return false information in more cases\n> than mine.\n\n\nMy solution never returns false information as to\npatched cases though the returned result may be\ndifferent from the one clients expect.\nProbably your solution doesn't return false\ninformation either if 'UPDATE 0' means UPDATE 0\nbut unknown INSERT/DELETEs. But few(maybe no ?)\nclients seem to think of it and what could clients\ndo with such infos in the first place ? \nOf cource it is nice to have a complete solution\nimmediately but it doesn't seem easy. My patch is\nonly a makeshift solution but fixes the most\nsiginificant case(typical updatable views). \n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Fri, 10 May 2002 10:16:50 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Of cource it is nice to have a complete solution\n> immediately but it doesn't seem easy. My patch is\n> only a makeshift solution but fixes the most\n> siginificant case(typical updatable views). \n\nI would like to devise a complete solution *before* we consider\ninstalling makeshift solutions (which will institutionalize wrong\nbehavior).\n\nThere seems to be some feeling here that in the presence of rewrites\nyou only want to know that \"something happened\". Are you suggesting\nthat the returned tuple count should be the sum of all counts from\ninsert, update, and delete actions that happened as a result of the\nquery? We could certainly implement that, but it does not seem like\na good idea to me.\n\nI'm also concerned about having an understandable definition for the\nOID returned for an INSERT query --- if there are additional INSERTs\ntriggered by rules, does that mean you don't get to see the OID assigned\nto the single row you tried to insert? You'll definitely get push-back\nif you propose that. But if we add up all the actions for the generated\nqueries, we are quite likely to be returning an OID along with an insert\ncount greater than one --- which is certainly confusing, as well as\ncontrary to the existing documentation about how it works.\n\nLet's please quit worrying about \"can we install a hack today\" and\ninstead try to figure out what a sensible behavior is. I don't think\nit's likely to be hard to implement anything we might come up with,\nconsidering how tiny this API is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 21:27:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Of cource it is nice to have a complete solution\n> > immediately but it doesn't seem easy. My patch is\n> > only a makeshift solution but fixes the most\n> > siginificant case(typical updatable views).\n> \n> I would like to devise a complete solution *before* we consider\n> installing makeshift solutions (which will institutionalize wrong\n> behavior).\n> \n> There seems to be some feeling here that in the presence of rewrites\n> you only want to know that \"something happened\". Are you suggesting\n> that the returned tuple count should be the sum of all counts from\n> insert, update, and delete actions that happened as a result of the\n> query? We could certainly implement that, but it does not seem like\n> a good idea to me.\n\nWhat should the backends return for complicated rewrites ?\nAnd how should/could clients handle the results ?\nIt doesn't seem easy to me and it seems a flaw of rule\nsystem. Honestly I don't think that the psqlodbc driver\ncan guarantee to handle such cases properly.\nHowever both Ron's case and Michael's one are ordinary\nupdatable views. If we can't handle the case properly, \nwe could never recommend users to use (updatable) views.\n \n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Fri, 10 May 2002 11:35:59 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "On Fri, 2002-05-10 at 06:27, Tom Lane wrote:\n> I'm also concerned about having an understandable definition for the\n> OID returned for an INSERT query --- if there are additional INSERTs\n> triggered by rules, does that mean you don't get to see the OID assigned\n> to the single row you tried to insert? \n\nAt least when there was actually no insert you don't\n\nand if there actually was more than 1 insert then INSERT 0 N seems quite\nreasonable to me.\n\nIt may even be that returning a concatenation of results would be\nacceptable for current libs\n\nINSERT OID 1 INSERT 0 3 UPDATE 2 DELETE 2\n\n> You'll definitely get push-back\n> if you propose that. But if we add up all the actions for the generated\n> queries, we are quite likely to be returning an OID along with an insert\n> count greater than one --- which is certainly confusing, as well as\n> contrary to the existing documentation about how it works.\n> \n> Let's please quit worrying about \"can we install a hack today\" and\n> instead try to figure out what a sensible behavior is.\n\nThe problem seems to be that recent changes broke updatable views for a\nfew users. So have these basic options:\n\n1. revert the changes until we have a consensus on doing the right thing\n (seems best to me)\n2. change clients (client libraries) for 7.2 cycle at least\n3. not revert but install a hack today so that it seems like things\n still work ;)\n4. figure out correct behaviour and do that for 7.2.2\n5. 
do nothing and tell users to keep themselves busy with other things\n until there is consensus about new behaviour.\n\nOption 5 seems to be the worst, as it leaves users in a state with no clear\nview of what is going to happen - we have just changed one arguably\nbroken behaviour for a new one and are probably going to change it again\nsoon when we figure out what the right behaviour should be.\n\n> I don't think\n> it's likely to be hard to implement anything we might come up with,\n> considering how tiny this API is.\n\nThe sensible behaviour for updatable views would be to report how many\nrows visible through this view were changed, but this can be hard to do\nin a generic way.\n\n-----------------\nHannu\n\n\n",
"msg_date": "10 May 2002 11:32:11 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Tom Lane wrote:\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Of cource it is nice to have a complete solution\n> > immediately but it doesn't seem easy. My patch is\n> > only a makeshift solution but fixes the most\n> > siginificant case(typical updatable views).\n>\n> I would like to devise a complete solution *before* we consider\n> installing makeshift solutions (which will institutionalize wrong\n> behavior).\n>\n> There seems to be some feeling here that in the presence of rewrites\n> you only want to know that \"something happened\". Are you suggesting\n> that the returned tuple count should be the sum of all counts from\n> insert, update, and delete actions that happened as a result of the\n> query? We could certainly implement that, but it does not seem like\n> a good idea to me.\n\n IMHO the answer should only be a number if the rewritten\n querytree list consists of one query of the same command\n type. everything else has to lead into \"unknown\".\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 10 May 2002 06:19:16 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> Tom Lane wrote:\n> >\n> > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > > Of cource it is nice to have a complete solution\n> > > immediately but it doesn't seem easy. My patch is\n> > > only a makeshift solution but fixes the most\n> > > siginificant case(typical updatable views).\n> >\n> > I would like to devise a complete solution *before* we consider\n> > installing makeshift solutions (which will institutionalize wrong\n> > behavior).\n> >\n> > There seems to be some feeling here that in the presence of rewrites\n> > you only want to know that \"something happened\". Are you suggesting\n> > that the returned tuple count should be the sum of all counts from\n> > insert, update, and delete actions that happened as a result of the\n> > query? We could certainly implement that, but it does not seem like\n> > a good idea to me.\n>\n> What should the backends return for complicated rewrites ?\n> And how should/could clients handle the results ?\n> It doesn't seem easy to me and it seems a flaw of rule\n> system. Honestly I don't think that the psqlodbc driver\n> can guarantee to handle such cases properly.\n> However both Ron's case and Michael's one are ordinary\n> updatable views. If we can't handle the case properly,\n> we could never recommend users to use (updatable) views.\n\n The fact that our rule system is that powerful that you can\n have multi-action rules is a flaw ... awe.\n\n Do you think that if a trigger suppresses your original\n insert, but instead does 2 inserts somewhere else and another\n update and delete here and there, then 0 is the correct\n answer to the client? Well, that's what happens now, so it\n should irritate your client in exactly the same way. So not\n only our rule system, but our trigger system has a flaw too.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 10 May 2002 06:29:15 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> What should the backends return for complicated rewrites ?\n\nWell, given that we have only two or three fields to work in,\nit obviously has to be a very simplified view of what happened.\nBut we have to define *something*.\n\n> And how should/could clients handle the results ?\n> It doesn't seem easy to me and it seems a flaw of rule\n> system.\n\nNo, the problem is that the command tag API was designed without any\nthought for rule rewriting. But I don't think it's worth revising\nthat API completely. Even if we did, we'd still have to define what\nbehavior would be seen by clients that use the existing PQcmdTuples,\netc, calls; so we'd still have to solve these same issues.\n\nCome on, guys, work with me a little here. I've thrown out several\nalternative suggestions already, and all I've gotten from either of\nyou is refusal to think about the problem.\n\nI was thinking last night that it might help to break down the issue a\nlittle bit. We have either two or three result fields to think about:\nthe tag name, the tuple count, and in the case of INSERT the inserted\nrow OID. Let's consider each one independently.\n\n1. The tag name: AFAICS, this ought *always* to match the type of the\noriginal command submitted by the client. Doing otherwise could confuse\nclients that are submitting multiple commands per query string.\nBesides, the only possible downside from making this requirement is that\nwe couldn't send back an insertion OID when the original command was\nan update or delete. How likely is it that a client would expect to\nbe able to get an insertion OID from such a command?\n\n2. The inserted row OID: per above, will be supplied only if the\noriginal command was an INSERT. If the original insert command is\nnot removed (no INSTEAD rule), then I think this result should clearly\ncome from the execution of the original command, regardless of any\nadditional INSERTs added by rules. 
If the original command is removed\nby INSTEAD, then we can distinguish three sub-cases:\n a. No INSERTs in rewriter output: easy, we must return 0.\n b. Exactly one INSERT in rewriter output: pretty easy to agree that\n we should return this command's result.\n c: More than one INSERT in rewriter output: we have a couple of\n possibilities here. It'd be reasonable to directly use the\n result of the last INSERT, or we could total the results of\n all the INSERTs (ie, if taken together they insert a sum total\n of one row, return that row OID; else return 0). Maybe there\n are other possible behaviors. Any thoughts?\n\n3. The tuple count: this seems the most contentious issue. Again,\nif there is no INSTEAD rule I'd be strongly inclined to say we\nshould just return the count from the original command, ignoring any\ncommands added by rules. If there is an INSTEAD, we've discussed\nseveral possibilities: use result of last command in the rewritten\nseries, use result of last command of same type as original command,\nsum up the results of all the rewritten commands, maybe some others\nthat I forgot.\n\nGiven Michael's concern about being able to \"tell that something\nhappened\", I'm inclined to go with the summing-up behavior in the\nINSTEAD cases. This would lead to the following boiled-down behavior:\n\nA. If original command is executed (no INSTEAD), return its tag as-is,\nregardless of commands added by rules.\n\nB. If original command is not executed, then return its tag name\nplus required fields defined as follows: tuple count is sum of tuple\ncounts of all replacement commands. For an INSERT, if the replacement\ncommands taken together inserted a grand total of exactly one tuple,\nreturn that tuple's OID; else return 0.\n\nThis is not completely consistent in pathological cases: you could get\na tuple OID returned even when the returned tuple count is greater\nthan one, which is not a possible case currently. 
(This would happen\ngiven a rewrite consisting of a single-row INSERT plus additional\nupdate or delete actions that affect some rows.) But that seems\npretty oddball. In all the simple cases I think this proposal gives\nreasonable behavior.\n\nA tighter definition for case B would use the sum of the tuple counts\nof only the replacement actions that are of the same type as the\noriginal command. This would eliminate the possible inconsistency\nbetween tuple count and insert OID results, and it's arguably saner\nthan the above proposal: \"if it says UPDATE 4, that should mean that\nfour rows were updated, not that something else happened to four rows\".\nBut it would not meet Michael's concern about using PQcmdTuples to\ntell that \"something happened\". I could live with either definition.\n\nThoughts, different proposals, alternative ways of breaking down\nthe problem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 10:51:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> IMHO the answer should only be a number if the rewritten\n> querytree list consists of one query of the same command\n> type. everything else has to lead into \"unknown\".\n\nI think you can easily generalize that to the statement that the\nresult should be the sum of the rewritten operations of the same\ntype as the original query; requiring there to be exactly one\nseems overly restrictive.\n\nMichael seems to feel that the tuple count should be nonzero if any\nof the replacement operations did anything at all. This does not make\na lot of sense at the command tag level (\"UPDATE 4\" might not mean\nthat 4 tuples were updated) but if you look at the definition of\nPQcmdTuples (\"returns the number of rows affected by the SQL command\")\nit's not so unreasonable. And I can see the point of wanting to\nknow whether anything happened.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 10:57:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> The problem seems to be that recent changes broke updatable views for a\n> few users. So have these basic options:\n\n> 1. revert the changes until we have a consensus on doing the right thing\n> (seems best to me)\n\nReverting is not an option, unless you want to also revert 7.2's change\nof execution order of ON INSERT rules; which I would resist as the new\nbehavior is clearly better. But given that, both 7.2 and 7.2.1 have\ncommand-tag behavior that is making users unhappy ... in different ways.\n\nI think we should first concentrate on defining what we think the right\nbehavior should be in the long term. Only after we know that can we\ndevise a plan for getting there. I believe all the concrete suggestions\nthat have been made so far could be implemented straight away in 7.2.2\n(if there is a 7.2.2) ... but we might settle on something that\nrepresents a bigger change with more backwards-compatibility problems.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 11:03:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Hiroshi Inoue wrote:\n> > Tom Lane wrote:\n> > >\n> >\n> > What should the backends return for complicated rewrites ?\n> > And how should/could clients handle the results ?\n> > It doesn't seem easy to me and it seems a flaw of rule\n> > system. Honestly I don't think that the psqlodbc driver\n> > can guarantee to handle such cases properly.\n> > However both Ron's case and Michael's one are ordinary\n> > updatable views. If we can't handle the case properly,\n> > we could never recommend users to use (updatable) views.\n> \n> The fact that our rule system is that powerful that you can\n> have multi-action rules is a flaw ... awe.\n\nThere's always a plus and a minus.\nFor generic applications the powerfulness is\na nuisance in a sense because it is difficult\nfor them to understand the intention of\ncomplicated rewrites (and triggers as you\npointed out).\nI don't think every application can handle\nevery case. The main point may be how the\napplications can judge if they can handle\nindividual cases.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Sat, 11 May 2002 10:34:38 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Michael seems to feel that the tuple count should be nonzero if any\n> of the replacement operations did anything at all. This does not\n> make a lot of sense at the command tag level (\"UPDATE 4\" might not\n> mean that 4 tuples were updated) but if you look at the definition\n> of PQcmdTuples (\"returns the number of rows affected by the SQL\n> command\") it's not so unreasonable. And I can see the point of\n> wanting to know whether anything happened.\n\nClose.\n\nIt's not so much that I want to know exactly what happened, it's that\nI want to know that if PostgreSQL says nothing happened, then I can be\nsure that nothing happened, rather than being told that nothing\nhappened when something happened, and vice versa.\n\nIn fact, my suggestion---which might suffer from issues that I am not\naware of, perhaps the ones that led to the patch in the first\nplace---would be that, given ambiguity, have the system return\nsomething that would cause PQcmdTuples to return an empty string (I'm\nassuming this would be a result string with no numbers attached at\nall).\n\nIt is documented, after all, as being the return value when the system\ncannot determine an otherwise correct number, and all of the code I\nlooked at would, I believe, cope gracefully with it, returning what\nI'm guessing (except in the Perl case, where I'm sure) is a sentinel\nvalue indicating, \"it worked, but I have no idea how many tuples were\ninvolved\".\n\nBut I'm not wedded to that---I just don't want to get an answer back\nthat might lead me off into the woods.\n\nAs for the issue of whether the tag is the same or not, I am utterly\npragmatic---I don't use it, and don't really have a way to get to it\nfrom the interfaces I use, so I think the best option is probably\nsomething where the rules to describe it are straightforward to\nminimize confusion and support issues. 
And it should be documented\nappropriately.\n\nI mean, even when this is resolved, we should probably be putting\nsomething in the documentation that says that PQcmdTuples can really\nonly be depended upon as a tri-state value: 0 (\"nothing\nhappened\"), >0 (\"something happened\"), empty string (\"heck if I\nknow\").\n\nMike.\n",
"msg_date": "16 May 2002 16:30:01 -0400",
"msg_from": "Michael Alan Dorman <mdorman@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "On Fri, 10 May 2002 10:51:05 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Thoughts, different proposals, alternative ways of breaking down\n>the problem?\nWell, you asked for it, so here is my wishlist :-)\n\nFrom a user POV I expect a command to return the number of \"rows\" it\nhas processed successfully. By \"rows\" I mean rows of the table (or\nview or whatever) my command (seemingly) handles, I'd not be\ninterested in any side effects my command has because of triggers\nand/or rules.\n\nSuppose there is a user called Al B. If, for example, his DB designer\ngives him a table foo (id int, name text) to store his data, he may\nconsider this table as a black box. Al does not want to (and probably\neven should not) know about rules and triggers. So when he enters\n\tINSERT INTO foo VALUES (10, 'ten');\nhe expects to get\n\tINSERT nnn 1\nor an error message. He doesn't care for any INSERTs into changelogs\nor UPDATEs to accounting data, he just wants to know whether *his*\nINSERT was successful.\n\nNext, if Al enters\n\tINSERT INTO foo SELECT ... FROM bar WHERE ...\nand the SELECT statement returns 47 rows, he expects\n\tINSERT 0 47\nif there is no problem.\n\n\tUPDATE foo ... WHERE ...\nHere the WHERE clause identifies a certain number of rows which are to\nbe updated. Again this number should be returned as the tuple count.\nSame for DELETE.\n\n>A. If original command is executed (no INSTEAD), return its tag as-is,\n>regardless of commands added by rules.\nYes, please. This is fully compatible with my wishes.\n\n>B. If original command is not executed, then return its tag name\nAgreed.\n\n>plus required fields defined as follows: tuple count is sum of tuple\n>counts of all replacement commands.\nNo, please don't care about replacement commands. If a rule can be\nviewed as something that is executed \"for each row\", then simply let\n\"each row\" that is processed successfully contribute 1 to the tuple\ncount. 
(Well, I know, this is not always easy. I guess it's easier\nfor INSERT and harder for UPDATE and DELETE. But isn't it a nice\ngoal?)\n\nWhile I'm fairly sure about my preferences up to here, there are some\npoints I don't have a strong opinion on:\n\nOIDs: With an ordinary table the OID returned by INSERT can be used\nto retrieve the new row with SELECT ... WHERE oid=nnn. Ideally this\nwould hold for tables and views with rules, but there is no easy way\nfor the backend to know the correct OID, when there are more than 1\nINSERT statements in the rule. So here's one more idea for your\nsub-case 2c: Let the programmer specify which OID to return, maybe by\nan extension to the INSERT syntax, allowed only in rules:\n\tINSERT INTO ... VALUES (...) RETURNING OID ???\n\nDO INSTEAD NOTHING: Should this be considered successful execution or\nshould it contribute 0 to the tuple count? I don't know which one is\nless surprising. I tend to the latter.\n\nJust my 0.02.\nServus\n Manfred\n",
"msg_date": "Fri, 17 May 2002 19:37:26 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "\nAny chance we can resolve this before 7.3? I will add it to the TODO\nlist.\n\n\n---------------------------------------------------------------------------\n\nJan Wieck wrote:\n> Tom Lane wrote:\n> > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > > Of course it is nice to have a complete solution\n> > > immediately but it doesn't seem easy. My patch is\n> > > only a makeshift solution but fixes the most\n> > > significant case (typical updatable views).\n> >\n> > I would like to devise a complete solution *before* we consider\n> > installing makeshift solutions (which will institutionalize wrong\n> > behavior).\n> >\n> > There seems to be some feeling here that in the presence of rewrites\n> > you only want to know that \"something happened\". Are you suggesting\n> > that the returned tuple count should be the sum of all counts from\n> > insert, update, and delete actions that happened as a result of the\n> > query? We could certainly implement that, but it does not seem like\n> > a good idea to me.\n> \n> IMHO the answer should only be a number if the rewritten\n> querytree list consists of one query of the same command\n> type. everything else has to lead into \"unknown\".\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== JanWieck@Yahoo.com #\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 26 Aug 2002 13:35:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Any chance we can resolve this before 7.3?\n\nI don't think so; the discussion trailed off without any agreement on\nwhat the behavior should be, and so thinking about how to implement it\nseems premature. At this point I think we have more critical issues\nto focus on for 7.3 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Aug 2002 14:20:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified? "
}
] |
[
{
"msg_contents": "If you have a Win32 workstation...\nLook here:\nhttp://sources.redhat.com/cygwin/\n\nThen click on the thing that says \"Install Now\" (Looks like a black \"C\"\nwith a green tongue).\n\nafter a small boatload of clicks, you will see a Window labeled \"Cygwin\nSetup\".\nUnder +All\nyou will find...\n\t+Admin\n\t+Archive\n\t+Base\n\t+Database\n\nClick on the plus sign next to the Database category.\n\nYou will see:\n\t7.2.1-1 [options] [Bin] [Src] [Package] postgresql: PostgreSQL\nData Base Management System\n\nIn other words, they already have an automated installation procedure\nfor PostgreSQL if you are using Cygwin.\n\n",
"msg_date": "Wed, 8 May 2002 16:30:50 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Dann Corbit\n> Sent: Wednesday, May 08, 2002 7:31 PM\n> To: PostgreSQL-development\n> Subject: Re: [HACKERS] Path to PostgreSQL portabiliy\n>\n>\n> If you have a Win32 workstation...\n> Look here:\n> http://sources.redhat.com/cygwin/\n>\n> Then click on the thing that says \"Install Now\" (Looks like a black \"C\"\n> with a green tongue).\n>\n> after a small boatload of clicks, you will see a Window labeled \"Cygwin\n> Setup\".\n> Under +All\n> you will find...\n> \t+Admin\n> \t+Archive\n> \t+Base\n> \t+Database\n>\n> Click on the plus sign next to the Database category.\n>\n> You will see:\n> \t7.2.1-1 [options] [Bin] [Src] [Package] postgresql: PostgreSQL\n> Data Base Management System\n>\n> In other words, they already have an automated installation procedure\n> for PostgreSQL if you are using Cygwin.\n\nYes, but you need to choose other packages in addition to PG to get it to\nwork, and which ones you need to choose aren't obvious. I think, at least,\nwe could provide some documentation on a straightforward PG cygwin install.\n\n",
"msg_date": "Wed, 8 May 2002 23:22:41 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "> > In other words, they already have an automated installation procedure\n> > for PostgreSQL if you are using Cygwin.\n>\n> Yes, but you need to choose other packages in addition to PG to get it to\n> work, and which ones you need to choose aren't obvious. I think, at least,\n> we could provide some documentation on a straightforward PG\n> cygwin install.\n\nI think a full, proper native version of Postgres for Windows that can be\ncompiled and distributed as a binary with installer is essential. Just look\nat how many people use MySQL for Windows.\n\nWe should just rip out our own IPC code and replace it with the APR...\n\nChris\n\n",
"msg_date": "Thu, 9 May 2002 12:10:12 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
},
{
"msg_contents": "On Thursday 9 May 2002 at 01:30, Dann Corbit wrote:\n> In other words, they already have an automated installation procedure\n> for PostgreSQL if you are using Cygwin.\n\nThe Cygwin installer does not handle package dependencies. Therefore:\n1) users install all packages at once.\n2) upgrading becomes a mess.\n3) software may conflict. Example: Cygwin Perl and native Perl, Cygwin Apache\nand native Apache.\n\nIMHO the problem stems from the lack of package installers in the Windows world.\nHere is the proposed plan, which boils down to porting Debian to Cygwin/Windows:\n\n1) Description of W32/Debian porting\n\n- Port Debian dpkg to native Windows and compile it using mingw. This task has\nnearly been accomplished on http://debian-cygwin.sourceforge.net/bootstrap/.\n\ndpkg is a very powerful package system comparable to RPM on Linux. I don't\nknow if it is possible to compile dpkg natively under Windows using mingw.\n\n- Create W32/Debian packages providing mingw core executables and libraries.\n\n- Compile Cygwin.dll using mingw. This will enable the creation of a first\nW32/Debian Cygwin.dll package. This should be possible using MSYS-1.0.7 and\nMinGW-1.1. Again, the Cygwin installer is messy; we should get rid of it.\n\n- Create W32/Debian packages providing Cygwin core packages.\n\n- At that point, we should be able to have our own Cygwin installer with\ndependency checking. There we go:\n\nThis will allow us to create further W32/Debian packages and tell whether they\ndepend on Cygwin or not. For example, we may offer Perl with a dependency on\nCygwin and another one with no dependency (compiled with mingw). Another\nexample is Apache: users may be interested in compiling the Cygwin version\nof Apache, but they might as well simply need to install the Windows native\nbinary version.\n\nAlso there are a bunch of Windows-only programs, for example Dev-C++ or\nOpenOffice. 
We should also package them.\n\n- Create W32/Debian packages for GUI environments (Xfree / qt2 / KDE2 /\nGnome). Porting has already been accomplished on Cygwin. Even KDE3 is on the\nway. See http://kde-cygwin.sourceforge.net/.\n\nUsers will be able to open a KDE window and execute KDE applications without\nhassle. This will allow us to offer popular development environments (ex:\nKDevelop).\n\n2) GUI installer\n\nW32/Debian installation should be performed within a single GUI. The\ninstallation program will give access to all W32/Debian packages at once.\nPackages will be available on Debian mirrors.\n\nBasically, there will be two kinds of packages:\n- native Windows software (native PHP + Apache, native Python, native Dev-C++,\nnative OpenOffice 1.0) with no dependency on Cygwin.\n- Unix and Linux ports with a Cygwin dependency (ex: PostgreSQL at first).\n\nAll of them will be available within a SINGLE INSTALLER.\n\nSo to me, the question is not \"How do we port PostgreSQL to Windows\" but\nrather \"How do we package all important software needed\", including Cygwin-\ndependent software and Windows native software.\n\nIf we could provide such an installer, this could well be the end of Microsoft\nhegemony. Microsoft is a dangerous company violating Human Rights.\n\nDon't spend too much time on porting PostgreSQL to native Windows. This can be\ndone in two months. The amount of work needed to create a minimal W32/Debian\ndistribution with an on-line installer is very little for a community like\nours. This would change the history of Windows computing.\n\nCheers,\nJean-Michel POURE\n\n\n",
"msg_date": "Thu, 9 May 2002 10:09:04 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Path to PostgreSQL portabiliy"
}
] |
[
{
"msg_contents": "I'm using OIDs for constraint names to guarantee name uniqueness, and\nintend to do the same for SERIAL sequences.\n\nThe problem with this is that the OID value seems to change with each\nrun of the regression tests depending on when the last initdb was (in\nthe case of installcheck) or parallel events in the case of make\ncheck.\n\nThere are two solutions that I can see.\n\nOne is to make the tests which have data that change in parallel mode\nbecome serial tests (constraints, alter_table and foreign_key).\ninstallcheck will continue to fail if not run immediately after an\ninitdb however.\n\nThe other is to turn off NOTICE statements for regression tests,\ninstead displaying only error messages.\n\n\nFor those who are curious, the NOTICE statements in question are\ndisplaying the auto-generated foreign key constraints which table\ndrops are cascading through. Implicit drop notices (complex type on a\ntable) have already been removed.\n--\nRod\n\n",
"msg_date": "Wed, 8 May 2002 22:07:02 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Regression tests and NOTICE statements"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> installcheck will continue to fail if not run immediately after an\n> initdb however.\n\nNot acceptable. Quite aside from it not being okay to force an initdb\nto do a regression test, any tiny change to any part of the regress\ntests will probably alter OID assignments in later tests.\n\nWhy are you inserting OIDs into constraint names anyway? I thought\nwe had just agreed that the RI trigger naming arrangement was a bad idea\nand we should change it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 01:28:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests and NOTICE statements "
},
{
"msg_contents": "\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > installcheck will continue to fail if not run immediately after an\n> > initdb however.\n>\n> Not acceptable. Quite aside from it not being okay to force an initdb\n> to do a regression test, any tiny change to any part of the regress\n> tests will probably alter OID assignments in later tests.\n\nThe above is the reason I proposed turning off NOTICE statements.\nFrom what I can see 99% of them aren't useful. The tests confirm the\ninformation that NOTICE gives off in better ways anyway. With them\noff, the pseudo-random names simply aren't shown anywhere. Only the\neffects of the constraints (of any type) are seen.\n\n> Why are you inserting OIDs into constraint names anyway? I thought\n> we had just agreed that the RI trigger naming arrangement was a bad idea\n> and we should change it.\n\nOh. I didn't know it was a bad idea (aside from being a little OID\nwasteful).\n\nOk, I need something guaranteed unique, system generated, and I really\ndidn't like the way CHECK constraints test a name, increment a\ncounter, test the new name, increment a counter, test yet another\nname, increment a counter, .....\n\nSo.. Is there a good way to do this? Or was the above CHECK\nconstraint method of testing ~10 different names with each creation\ngood enough.\n\n",
"msg_date": "Thu, 9 May 2002 08:35:08 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Regression tests and NOTICE statements "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Ok, I need something guaranteed unique, system generated, and I really\n> didn't like the way CHECK constraints test a name, increment a\n> counter, test the new name, increment a counter, test yet another\n> name, increment a counter, .....\n\n> So.. Is there a good way to do this? Or was the above CHECK\n> constraint method of testing ~10 different names with each creation\n> good enough.\n\nIt seems like a perfectly fine way to me. I like it because it gives\npredictable results (ie, same table schema will always be assigned the\nsame numbers), which the OID approach doesn't. Also, if you want\nsomething *guaranteed* unique then you must do this even with OIDs;\nthere's nothing stopping the user from declaring a constraint with\na name \"foo_nnnnnnn\" that happens to match the OID-based name you\ninvent for its unnamed sibling.\n\nI suppose with lots and lots of constraints the O(N^2) time behavior\nmight start to be a problem, but there are probably ways around that\ntoo --- say, keep a counter in analyze.c that starts at 1 for each new\nCREATE TABLE, and is incremented each time you need to invent a\nconstraint name. You still have to check-and-retry, but the expected\ntime is O(N) not O(N^2).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 09:59:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests and NOTICE statements "
}
] |
[
{
"msg_contents": "Hi All,\n\nIs it just me or is there no downloadable HTML docs for 7.2??\n\nThere's PDF - but that's not much use...\n\nChris\n\n",
"msg_date": "Thu, 9 May 2002 11:16:52 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "7.2 html docs"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: 08 May 2002 09:37\n> To: mlw; Dann Corbit\n> Cc: PostgreSQL-development\n> Subject: Re: OK, lets talk portability.\n> \n> There are other issues:\n> \n> 1) Cygwin installation.\n> \n> Presently, the Cygwin installer is a nice toy but it is primarily \n> designed for \n> hackers. In order to install PostgreSQL, you need to install \n> a minimum set of \n> packages. As no real dependency between packages exists, a \n> newbie will not \n> know which packages should be downloaded and which should \n> not. Also, the Cygwin \n> installer does not allow the automatic installation of \n> PostgreSQL within a \n> service.\n> \n> The result is that newbies either download ***all*** Cygwin \n> packages or simply \n> say no. Furthermore, after installation, people are facing \n> another issue \n> which is the Unix world. Users have a hard time understanding \n> that PostgreSQL \n> configuration is stored in /var/lib/pgsql/...\n> \n> So my personal opinion is that if PostgreSQL relies on the \n> present Cygwin \n> version, it has no chance to get a standard solution under Windows.\n\nAgreed. I develop pgAdmin using cygwin/postgresql on my laptop and quite\nfrankly it's a pain in the neck. I did notice when playing with MySQL\nrecently that it appears to use Cygwin, though *only* the .dll, there is\nno installation of Cygwin required. Perhaps if we could get to roughly\nthat stage it would be good. 
Of course we'd also need a few things from\n/bin like sh for example.\n\nIf we can get it to this stage then I'm sure Jean-Michel and I could\ncome up with a nice installer that will allow us to keep the GPL & BSD\ncode nicely separate on a server somewhere & still allow an automated\ndownload/install...\n \n> 3) Existing version of PostgreSQL under Windows\n> Did anyone test \n> http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n\nDownloaded it but haven't played yet (blame it on a cranky Exchange\nserver!).\n\nRegards, Dave.\n",
"msg_date": "Thu, 9 May 2002 08:12:04 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On Thursday 9 May 2002 at 09:12, Dave Page wrote:\n> If we can get it to this stage then I'm sure Jean-Michel and I could\n> come up with a nice installer that will allow us to keep the GPL & BSD\n> code nicely separate on a server somewhere & still allow an automated\n> download/install...\n\nThis would be quite easy. But the problem is that we might need to rename\nCygwin.dll in order to avoid conflicts. It may become a maintenance\nnightmare. This will not stop Cygwin from providing PostgreSQL either.\n\nTo my mind, the solution is to deliver all the software needed (native Apache,\nnative Python, native OpenOffice, Cygwin layer, Dev-C++, pgAdmin2,\nCygwin-KDE) within a SINGLE installer. See my previous mails.\n\nWork is time-consuming. If I have to spend time on a project, it will be for\ngetting rid of Microsoft hegemony, nothing less.\n\nCheers,\nJean-Michel\n",
"msg_date": "Thu, 9 May 2002 10:16:15 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On Thu, 2002-05-09 at 09:12, Dave Page wrote:\n> \n> Agreed. I develop pgAdmin using cygwin/postgresql on my laptop and quite\n> frankly it's a pain in the neck. I did notice when playing with MySQL\n> recently that it appears to use Cygwin, though *only* the .dll, there is\n> no installation of Cygwin required. Perhaps if we could get to roughly\n> that stage it would be good. Of course we'd also need a few things from\n> /bin like sh for example.\n\nAre you sure sh and friends are absolutely needed?\n\nI'm sure we can replace most scripts with .bat files or just do without\nthem (tell people to use CREATE DATABASE instead of createdb, etc.)\n\nAnd instead of initdb we could just install a ready-made $PGSQL/data\ndirectory.\n\n-----------\nHannu\n\n\n",
"msg_date": "09 May 2002 10:56:59 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
},
{
"msg_contents": "On Thursday 09 May 2002 04:56 am, Hannu Krosing wrote:\n[snip other good ideas....]\n\n> And instead of initdb we could just install ready-made $PGSQL/data\n> directory.\n\nFrom experience with the RPMset I can tell you that this is a bad idea, and it\ncomes down to one word:\n\nupgrades.\n\nNow if the installation location is _versioned_ we might be able to talk of a\npre-populated $PGDATA. I'm taking a really hard look right now at versioned\ninstallation locations for the RPMset -- you can then have more than one\nversion installed at a time, and even running at one time if you're careful.\nI haven't implemented it yet, but I am taking a long hard look at what I\nwould have to do in order to make it work.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 9 May 2002 06:03:20 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: OK, lets talk portability."
}
] |
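Lamar's versioned-installation idea is easy to sketch: initdb leaves a PG_VERSION marker file in each data directory, so an installer can key every data directory on the server version and verify the marker before starting a postmaster. A minimal illustration of that scheme (Python, with hypothetical paths and helper names — this is not the RPMset's actual code):

```python
import os
import tempfile

def versioned_datadir(base, version):
    """Return a per-version data directory, e.g. <base>/7.2/data."""
    return os.path.join(base, version, "data")

def datadir_matches(datadir, version):
    """Check the PG_VERSION marker that initdb leaves in a data directory."""
    try:
        with open(os.path.join(datadir, "PG_VERSION")) as f:
            return f.read().strip() == version
    except FileNotFoundError:
        return False

# Simulate two versions installed side by side, as the scheme allows.
base = tempfile.mkdtemp()
for ver in ("7.1", "7.2"):
    d = versioned_datadir(base, ver)
    os.makedirs(d)
    with open(os.path.join(d, "PG_VERSION"), "w") as f:
        f.write(ver + "\n")

print(datadir_matches(versioned_datadir(base, "7.2"), "7.2"))  # True
print(datadir_matches(versioned_datadir(base, "7.2"), "7.1"))  # False
```

Because each version owns its own path, an upgrade installs alongside the old cluster instead of overwriting a pre-populated $PGDATA — which is exactly the failure mode Lamar warns about.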
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Dann Corbit [mailto:DCorbit@connx.com] \n> Sent: 09 May 2002 00:31\n> To: PostgreSQL-development\n> Subject: Re: Path to PostgreSQL portabiliy\n> \n> \n> If you have a Win32 workstation...\n> Look here:\n> http://sources.redhat.com/cygwin/\n> \n> Then click on the thing that says \"Install Now\" (Looks like a \n> black \"C\" with a green tongue).\n> \n> after a small boatload of clicks, you will see a Window \n> labeled \"Cygwin Setup\". Under +All you will find...\n> \t+Admin\n> \t+Archive\n> \t+Base\n> \t+Database\n> \n> Click on the plus sign next to the Database category.\n> \n> You will see:\n> \t7.2.1-1 [options] [Bin] [Src] [Package] posgresql: \n> PostgreSQL Data Base Management System\n> \n> In other words, they already have an automated installation \n> procedure for PostgreSQL if you are using Cygwin.\n\nThe last time I tried that (coupla months ago) it listed the versions of\nthe packages in reverse order, so I spent about 15 very tedious minutes\nmaking sure that I have the latest version of all the packages I wanted\nselected.\n\nThen I spent an hour or 2 battling with ntsec and initdb on my laptop\n(logged onto, but disconnected from the domain). After that I gave up\nand went back to my very old release that works fine.\n\nThe point I'm trying to make is that if I, as a not inexperienced\nsysadmin of both Windows and Unix systems (not to mention PostgreSQL\nwhich I like to think I'm fairly familiar with) has this trouble, what\nimpression is that going to give the first time user, who's probably\ngoing to go elsewhere at the first sign of trouble?\n\nRegards, Dave.\n",
"msg_date": "Thu, 9 May 2002 08:42:18 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Path to PostgreSQL portability"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Hannu Krosing [mailto:hannu@tm.ee] \n> Sent: 09 May 2002 09:57\n> To: Dave Page\n> Cc: jm.poure@freesurf.fr; mlw; Dann Corbit; PostgreSQL-development\n> Subject: Re: [HACKERS] OK, lets talk portability.\n> \n> \n> On Thu, 2002-05-09 at 09:12, Dave Page wrote:\n> > \n> > Agreed. I develop pgAdmin using cygwin/postgresql on my laptop and \n> > quite frankly it's a pain in the neck. I did notice when \n> playing with \n> > MySQL recently that it appears to use Cygwin, though *only* \n> the .dll, \n> > there is no installation of Cygwin required. Perhaps if we \n> could get \n> > to roughly that stage it would be good. Of course we'd also \n> need a few \n> > things from /bin like sh for example.\n> \n> Are you sure sh and friends are absolutely needed ?\n> \n> I'm sure we can replace most scripts with .bat files or just \n> do without them (tell peole to use CREATE DATABSE instead of \n> createdb, etc.)\n> \n> And instead of initdb we could just install ready-made \n> $PGSQL/data directory.\n\nYes, we could do that quite easily in which case only the .dll should be\nrequired.\n\nProbably the only required scripts would be:\n\npg_dumpall\ninitlocation\n\nI have doubts about how easily initlocation could be rewritten as a\nbatch file though...\n\nRegards, Dave.\n",
"msg_date": "Thu, 9 May 2002 09:05:33 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: OK, lets talk portability."
}
] |
[
{
"msg_contents": "My dear friends,\n\nFor those who doubt PostgreSQL port is mainly an installer problem, visit \nhttp://fink.sourceforge.net. Fink is a powerful package installer for MacOSX \nbased on dpkg and Perl.\n\nWith the collaborative help of PostgreSQL hackers, we should be able to \nrelease a single on-line installer for all software needed for the Windoze \nplatform. See my previous emails on the list.\n\nPackages should be of two kinds:\n\nW32/Debian Native Windows packages\n- dpkg,\n- cygwin,\n- apache,\n- python,\n- pgAdmin2,\n- perl,\n- dev-c++,\n- openoffice,\n- CVS,\n- diff,\n- WinCVS.\n\nW32/Debian cygwin packages\n- KDE2(3?),\n- Gnome2,\n- PostgreSQL,\n- and even MySQL (then we will see which is the \"fastest\" my friends).\n\nWhen this is achieved, it will always be possible to port PostgreSQL to \nnative Windows, and then release a new W32 package that will integrate \nsmoothly into the framework without hassle, without dependency problems or \nconflicts.\n\nThis will give the strong user base needed to overcome Windows hegemony and \nput an end to Microsoft violating Human Rights.\n\nHelp needed, any candidates?\nCheers, Jean-Michel POURE\n",
"msg_date": "Thu, 9 May 2002 10:53:33 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Why you should Join W32/Debian to save the world from Microsoft"
},
{
"msg_contents": "On Thu, May 09, 2002 at 10:53:33AM +0200, Jean-Michel POURE wrote:\n> My dear friends,\n> \n> For those who doubt PostgreSQL port is mainly an installer problem, visit \n> http://fink.sourceforge.net. Fink is a powerfull package installer for MacOSX \n> based on dpkg and Perl.\n\nThat's a different installer over a Unixoid operating system, so it doesn't\nreally prove that the only problem is the installer, does it?\n\nRoss\n\n",
"msg_date": "Thu, 9 May 2002 10:35:21 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Why you should Join W32/Debian to save the world from Microsoft"
},
{
"msg_contents": "On Thursday 9 May 2002 17:35, Ross J. Reedstrom wrote:\n> That's a different installer over a Unixoid operating system, so it doesn't\n> really prove that the only problem is the installer, does it?\n\nThere are three different issues:\n1- a package installer providing a minimal Cygwin version (other than Cygwin.exe \nwhich does not work well),\n2- a nice Windows GUI (like pgAdmin2),\n3- a Windows port of PostgreSQL.\n\n1 can be achieved easily.\n2 already exists thanks to Dave Page.\n3 can wait until we hit the market with 1 and 2.\nThis will give us more feedback.\n\nSo, the only problem remaining is the installer. Is anyone interested in \nhelping me release a W32/Debian Cygwin package?\nCheers, Jean-Michel POURE\n",
"msg_date": "Thu, 9 May 2002 19:51:09 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Why you should Join W32/Debian to save the world from Microsoft"
}
] |
[
{
"msg_contents": "Ooops sorry about the last post (all thumbs today)\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: 08 May 2002 16:57\n> To: Thomas Lockhart\n> Cc: mlw; PostgreSQL-development; Jan Wieck; Marc G. Fournier; Dann\n> Corbit\n> Subject: Re: Path to PostgreSQL portabiliy \n> \n> \n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > 2) If (1) does not exempt the PostgreSQL app from GPL \n> polution, then why\n> > not distribute PostgreSQL on Windows using a GPL license?\n> \n> Given the cygwin licensing terms stated at\n> \thttp://cygwin.com/licensing.html\n> it appears to me that we need not open that can of worms (and I'd much\n> rather not muddy the licensing waters that way, regardless of any\n> arguments about whether it would hurt or not...)\n\nCygwin is not the only additon needed, cygipc will also be needed (GPL)\n(see: http://www.neuro.gatech.edu/users/cwilson/cygutils/cygipc/index.html )\nI have heard it said that an ipc implementation will be appearing in the\nmain\ncygwin DLL, but have yet to see it.\n\n> \n> As near as I can tell, we *could* develop a self-contained \n> installation\n> package for PG+cygwin without any licensing problem. So that set of\n> problems could be solved with a reasonable amount of work. I'm still\n> unclear on whether there are serious technical problems (performance,\n> stability) with using cygwin.\n> \n> (Actually, even if there are performance or stability problems, an\n> easily-installable package would still address the needs of people who\n> want to \"try it out\" or \"get their feet wet\". And maybe that's all we\n> need to do. We always have said that we recommend a Unix platform for\n> production-grade PG installations, and IMNSHO that \n> recommendation would\n> not change one iota if there were a native rather than cygwin-based\n> Windows port. 
So I'm unconvinced that we have a problem to solve\n> anyway...)\n> \n> \t\t\tregards, tom lane\n> \n\nThe system has about 30 transactional users on average.\nWhy am I using NT? IT has decided it is the way.\nThe major limitation on performance appears to be fork() as this actually\ncopies the memory, rather than just copy on write. Also I believe it has to fix\nup some of the file descriptors for things like sockets, and those things\nthat\nhave no fd under windows, but do under *nix. This seems to be the only major\nperformance bottleneck, although I haven't done much comparison testing.\nOther limitations are:\nProcesses - Can only have 62 children due to the WaitForMultipleObjects 64 item\n\tlimit. (I'm wondering about tackling this, however it'll be my first\n\tforay into cygwin so don't hold your breath)\nMemory - Starts at 256Mb limit, can be increased by a registry key:\n\t//HKCU/Software/Cygnus Solutions/Cygwin/\n\t\theap_chunk_in_mb\t\t\tDWORD\nAs can be seen, this uses Current User so you need to make sure your\nipc-daemon service starts as the correct user as well as the postmaster.\nIf you use a lot of shared memory, the postmaster can take a while to start,\nbut this isn't a problem for me.\n\nAs to stability, don't use on Win9x. This is just about usable for\ndevelopment\nbut it keeps crashing at random times, especially when restoring from dump\nfiles.\nI have never managed to complete make installcheck on 9x.\nSometimes things seem to be broken in a new version of cygwin, to be fixed\nby\nthe next. Just need to test before using. Also I only use plpgsql for server\nside programming. I know there are problems building for TCL, and I don't\nknow\nabout perl and python.\nThe only other problems I've had with NT4 are:\na) When running as a service, it wouldn't term in time due to sighup. 
Fixed\nb) As of 7.2.1 (don't know about 7.2, not tried it enough), the following\ncauses a problem when restoring large amounts of data, which is solved by\njust using rename (see previous postings to pgsql-cgywin)\n\t/*\n\t * Prefer link() to rename() here just to be really sure that we\ndon't\n\t * overwrite an existing logfile. However, there shouldn't be one,\nso\n\t * rename() is an acceptable substitute except for the truly\nparanoid.\n\t */\n#ifndef __BEOS__\n\tif (link(tmppath, path) < 0)\n\t\telog(STOP, \"link from %s to %s (initialization of log file\n%u, segment %u) failed: %m\",\n\t\t\t tmppath, path, log, seg);\n\tunlink(tmppath);\n#else\n\tif (rename(tmppath, path) < 0)\n\t\telog(STOP, \"rename from %s to %s (initialization of log file\n%u, segment %u) failed: %m\",\n\t\t\t tmppath, path, log, seg);\n#endif\n\nHope something here helps the discussion,\n- Stuart\n\nP.S. I will not be on email Fri-Mon (inclusive), but if anyone wants further\n\tinfo/discussion on particulars, I will be available from Tuesday \n\t(or some of today).\n",
"msg_date": "Thu, 9 May 2002 10:46:03 +0100 ",
"msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>",
"msg_from_op": true,
"msg_subject": "PG+Cygwin Production Experience (was RE: Path to PostgreSQL portability)"
}
] |
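The link()/rename() snippet quoted above turns on one POSIX detail: link() fails with EEXIST if the target name already exists, while rename() silently replaces it — which is why the xlog code treats link() as the paranoid choice. A small demonstration of that difference (sketched in Python for brevity; `os.link` and `os.rename` map directly onto the C calls, and the file names are illustrative):

```python
import os
import tempfile

d = tempfile.mkdtemp()
tmppath = os.path.join(d, "xlogtemp.42")
path = os.path.join(d, "0000000000000000")

with open(tmppath, "w") as f:
    f.write("new segment")

# link() then unlink(): succeeds only because no target exists yet.
os.link(tmppath, path)
os.unlink(tmppath)

# A second install attempt against the now-existing target fails...
with open(tmppath, "w") as f:
    f.write("another segment")
try:
    os.link(tmppath, path)
    clobbered = True
except FileExistsError:
    clobbered = False

# ...whereas rename() silently replaces the existing file.
os.rename(tmppath, path)

print(clobbered)          # False: link() refused to clobber
print(open(path).read())  # another segment: rename() overwrote
```

So on a platform where link() misbehaves (as reported here on Cygwin when restoring large dumps), falling back to rename() trades the no-overwrite guarantee for reliability.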
[
{
"msg_contents": "I guess I answered my own question, didn't I? The check constraint\nmethod (test, then change, and test then change) is the only way to\nguarantee uniqueness. Anything else could have interference from the\nuser, or issues during a dump / restore where any numbered sequences\nare reset.\n\n--\nRod\n----- Original Message -----\nFrom: \"Rod Taylor\" <rbt@zort.ca>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Thursday, May 09, 2002 8:35 AM\nSubject: Re: [HACKERS] Regression tests and NOTICE statements\n\n\n>\n> > \"Rod Taylor\" <rbt@zort.ca> writes:\n> > > installcheck will continue to fail if not run immediately after\nan\n> > > initdb however.\n> >\n> > Not acceptable. Quite aside from it not being okay to force an\n> initdb\n> > to do a regression test, any tiny change to any part of the\nregress\n> > tests will probably alter OID assignments in later tests.\n>\n> The above is the reason I proposed turning off NOTICE statements.\n> From what I can see 99% of them aren't useful. The tests confirm\nthe\n> information that NOTICE gives off in better ways anyway. With them\n> off, the pseudo-random names simply aren't shown anywhere. Only the\n> effects of the constraints (of any type) are seen.\n>\n> > Why are you inserting OIDs into constraint names anyway? I\nthought\n> > we had just agreed that the RI trigger naming arrangement was a\nbad\n> idea\n> > and we should change it.\n>\n> Oh. I didn't know it was a bad idea (aside from being a little OID\n> wasteful).\n>\n> Ok, I need something guaranteed unique, system generated, and I\nreally\n> didn't like the way CHECK constraints test a name, increment a\n> counter, test the new name, increment a counter, test yet another\n> name, increment a counter, .....\n>\n> So.. Is there a good way to do this? Or was the above CHECK\n> constraint method of testing ~10 different names with each creation\n> good enough?\n>\n\n",
"msg_date": "Thu, 9 May 2002 09:05:53 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Fw: Regression tests and NOTICE statements "
}
] |
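The test-then-increment scheme Rod describes for CHECK constraint names can be modeled as a simple loop: propose a name, check for a collision, bump a counter, retry. A minimal sketch of that idea (the helper name and suffix pattern are illustrative, not the backend's actual code):

```python
def unique_constraint_name(base, existing):
    """Test-then-increment: return `base`, or `base` plus the first
    numeric suffix that does not collide with a name in `existing`."""
    if base not in existing:
        return base
    n = 1
    while f"{base}{n}" in existing:
        n += 1
    return f"{base}{n}"

taken = {"$1", "$2", "foo_check"}
print(unique_constraint_name("foo_check", taken))  # foo_check1
print(unique_constraint_name("bar_check", taken))  # bar_check
```

As the thread notes, this is the only scheme that survives a dump/restore resetting any counters, because uniqueness is re-checked against the actual catalog at creation time rather than assumed from a sequence.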
[
{
"msg_contents": "Hello,\n\nWe are a small group around pgaccess and Teo (teo@flex.ro) who use pgaccess\nin our daily work, and we are now trying to join our efforts and bring our\npatches together.\n\nDuring the last two weeks we managed to arrange a web server and place there\npgaccess.org and the current web site that Teo was running on\nwww.flex.ro/pgaccess.\n\nNow we are about to start a cvs, and we are searching for the most recent\nversions and patches of the code. We are also searching for everybody who is\nusing pgaccess and has wishes or patches and wants to share them.\n\nPlease, everybody interested - contact us during the next few days. Next\nweek we are starting with what we have.\n\nIavor\n\n--\nwww.pgaccess.org\n\n",
"msg_date": "Thu, 9 May 2002 15:30:52 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "pgaccess"
}
] |
[
{
"msg_contents": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk> writes:\n> Cygwin is not the only addition needed, cygipc will also be needed (GPL)\n> (see: http://www.neuro.gatech.edu/users/cwilson/cygutils/cygipc/index.html )\n\nGood point, but is this a requirement that we could get rid of, now that\nwe have the SysV IPC stuff somewhat isolated? AFAICT cygipc provides\nthe SysV IPC API (shmget, semget, etc) --- but if there are usable\nequivalents in the basic Cygwin environment, we could probably use them\nnow.\n\nConsidering how often we see the forgot-to-start-cygipc mistake,\nremoving this requirement would be a clear win.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 09:51:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PG+Cygwin Production Experience (was RE: Path to PostgreSQL portability)"
},
{
"msg_contents": "\n>From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> \"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk> writes:\n> > Cygwin is not the only addition needed, cygipc will also be needed (GPL)\n> > (see:\nhttp://www.neuro.gatech.edu/users/cwilson/cygutils/cygipc/index.html )\n>\n> Good point, but is this a requirement that we could get rid of, now that\n> we have the SysV IPC stuff somewhat isolated? AFAICT cygipc provides\n> the SysV IPC API (shmget, semget, etc) --- but if there are usable\n> equivalents in the basic Cygwin environment, we could probably use them\n> now.\n>\n> Considering how often we see the forgot-to-start-cygipc mistake,\n> removing this requirement would be a clear win.\n>\n> regards, tom lane\n\n In my experience, cygipc is the trickiest part of a postgresql/cygwin\ninstall (mainly because of access rights problems). Using native calls\nfor sem / shm would be a good step forward (and the API change might make\nthis quite easy). I've also never been able to start two postmaster instances\non the same box. Doing so messes up shared memory, leading to both\npostmasters crashing.\n\n cyril\n\n\n\n",
"msg_date": "Thu, 9 May 2002 16:27:37 +0200",
"msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: PG+Cygwin Production Experience (was RE: Path to PostgreSQL portability)"
},
{
"msg_contents": "I have found this whole thread very interesting (I'm still not sure \nwhere it is going though :-). But let me throw in some of my thoughts.\n\nA windows version of postgres (whether native of cygwin based) is \nimportant. I have many developers with windows as their desktop OS and \nthey have a postgres db installed to do development work. Postgres on \ncygwin is fine for this need. While I may not trust it in a production \nenvironment it is certainly good enough for development.\n\nA second use we have for postgres on windows is in evals of our product. \n We provide an eval version of our software as an InstallShield \ninstalled .exe that includes our code, postgres and the necessary cygwin \nparts. People doing evals just want to install the eval on their \neveryday machine (most likely running windows) and it needs to be dead \nsimple to install. This can be done with postgres and cygwin. In this \nexample again the current postgres+cygwin works well enough for our \nevals. Again I wouldn't run the production version in this environment, \nbut it is good enough for an eval.\n\nOur eval does show that it is possible to repackage postgres plus the \nparts of cygwin it needs into a nice installer and have it work. (It is \na lot of work but is certainly possible). In fact in our eval install \nwe even use cygrunsrv to install postgres as a windows service.\n\nThe biggest problem we have had is the fact that the utility scripts \n(like pg_ctl, createdb, etc) are all shell scripts that call a whole \nhost of other utilities. It is pretty straight forward to package up \nthe postgres executable and the libraries it needs from cygwin. 
It is a \nwhole different problem making sure you have a standard unix style shell \nenvironment with all the utilities installed so that you can run the \nshell scripts.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n> \"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk> writes:\n> \n>>Cygwin is not the only additon needed, cygipc will also be needed (GPL)\n>>(see: http://www.neuro.gatech.edu/users/cwilson/cygutils/cygipc/index.html )\n> \n> \n> Good point, but is this a requirement that we could get rid of, now that\n> we have the SysV IPC stuff somewhat isolated? AFAICT cygipc provides\n> the SysV IPC API (shmget, semget, etc) --- but if there are usable\n> equivalents in the basic Cygwin environment, we could probably use them\n> now.\n> \n> Considering how often we see the forgot-to-start-cygipc mistake,\n> removing this requirement would be a clear win.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n",
"msg_date": "Thu, 09 May 2002 09:16:35 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: PG+Cygwin Production Experience (was RE: Path to PostgreSQL"
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk> writes:\n> \n>>Cygwin is not the only addition needed, cygipc will also be needed (GPL)\n>>(see: http://www.neuro.gatech.edu/users/cwilson/cygutils/cygipc/index.html )\n> \n> \n> Good point, but is this a requirement that we could get rid of, now that\n> we have the SysV IPC stuff somewhat isolated? AFAICT cygipc provides\n> the SysV IPC API (shmget, semget, etc) --- but if there are usable\n> equivalents in the basic Cygwin environment, we could probably use them\n> now.\n> \n> Considering how often we see the forgot-to-start-cygipc mistake,\n> removing this requirement would be a clear win.\n> \n> \t\t\tregards, tom lane\n\nThere is some work going on within cygwin to make the separate cygipc \nstuff less of a problem. They are trying to integrate SysV IPC \nfunctionality into the core cygwin.dll. However, because of licensing \nissues (the current cygipc code is GPL, and RedHat, currently the \ncopyright holder on the core cygwin code, releases it for a fee under \nother licenses in addition to the GPL) there are some problems with \ndoing that.\n\nthanks,\n--Barry\n\n\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n",
"msg_date": "Thu, 09 May 2002 09:21:51 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: PG+Cygwin Production Experience (was RE: Path to PostgreSQL"
},
{
"msg_contents": "Barry Lind wrote:\n> I have found this whole thread very interesting (I'm still not sure\n> where it is going though :-). But let me throw in some of my thoughts.\n>\n> A windows version of postgres (whether native of cygwin based) is\n> important. I have many developers with windows as their desktop OS and\n> they have a postgres db installed to do development work. Postgres on\n> cygwin is fine for this need. While I may not trust it in a production\n> environment it is certainly good enough for development.\n>\n> A second use we have for postgres on windows is in evals of our product.\n> We provide an eval version of our software as an InstallShield\n> installed .exe that includes our code, postgres and the necessary cygwin\n> parts. People doing evals just want to install the eval on their\n> everyday machine (most likely running windows) and it needs to be dead\n> simple to install. This can be done with postgres and cygwin. In this\n> example again the current postgres+cygwin works well enough for our\n> evals. Again I wouldn't run the production version in this environment,\n> but it is good enough for an eval.\n>\n> Our eval does show that it is possible to repackage postgres plus the\n> parts of cygwin it needs into a nice installer and have it work. (It is\n> a lot of work but is certainly possible). In fact in our eval install\n> we even use cygrunsrv to install postgres as a windows service.\n>\n> The biggest problem we have had is the fact that the utility scripts\n> (like pg_ctl, createdb, etc) are all shell scripts that call a whole\n> host of other utilities. It is pretty straight forward to package up\n> the postgres executable and the libraries it needs from cygwin. It is a\n> whole different problem making sure you have a standard unix style shell\n> environment with all the utilities installed so that you can run the\n> shell scripts.\n\n Do I read this right? 
You wrap the binary eval version of\n your product, the binary PostgreSQL and CygWin including some\n of the utility programs into one InstallShield .exe and ship\n that?\n\n Hmmm, PostgreSQL's license is totally fine with that. And\n your program is your program. But as far as I know bundling\n with CygWin like this costs money. So you pay license fees to\n RedHat for shipping eval copies of your product and don't see\n any value in a native Windows port? Nobel, nobel, or does\n your product have such an outstanding eval/sell ratio that it\n doesn't matter?\n\n\nJan\n\n>\n> thanks,\n> --Barry\n>\n> Tom Lane wrote:\n> > \"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk> writes:\n> >\n> >>Cygwin is not the only additon needed, cygipc will also be needed (GPL)\n> >>(see: http://www.neuro.gatech.edu/users/cwilson/cygutils/cygipc/index.html )\n> >\n> >\n> > Good point, but is this a requirement that we could get rid of, now that\n> > we have the SysV IPC stuff somewhat isolated? AFAICT cygipc provides\n> > the SysV IPC API (shmget, semget, etc) --- but if there are usable\n> > equivalents in the basic Cygwin environment, we could probably use them\n> > now.\n> >\n> > Considering how often we see the forgot-to-start-cygipc mistake,\n> > removing this requirement would be a clear win.\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Thu, 9 May 2002 15:20:23 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PG+Cygwin Production Experience (was RE: Path to PostgreSQL"
}
] |
[
{
"msg_contents": "Hello Ross,\n\nGreat to hear from you. We are just thinking about what to do. There was one\nidea to have a sourceforge cvs, another to have the cvs on the www.pgaccess.org\nserver. Then I found out that pgaccess is actually in the PostgreSQL\ndistribution as well.\n\nAs for us, pgaccess is a tool for our daily work (we did some work on the\nschema to visualize the databases better when working with one of our\nclients) - we would be happy to have a distinctive location for everything -\nsuch as www.pgaccess.org.\n\nHowever, if another solution would be better for any reason - I am personally\nopen and I believe all the guys are open as well.\n\nIt is not important where it is - it is important (for us) to put a small\norganization around the thing that can make collecting all patches possible.\nTeo is pretty busy right now, which is why he brought some of us who have\nfairly recent patches together - so that we can see if something can come\nout of that.\n\nWhat's your feeling?\n\nIavor\n\n--\nwww.pgaccess.org\n\n> -----Original Message-----\n> From: Ross J. Reedstrom [mailto:reedstrm@rice.edu]\n> Sent: Thursday, May 09, 2002 5:43 PM\n> To: Iavor Raytchev\n> Subject: Re: [HACKERS] pgaccess\n>\n>\n> Hey there Iavor -\n> I wrote some patches to pgaccess, back a year or so ago: the schema\n> design editor was my work (mostly a clone of the query designer, with\n> some tweaks). I'd like to participate. Are you planning on keeping the\n> canonical version of the code in the main postgresql tree?\n>\n> Ross\n> --\n> Ross Reedstrom, Ph.D. reedstrm@rice.edu\n> Executive Director phone: 713-348-6166\n> Gulf Coast Consortium for Bioinformatics fax: 713-348-6182\n> Rice University MS-39\n> Houston, TX 77005\n",
"msg_date": "Thu, 9 May 2002 17:54:26 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "Re: pgaccess"
},
{
"msg_contents": "...\n> It is not important where it is - it is important (for us) to put a small\n> organization around the thing that can make collecting all patches possible.\n\npgaccess is currently in the pgsql cvs tree, and is welcome to stay\nthere. Some of us have commit privileges, and y'all may want to have\nsomeone else with privs also once you are organized and it is clear how\nbest to proceed. If you need web resources that can be arranged too, as\ncan a dedicated mailing list.\n\ngborg is another way to organize, and of course www.pgaccess.org is a\nway too. It partly depends on how you see the future of pgaccess. If it\nstays tightly coupled to pgsql, then perhaps it may as well stay\norganized with pgsql.\n\nRegards.\n\n - Thomas\n",
"msg_date": "Thu, 09 May 2002 09:23:16 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess"
},
{
"msg_contents": "\nOn Thu, 9 May 2002, Thomas Lockhart wrote:\n> ...\n> > It is not important where it is - it is important (for us) to put a small\n> > organization around the thing that can make collecting all patches possible.\n> \n> pgaccess is currently in the pgsql cvs tree, and is welcome to stay\n> there. Some of us have commit privileges, and y'all may want to have\n> someone else with privs also once you are organized and it is clear how\n> best to proceed. If you need web resources that can be arranged too, as\n> can a dedicated mailing list.\n> \n> gborg is another way to organize, and of course www.pgaccess.org is a\n> way too. It partly depends on how you see the future of pgaccess. If it\n> stays tightly coupled to pgsql, then perhaps it may as well stay\n> organized with pgsql.\n\nI was working on the assumption that PgAccess was tightly coupled to postgres\n[and versions of postgres] and, since Teo was busy with other things and the PG\ncommitters were happy to apply patches, that I would be submitting patches to the\npostgres CVS.\n\nI see no reason why pgaccess needs a separate repository; I presume it can be\nfetched from the postgres CVS as a single entity, although I haven't tried\nthis.\n\nBTW, I had been wondering what to call the Schema tab now that that label is\nrequired for schemas rather than design.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n",
"msg_date": "Thu, 9 May 2002 18:33:58 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess"
},
{
"msg_contents": "On Thu, May 09, 2002 at 06:33:58PM +0100, Nigel J. Andrews wrote:\n> \n> On Thu, 9 May 2002, Thomas Lockhart wrote:\n> > gborg is another way to organize, and of course www.pgaccess.org is a\n> > way too. It partly depends on how you see the future of pgaccess. If it\n> > stays tightly coupled to pgsql, then perhaps it may as well stay\n> > organized with pgsql.\n> \n> I was working on the assumption that PgAccess was tightly coupled to postgres\n> [and versions of postgres] and since Teo was busy with other things and the PG\n> committers were happy to apply patches that I would be submitting patches to the\n> postgres CVS.\n\nWhat we'll probably need is a note from teo to HACKERS, letting the CVS\ncommitters know who is 'approved' to bless pgaccess patches: i.e. their\npatches should be committed, and they can bless third party patches.\n\n> I see no reason why pgaccess needs a separate repository, I presume it can be\n> fetched from the postgres CVS as a single entity. Although I haven't tried\n> this.\n\nWorks fine. The only tricky part would be providing the windows binary bits\n(dlls) that have traditionally resided on teo's site.\n\n> \n> BTW, I had been wondering what to call the Schema tab now that that label is\n> required for schemas rather than design.\n\nIf you check the archives, when I submitted that patch, I had the\nforesight to ask if anyone could come up with a better name, foreseeing\nthe collision that is happening today: no one came up with anything.\nI agree it needs renaming. How about one of 'Charting', 'Graphing',\n'Diagrams', 'Graphics', 'PrettyPictures', 'BossBait' ...\n\nRoss\n",
"msg_date": "Thu, 9 May 2002 13:02:02 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess"
},
{
"msg_contents": "Nigel J. Andrews writes:\n\n> BTW, I had been wondering what to call the Schema tab now that that label is\n> required for schemas rather than design.\n\n\"Design\"?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 10 May 2002 21:13:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess"
},
{
"msg_contents": "On Fri, May 10, 2002 at 09:13:20PM +0200, Peter Eisentraut wrote:\n> Nigel J. Andrews writes:\n> \n> > BTW, I had been wondering what to call the Schema tab now that that label is\n> > required for schemas rather than design.\n> \n> \"Design\"?\n\nThought about it, but it seems to 'active' for what's behind the tab:\ndrawing pretty pictures. There's no way to draw arbitrary tables and\ncreate them, for example. Also, 'Design' is used a the button contrasting\nto 'New' and 'Open' for things like the Table tab.\n\nI think I'm leaning toward \"Diagram\", since that's the verb as well as\nthe noun. Hmm, on further inspection, all the tabs are plural nouns, so\n\"Designs\" or \"Diagrams\", perhaps.\n\nRoss\n\n",
"msg_date": "Fri, 10 May 2002 15:12:18 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess"
},
{
"msg_contents": "Nigel J. Andrews wrote:\n> > gborg is another way to organize, and of course www.pgaccess.org is a\n> > way too. It partly depends on how you see the future of pgaccess. If it\n> > stays tightly coupled to pgsql, then perhaps it may as way stay\n> > organized with pgsql.\n> \n> I see no reason why pgaccess needs a separate repository, I presume it can be\n> fetched from the postgress CVS as a single entity. Although I haven't tried\n> this.\n\n[ Sorry, just catching up.]\n\nYou can easily checkout a subdirectory from CVS:\n\n\t$ cvs checkout pgsql/src/bin/pgaccess\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 4 Jun 2002 14:19:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess"
}
] |
[
{
"msg_contents": "I have tried to install postgresql 7.2.1 on the new Red Hat 7.3. The\nregression test failed in three cases:\n\n- abstime\n- tinterval\n- horology\n\nI have included to the mail the regression.diffs and regression.out\nfiles...\n\nIs there any reason? Any tip?\n\nThe machine has two processors with 1Gb of Ram.\n\n-- \nDoct. Eng. Denis Gasparin: denis@edistar.com\n---------------------------\nProgrammer & System Administrator - Edistar srl",
"msg_date": "09 May 2002 18:02:17 +0200",
"msg_from": "Denis Gasparin <denis@edistar.com>",
"msg_from_op": true,
"msg_subject": "Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "Denis Gasparin <denis@edistar.com> writes:\n> Is there any reason? Any tip?\n\nThe diffs seem to be in the same places as for Solaris --- perhaps\nsomeone tried to correct 7.3's timezone database for the 1947 DST\nrules? (AFAIK, Solaris is more correct for 1947 than most other\ntimezone databases.) If so, they blew it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 13:39:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3 "
},
{
"msg_contents": "Denis Gasparin <denis@edistar.com> writes:\n\n> I have tried to install postgresql 7.2.1 on the new Red Hat 7.3. The\n> regression test failed in three cases:\n\nThe tests are buggy in some cases... the expected results are not\nlocale aware (e.g. \"12.42\" vs \"12,42\", \"A C b\" v \"A b C\")\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "09 May 2002 22:57:18 +0000",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "I've tried to replace the RH 7.3 timezone database (in\n/usr/share/zoneinfo) with that of RH 7.2 (where regress tests work).\n\nThe problem persists... In RH 7.3 there is also a new version of the\nglibc... (2.2.5 - 34)\n\nIf anyone has any idea...\n\n-- \nDoct. Eng. Denis Gasparin: denis@edistar.com\n---------------------------\nProgrammer & System Administrator - Edistar srl\n\nIl gio, 2002-05-09 alle 19:39, Tom Lane ha scritto:\n> Denis Gasparin <denis@edistar.com> writes:\n> > Is there any reason? Any tip?\n> \n> The diffs seem to be in the same places as for Solaris --- perhaps\n> someone tried to correct 7.3's timezone database for the 1947 DST\n> rules? (AFAIK, Solaris is more correct for 1947 than most other\n> timezone databases.) If so, they blew it ...\n> \n> \t\t\tregards, tom lane\n\n\n",
"msg_date": "10 May 2002 10:22:08 +0200",
"msg_from": "Denis Gasparin <denis@edistar.com>",
"msg_from_op": true,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "> Denis Gasparin <denis@edistar.com> writes:\n> \n> > I have tried to install postgresql 7.2.1 on the new Red Hat 7.3. The\n> > regression test failed in three cases:\n> \n> The tests are buggy in some cases... the expected results are not\n> locale aware (e.g. \"12.42\" vs \"12,42\", \"A C b\" v \"A b C\")\n\nThis isn't the case you've indicated... This is the regression test diff\noutput:\n\n*** ./expected/abstime.out Wed Nov 21 19:27:25 2001\n--- ./results/abstime.out Thu May 9 17:27:00 2002\n***************\n*** 44,50 ****\n | Wed Dec 31 16:00:00 1969 PST\n | infinity\n | -infinity\n! | Sat May 10 23:59:12 1947 PST\n | invalid\n (7 rows)\n \n--- 44,50 ----\n | Wed Dec 31 16:00:00 1969 PST\n | infinity\n | -infinity\n! | Sat May 10 15:59:12 1947 PST\n | invalid\n (7 rows)\n \nAs you can see the time is different not the time/date format...\n\n-- \nDoct. Eng. Denis Gasparin: denis@edistar.com\n---------------------------\nProgrammer & System Administrator - Edistar srl\n\n\n",
"msg_date": "10 May 2002 10:50:52 +0200",
"msg_from": "Denis Gasparin <denis@edistar.com>",
"msg_from_op": true,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "...\n> This isn't the case you've indicated... This is the regression test diff\n> output:\n> *** ./expected/abstime.out Wed Nov 21 19:27:25 2001\n> --- ./results/abstime.out Thu May 9 17:27:00 2002\n> ! | Sat May 10 23:59:12 1947 PST\n...\n> ! | Sat May 10 15:59:12 1947 PST\n...\n> As you can see the time is different not the time/date format...\n\nOoh. Maybe the SuSE folks who were working on glibc actually changed it\nthe way they were talking about. Here (may be) the issue:\n\nPosix demands that mktime() and friends return -1 on error or on \"unable\nto convert time\". Traditionally, many implementations (AIX was a notable\nexception, adding to its rep as a brain-damaged system) ignored that\nrestriction, willingly converting for times before 1970, and returning a\nnegative number. And timezone databases supported times before 1970, and\nthe world was happy.\n\nBut that Posix requirement made it difficult to support the time \"one\nsecond before 1970\", which happens to be \"-1\". So the idea was to start\nreturning -1 for any time before 1970. That sucks. But for PostgreSQL,\nwe actually don't use the results of mktime(), but we *do* use a\nside-effect of the call, which fills in the time zone information (and a\nfew other fields) in the structures given to mktime() as *input*. We\ncurrently ignore the status return of mktime(), since we guard against\nillegal calling parameters another way.\n\nI would be very happy if the implementation of mktime() continued to\n*properly* fill in these fields, even though it insists on returning an\nerror condition for dates before 1970. As it is (assuming that a change\nactually has occurred) the timezone system in GNU systems has been badly\nand gratuitously damaged in the newest releases of glibc.\n\nAny suggestions on how to pursue this??\n\n - Thomas\n",
"msg_date": "Fri, 10 May 2002 06:55:00 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "In article <3CDBD134.15FCF31D@fourpalms.org>,\nThomas Lockhart <lockhart@fourpalms.org> wrote:\n>error condition for dates before 1970. As it is (assuming that a change\n>actually has occurred) the timezone system in GNU systems has been badly\n>and gratuitously damaged in the newest releases of glibc.\n\nThe mktime.c for glibc hasn't changed since Jul 5, 2001.\n\nmrc\n\n-- \n Mike Castle dalgoda@ix.netcom.com www.netcom.com/~dalgoda/\n We are all of us living in the shadow of Manhattan. -- Watchmen\nfatal (\"You are in a maze of twisty compiler features, all different\"); -- gcc\n",
"msg_date": "Fri, 17 May 2002 15:56:50 -0800",
"msg_from": "dalgoda@ix.netcom.com (Mike Castle)",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "> >error condition for dates before 1970. As it is (assuming that a change\n> >actually has occurred) the timezone system in GNU systems has been badly\n> >and gratuitously damaged in the newest releases of glibc.\n> The mktime.c for glibc hasn't changed since Jul 5, 2001.\n\nThere is a thread (I'm pretty sure it was on-list) which points at a\nchange log which shows that the behavior of mktime() was changed in the\nglibc development tree. My recollection might be wrong on details, but I\nstrongly recall this to be substantially correct.\n\nHopefully the changes are backed out or at least the \"set the time zone\"\nside effect continues to be supported (it wasn't clear what the actual\nchange in implementation would do; I didn't look at the source).\nOtherwise, Linux systems will look more and more like they were built by\nkids who weren't even alive before 1970. Hmm, maybe that's the\nexplanation... ;)\n\n - Thomas (*definitely* pre-1970)\n",
"msg_date": "Fri, 17 May 2002 18:23:38 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "On Friday 10 May 2002 09:55 am, Thomas Lockhart wrote:\n> Ooh. Maybe the SuSE folks who were working on glibc actually changed it\n> the way they were talking about. Here (may be) the issue:\n\n> But that Posix requirement made it difficult to support the time \"one\nsecond before 1970\", which happens to be \"-1\". So the idea was to start\nreturning -1 for any time before 1970. That sucks. But for PostgreSQL,\nwe actually don't use the results of mktime(), but we *do* use a\nside-effect of the call, which fills in the time zone information (and a\nfew other fields) in the structures given to mktime() as *input*. We\ncurrently ignore the status return of mktime(), since we guard against\nillegal calling parameters another way.\n\n> I would be very happy if the implementation of mktime() continued to\n*properly* fill in these fields, even though it insists on returning an\nerror condition for dates before 1970. As it is (assuming that a change\nactually has occurred) the timezone system in GNU systems has been badly\nand gratuitously damaged in the newest releases of glibc.\n\nWell, I went to bat for this a little bit ago, relating to a bug report, but \nI've struck out. The ISO C standard spells it out plainly that dates before \n1970 are just simply illegal for mktime and friends. So, glibc was aligned \nwith the ISO standard (available online -- see my link in the post regarding \nthe mktime bug in Red Hat 7.3). This will change ALL newer Linux \ndistributions, and thus we need to correct our usage, as the glibc people are \nnot likely to change -- in fact, they will likely say that our code is \nbroken.\n\n> Any suggestions on how to pursue this??\n\nWhat information do we need?\n\nFrankly, relying on a side-effect of a call in this way is our bug. Sorry. \nIt was a neat hack, but it now is broken. Where is this in the tree? I can \ntake a look at it if no one else wants to.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 20 May 2002 23:30:41 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Well, I went to bat for this a little bit ago, relating to a bug report, but \n> I've struck out. The ISO C standard spells it out plainly that dates before \n> 1970 are just simply illegal for mktime and friends.\n\nWell, since glibc apparently has no higher ambition than to work for\npost-1970 dates, we may have little choice but to throw out mktime and\nimplement our own timezone library. Ugh. It is pretty damn annoying\nthat they aren't interested in fixing their problem...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 May 2002 23:39:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3 "
},
{
"msg_contents": "[HACKERS added to cc:, GENERAL dropped]\nOn Monday 20 May 2002 11:39 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Well, I went to bat for this a little bit ago, relating to a bug report,\n> > but I've struck out. The ISO C standard spells it out plainly that dates\n> > before 1970 are just simply illegal for mktime and friends.\n\n> Well, since glibc apparently has no higher ambition than to work for\n> post-1970 dates, we may have little choice but to throw out mktime and\n> implement our own timezone library. Ugh. It is pretty damn annoying\n> that they aren't interested in fixing their problem...\n\nThey are just wanting to be standard. I know this; I just can't say how I \nknow this. But the link to the ISO definition is \nhttp://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap04.html#tag_04_14 \nFWIW. \n\nWhile I don't agree with the standard, trying to be standard isn't really a \n'problem'. Relying on a side-effect of a nonstandard call is the problem.\n\nCan we pull in the BSD C library's mktime()? OR otherwise utilize it to fit \nthis bill?\n\nLooking at src/backend/utils/adt/datetime.c indicates that it might not be too \ndifficult. It was WISE to centralize the use of mktime in the one function, \nit appears.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 20 May 2002 23:52:29 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "On Tuesday 21 May 2002 09:23 am, Thomas Lockhart wrote:\n> > While I don't agree with the standard, trying to be standard isn't really\n> > a 'problem'. Relying on a side-effect of a nonstandard call is the\n> > problem.\n\n> In my mind no one associated with glibc gets high marks for anything\n> having to do with this change. It is unbelievably short sighted.\n\nOh, I most certainly agree with you on this Thomas. The glibc people are just \nbeing adamant about it being 'standard.' And I certainly didn't mean to step \non your toes, either -- as I can tell this is a sore point for you. I'm just \ntrying to see what, if anything, we can do about it.\n\n> > Can we pull in the BSD C library's mktime()? OR otherwise utilize it to\n> > fit this bill?\n\n> Maybe, but probably forcing a *really* annoying code fork or patch to\n> get the entry points to play nice with glibc. We'll also need to figure\n> out how to manage the time zone database and how to keep it in sync.\n\n> This is a seriously big problem, and we'll need to research what to do\n> next. One possibility is to strip out all string time zone behavior and\n> support only the SQL date/time standard, which specifies only numeric\n> offsets and ignores real-world time zone usage and behaviors. Hmm, IBM\n> contributed to that standard too, maybe the common thread is not a\n> coincidence.\n\nWell, the existing behavior, according to my first read of the code, is to \nassume UTC if the time_t is predicted to be out of range. There is a macro \nfor this, I see. And the problem is that the out-of-range condition is \nhappening at a different place. I don't like this thought, but the most \nconsistent, least-common-denominator tack would to be flag anything prior to \nepoch as out-of-range, even if the underlying calls can handle negative \ntime_t. I don't like that one whit. But I like inconsistent data even less.\n\n> The new glibc behavior is really disappointing. Help and ideas are\nappreciated; reimplementing an entire date/time system may be a lot of\nwork.\n\nWell, it was good foresight on your part to put all the mktime stuff in the \none place. I'm going to go through it and see if I understand what I'm \nlooking at first.\n\nBut I see a couple of possibilities that we can control:\n1.)\tHave configure test for broken mktime and sub our own mktime in that case \n(if this is even possible -- while the various BSD's have mktime and friends, \nhow difficult is it going to be to unshackle that from a BSD kernel \nunderneath -- I've not looked at the code for OpenBSD's mktime (which I have \non hand), but I guess I will take a look now);\n2.)\tRewrite our stuff to not depend on any mktime, and thus be more portable \n(perhaps?).\n\nBut, in any case, I didn't mean to step on your toes by any of my comments; I \ncompletely agree with you that glibc and the ISO C standard cited are daft in \nthis.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 May 2002 12:06:28 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "...\n> But, in any case, I didn't mean to step on your toes by any of my comments; I\n> completely agree with you that glibc and the ISO C standard cited are daft in\n> this.\n\nNo complaints from my toes; I was just ventilating about stupid\nbreakage.\n\n - Thomas\n",
"msg_date": "Tue, 21 May 2002 09:21:30 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Psql 7.2.1 Regress tests failed on RedHat 7.3"
},
{
"msg_contents": "In article <17682.1021952389@sss.pgh.pa.us>,\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n>Well, since glibc apparently has no higher ambition than to work for\n>post-1970 dates, we may have little choice but to throw out mktime and\n>implement our own timezone library. Ugh. It is pretty damn annoying\n>that they aren't interested in fixing their problem...\n\nUmmm... I'm no great admirer of Drepper, but exactly why is this a glibc\nproblem?\n\nThis was clearly an undocumented side-effect that, by pure chance, worked\nwell until now. So it happens to be glibc that first trips up this issue,\nbut it could very well have been any mktime implementation.\n\nI think it's unfair to blame glibc. At least until some other standard\ncomes into existence that makes glibc's implementation invalid.\n\nmrc\n-- \n Mike Castle dalgoda@ix.netcom.com www.netcom.com/~dalgoda/\n We are all of us living in the shadow of Manhattan. -- Watchmen\nfatal (\"You are in a maze of twisty compiler features, all different\"); -- gcc\n",
"msg_date": "Thu, 23 May 2002 15:04:17 -0700",
"msg_from": "dalgoda@ix.netcom.com (Mike Castle)",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3 "
},
{
"msg_contents": "dalgoda@ix.netcom.com (Mike Castle) writes:\n> I think it's unfair to blame glibc. At least until some other standard\n> comes into existence that makes glibc's implementation invalid.\n\nNonsense. Are you saying it's a mistake that the Linux timezone\ndatabases (and those of most every other Unix, other than maybe AIX)\ncover pre-1970 years? The folks who took the trouble to develop\nthose databases did not think that Unix time began on 1970-01-01.\n\nThe glibc guys have decided that they will hew to the letter of a\nlowest-common-denominator standard by removing functionality,\nnamely the ability to do anything with pre-1970 dates.\n\nSince they evidently value a narrow reading of a spec over\nfunctionality, I cannot debate with them. But I think it's a seriously\nmisguided position. glibc will join AIX as one of only two libc's on\nthe planet that do not have this functionality. Meanwhile, the Postgres\nproject will be wasting manpower on reinventing a wheel we shouldn't\nhave to reinvent.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 May 2002 18:51:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Psql 7.2.1 Regress tests failed on RedHat 7.3 "
}
] |
[
{
"msg_contents": "With the current SRF patch, in certain circumstances selecting from a \nVIEW produces \"Buffer Leak\" warnings, while selecting from the function \nitself does not. Also the VIEW returns only one of the two expected \nrows. The same SQL function when declared as \"... getfoo(int) RETURNS \nint AS ...\" instead of \"... getfoo(int) RETURNS *setof* int AS...\" does \nnot produce the warning. Any ideas what I should be focusing on to track \nthis down? Does anyone have any favorite troubleshooting techniques for \nthis type of problem?\n\nThanks,\nJoe\n\n-- sql, proretset = t, prorettype = b\nDROP FUNCTION getfoo(int);\nDROP\nCREATE FUNCTION getfoo(int) RETURNS setof int AS 'SELECT fooid FROM foo \nWHERE fooid = $1;' LANGUAGE SQL;\nCREATE\nSELECT * FROM getfoo(1) AS t1;\n getfoo\n--------\n 1\n 1\n(2 rows)\n\nDROP VIEW vw_getfoo;\nDROP\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nCREATE\nSELECT * FROM vw_getfoo;\npsql:../srf-test.sql:21: WARNING: Buffer Leak: [055] (freeNext=-3, \nfreePrev=-3, rel=16570/123204, blockNum=1, flags=0x4, refcount=1 1)\npsql:../srf-test.sql:21: WARNING: Buffer Leak: [059] (freeNext=-3, \nfreePrev=-3, rel=16570/123199, blockNum=0, flags=0x85, refcount=1 1)\n getfoo\n--------\n 1\n(1 row)\n\n",
"msg_date": "Thu, 09 May 2002 10:45:09 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "troubleshooting pointers"
},
{
"msg_contents": "Hello, Joe!\n\n JC> With the current SRF patch, in certain circumstances selecting from\n JC> a\n JC> VIEW produces \"Buffer Leak\" warnings, while selecting from the\n JC> function itself does not. Also the VIEW returns only one of the two\n\nSelecting from the function produces such a warning when using it with\nlimit,\nbut it does not when the function returns less rows than specified in limit\n.\ne.g.\n\njust_fun=# create table testtab(i integer, v varchar);\nCREATE\njust_fun=# insert into testtab values(1,'one');\nINSERT 16592 1\njust_fun=# insert into testtab values(2,'two');\nINSERT 16593 1\njust_fun=# insert into testtab values(3,'three');\nINSERT 16594 1\njust_fun=# insert into testtab values(1,'one again');\nINSERT 16595 1\njust_fun=# create function fun(integer) returns setof testtab as 'select *\nfrom testtab where i= $1;' language 'sql';\njust_fun=# select * from fun(1) as fun;\n i | v\n---+-----------\n 1 | one\n 1 | one again\n(2 rows)\n\njust_fun=# select * from fun(1) as fun limit 1;\nWARNING: Buffer Leak: [050] (freeNext=-3, freePrev=-3, rel=16570/16587,\nblockNum=0, flags=0x85, refcount=1 2)\n i | v\n---+-----\n 1 | one\n(1 row)\n\n....And there is no warning with \"ORDER BY\"\n\njust_fun=# select * from fun(1) as fun order by v limit 1;\n i | v\n---+-----\n 1 | one\n(1 row)\n\n\nHope this info maybe useful to solve the problem.\n\nBy the way, could you give an example of C-function returning set?\n\n JC> expected rows. The same SQL function when declared as \"...\n JC> getfoo(int) RETURNS int AS ...\" instead of \"... getfoo(int) RETURNS\n JC> *setof* int AS...\" does not produce the warning. Any ideas what I\n JC> should be focusing on to track this down? Does anyone have any\n JC> favorite troubleshooting techniques for this type of problem?\n\n JC> Thanks,\n JC> Joe\n\nThank you for your work in this direction!\n\nWith best regards, Valentine Zaretsky\n\n",
"msg_date": "Thu, 9 May 2002 23:13:50 +0300",
"msg_from": "\"Valentine Zaretsky\" <valik@apex.dp.ua>",
"msg_from_op": false,
"msg_subject": "Re: troubleshooting pointers"
},
{
"msg_contents": "Valentine Zaretsky wrote:\n> just_fun=# select * from fun(1) as fun limit 1;\n> WARNING: Buffer Leak: [050] (freeNext=-3, freePrev=-3, rel=16570/16587,\n> blockNum=0, flags=0x85, refcount=1 2)\n> i | v\n> ---+-----\n> 1 | one\n> (1 row)\n> \n> ....And there is no warning with \"ORDER BY\"\n> \n> just_fun=# select * from fun(1) as fun order by v limit 1;\n> i | v\n> ---+-----\n> 1 | one\n> (1 row)\n> \n> \n> Hope this info maybe useful to solve the problem.\n\nHmm. Yes, it looks like this is probably the same or a related issue.\n\n\n> \n> By the way, could you give an example of C-function returning set?\n> \n\nIn contrib/dblink, see dblink.c for a couple of examples (dblink(), \ndblink_get_pkey()), or look at pg_stat_get_backend_idset() in the \nbackend code. I haven't written a C-function returning a setof composite \ntype yet, but probably will soon, because I'll need it for testing (and \nultimately for the regression test script).\n\nThanks for the help!\n\nJoe\n\n\n",
"msg_date": "Thu, 09 May 2002 13:32:57 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: troubleshooting pointers"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> With the current SRF patch, in certain circumstances selecting from a \n> VIEW produces \"Buffer Leak\" warnings, while selecting from the function \n> itself does not. Also the VIEW returns only one of the two expected \n> rows.\n\nThe buffer leak suggests failure to shut down a plan tree (ie, no\nExecutorEnd call). Probably related to not running the VIEW to\ncompletion, but it's hard to guess at the underlying cause.\n\nDo the plan trees (EXPLAIN display) look the same in both cases?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 19:28:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: troubleshooting pointers "
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>With the current SRF patch, in certain circumstances selecting from a \n>>VIEW produces \"Buffer Leak\" warnings, while selecting from the function \n>>itself does not. Also the VIEW returns only one of the two expected \n>>rows.\n> \n> The buffer leak suggests failure to shut down a plan tree (ie, no\n> ExecutorEnd call). Probably related to not running the VIEW to\n> completion, but it's hard to guess at the underlying cause.\n> \n> Do the plan trees (EXPLAIN display) look the same in both cases?\n\nYes, but it suffers from the issue you brought up yesterday -- i.e. \nEXPLAIN doesn't run from within the function, and EXPLAIN outside the \nfunction (or VIEW which calls it) doesn't show very much:\n\ntest=# EXPLAIN SELECT * FROM vw_getfoo;\n QUERY PLAN\n-----------------------------------------------------------\n Function Scan on getfoo (cost=0.00..0.00 rows=0 width=0)\n(1 row)\n\ntest=# EXPLAIN SELECT * FROM getfoo(1);\n QUERY PLAN\n-----------------------------------------------------------\n Function Scan on getfoo (cost=0.00..0.00 rows=0 width=0)\n(1 row)\n\nI found an explaination you gave a while back which sounds like it \nexplains the problem:\nhttp://archives.postgresql.org/pgsql-bugs/2001-06/msg00051.php\n\nI also confirmed that postquel_end(), which calls ExecutorEnd(), never \ngets called for the VIEW case (or the LIMIT case that was pointed out on \nan earlier post).\n\nJust now I was looking for a way to propagate the necessary information \nto call ExecutorEnd() from ExecEndFunctionScan() in the case that fmgr \ndoesn't. It looks like I might be able to add a member to the \nExprContext struct for this purpose. Does this sound like the correct \n(or at least a reasonable) approach?\n\nThanks,\n\nJoe\n\n\n",
"msg_date": "Thu, 09 May 2002 16:42:36 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: troubleshooting pointers"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Just now I was looking for a way to propagate the necessary information \n> to call ExecutorEnd() from ExecEndFunctionScan() in the case that fmgr \n> doesn't. It looks like I might be able to add a member to the \n> ExprContext struct for this purpose. Does this sound like the correct \n> (or at least a reasonable) approach?\n\nYeah, this is something that's bothered me in the past: with the\nexisting API, a function-returning-set will not get a chance to shut\ndown cleanly and release resources if its result is not read all the\nway to completion. You can demonstrate the problem without any\nuse of the SRF patch. Using current CVS tip (no patch), and the\nregression database:\n\nregression=# create function foo(int) returns setof int as '\nregression'# select unique1 from tenk1 where unique2 > $1'\nregression-# language sql;\n\nregression=# select foo(9990) limit 4;\nWARNING: Buffer Leak: [009] (freeNext=-3, freePrev=-3, rel=16570/135224, blockNum=29, flags=0x4, refcount=1 1)\nWARNING: Buffer Leak: [021] (freeNext=-3, freePrev=-3, rel=16570/18464, blockNum=232, flags=0x4, refcount=1 1)\n foo\n------\n 4093\n 6587\n 6093\n 429\n(4 rows)\n\nI don't much care for the thought of trawling every expression tree\nlooking for functions-returning-set during plan shutdown, so the thought\nthat comes to mind is to expect functions that want a shutdown callback\nto register themselves somehow. Adding a list of callbacks to\nExprContext seems pretty reasonable, but you'd also need some link in\nReturnSetInfo to let the function find the ExprContext to register\nitself with. Then FreeExprContext would call the callbacks.\n\nHmm ... another advantage of doing this is that the function would be\nable to find the ecxt_per_query_memory associated with the ExprContext.\nThat would be a Good Thing.\n\nWe should also think about the fcache (FunctionCache) struct and whether\nthat needs to tie into this. See the FIXME in utils/fcache.h.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 20:15:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: troubleshooting pointers "
},
{
"msg_contents": "Tom Lane wrote:\n> I don't much care for the thought of trawling every expression tree\n> looking for functions-returning-set during plan shutdown, so the thought\n> that comes to mind is to expect functions that want a shutdown callback\n> to register themselves somehow. Adding a list of callbacks to\n> ExprContext seems pretty reasonable, but you'd also need some link in\n> ReturnSetInfo to let the function find the ExprContext to register\n> itself with. Then FreeExprContext would call the callbacks.\n\nI've made changes which fix this and will send them in with a revised \nSRF patch later today. Summary of design:\n1.) moved the execution_state struct and ExecStatus enum to executor.h\n2.) added \"void *es\" member to ExprContext\n3.) added econtext member to ReturnSetInfo\n4.) set rsi->econtext on the way in at ExecMakeFunctionResult()\n5.) set rsi->econtext->es on the way in at fmgr_sql()\n6.) used econtext->es on the way out at ExecFreeExprContext() to call \nExecutorEnd() if needed (because postquel_execute() never got the chance).\n\nOne note: I changed ExecFreeExprContext() because that's where all the \naction was for SQL function calls. FreeExprContext() was not involved \nfor the test case, but it looked like it probably should have the same \nchanges, so I made them there also.\n\n> \n> Hmm ... another advantage of doing this is that the function would be\n> able to find the ecxt_per_query_memory associated with the ExprContext.\n> That would be a Good Thing.\n\nWhat does this allow done that can't be done today?\n\n> \n> We should also think about the fcache (FunctionCache) struct and whether\n> that needs to tie into this. See the FIXME in utils/fcache.h.\n\nWhile I was at it, I added an fcache member to ExprContext, and \npopulated it in ExecMakeFunctionResult() for SRF cases. I wasn't sure \nwhat else to do with it at the moment, but at least it is a step in the \nright direction.\n\n\nJoe\n\n",
"msg_date": "Fri, 10 May 2002 11:02:28 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: troubleshooting pointers"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> Adding a list of callbacks to\n>> ExprContext seems pretty reasonable, but you'd also need some link in\n>> ReturnSetInfo to let the function find the ExprContext to register\n>> itself with. Then FreeExprContext would call the callbacks.\n\n> I've made changes which fix this and will send them in with a revised \n> SRF patch later today. Summary of design:\n> 1.) moved the execution_state struct and ExecStatus enum to executor.h\n> 2.) added \"void *es\" member to ExprContext\n> 3.) added econtext member to ReturnSetInfo\n> 4.) set rsi->econtext on the way in at ExecMakeFunctionResult()\n> 5.) set rsi->econtext->es on the way in at fmgr_sql()\n> 6.) used econtext->es on the way out at ExecFreeExprContext() to call \n> ExecutorEnd() if needed (because postquel_execute() never got the chance).\n\nUm. I don't like that; it assumes not only that ExecutorEnd is the only\nkind of callback needed, but also that there is at most one function\nper ExprContext that needs a shutdown callback. Neither of these\nassumptions hold water IMO.\n\nThe design I had in mind was more like this: add to ExprContext a list\nheader field pointing to a list of structs along the lines of\n\n\tstruct exprcontext_callback {\n\t\tstruct exprcontext_callback *next;\n\t\tvoid (*function) (Datum);\n\t\tDatum arg;\n\t}\n\nand then call each specified function with given argument during\nFreeExprContext. Probably ought to be careful to do that in reverse\norder of registration. We'd also need to invent a RescanExprContext\noperation to call the callbacks during a Rescan. The use of Datum\n(and not, say, void *) as PG's standard callback arg type was settled on\nsome time ago --- originally for on_proc_exit IIRC --- and seems to have\nworked well enough.\n\n>> Hmm ... 
another advantage of doing this is that the function would be\n>> able to find the ecxt_per_query_memory associated with the ExprContext.\n>> That would be a Good Thing.\n\n> What does this allow done that can't be done today?\n\nIt provides a place for the function to allocate stuff that needs to\nlive over multiple calls, ie, until it gets its shutdown callback.\nRight now a function has to use TransactionCommandContext for that,\nbut that's really too coarse-grained.\n\n>> We should also think about the fcache (FunctionCache) struct and whether\n>> that needs to tie into this. See the FIXME in utils/fcache.h.\n\n> While I was at it, I added an fcache member to ExprContext, and \n> populated it in ExecMakeFunctionResult() for SRF cases. I wasn't sure \n> what else to do with it at the moment, but at least it is a step in the \n> right direction.\n\nWell, I was debating whether that's good or not. The existing fcache\napproach is wrong (per cited FIXME); it might be better not to propagate\naccess of it into more places. Unless you can see a specific reason to\nallow the function to have access to the fcache struct, I think I'm\ninclined not to.\n\nWhat's really more relevant here is that during the hypothetical new\nRescanExprContext function, we ought to go around and clear any fcaches\nin the context that have setArgsValid = true, so that they will be\nrestarted afresh during the next scan of the plan. (The fact that that\ndoesn't happen now is another shortcoming of the existing set-functions-\nin-expressions code.) So this suggests making a callback function type\nspecifically to do that, and registering every fcache that is executing\na set function in the callback list...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 14:40:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: troubleshooting pointers "
},
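[Editorial note: the list-of-callbacks design Tom sketches above can be modeled in isolation. This is a toy sketch, not backend code — the ExprContext here is reduced to just the list head, and the names (`ExprContext_CB`, `RegisterExprContextCallback`, `ShutdownExprContext`) are chosen to suggest the design, not quoted from the tree.]

```c
#include <assert.h>
#include <stdlib.h>

/* Datum stands in for PostgreSQL's generic pass-by-value type. */
typedef unsigned long Datum;

typedef struct ExprContext_CB
{
    struct ExprContext_CB *next;
    void (*function) (Datum arg);
    Datum arg;
} ExprContext_CB;

/* Toy ExprContext: only the callback list head matters here. */
typedef struct
{
    ExprContext_CB *callbacks;
} ExprContext;

/* Register a shutdown callback by pushing onto the list head. */
static void
RegisterExprContextCallback(ExprContext *econtext,
                            void (*function) (Datum), Datum arg)
{
    ExprContext_CB *cb = malloc(sizeof(ExprContext_CB));

    cb->function = function;
    cb->arg = arg;
    cb->next = econtext->callbacks;
    econtext->callbacks = cb;
}

/* Walking from the head fires callbacks in reverse registration
 * order, which is what FreeExprContext (or a rescan) would do. */
static void
ShutdownExprContext(ExprContext *econtext)
{
    while (econtext->callbacks != NULL)
    {
        ExprContext_CB *cb = econtext->callbacks;

        econtext->callbacks = cb->next;
        cb->function(cb->arg);
        free(cb);
    }
}

/* A recording callback so the firing order is observable. */
static Datum fired[8];
static int nfired = 0;

static void
record_shutdown(Datum arg)
{
    fired[nfired++] = arg;
}
```

Because registration pushes onto the head and shutdown walks from the head, reverse-order invocation falls out of the list structure for free — no explicit reversal is needed.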
{
"msg_contents": "Tom Lane wrote:\n> Um. I don't like that; it assumes not only that ExecutorEnd is the only\n> kind of callback needed, but also that there is at most one function\n> per ExprContext that needs a shutdown callback. Neither of these\n> assumptions hold water IMO.\n> \n> The design I had in mind was more like this: add to ExprContext a list\n> header field pointing to a list of structs along the lines of\n> \n> \tstruct exprcontext_callback {\n> \t\tstruct exprcontext_callback *next;\n> \t\tvoid (*function) (Datum);\n> \t\tDatum arg;\n> \t}\n> \n> and then call each specified function with given argument during\n> FreeExprContext. Probably ought to be careful to do that in reverse\n> order of registration. We'd also need to invent a RescanExprContext\n> operation to call the callbacks during a Rescan. The use of Datum\n> (and not, say, void *) as PG's standard callback arg type was settled on\n> some time ago --- originally for on_proc_exit IIRC --- and seems to have\n> worked well enough.\n\nWell, I guess I set my sights too low ;-) This is a very nice design.\n\nI have the shutdown callback working now, and will send a new patch in a \nfew minutes. I have not started RescanExprContext() yet, but will do it \nwhen I address rescans in general.\n\n> What's really more relevant here is that during the hypothetical new\n> RescanExprContext function, we ought to go around and clear any fcaches\n> in the context that have setArgsValid = true, so that they will be\n> restarted afresh during the next scan of the plan. (The fact that that\n> doesn't happen now is another shortcoming of the existing set-functions-\n> in-expressions code.) So this suggests making a callback function type\n> specifically to do that, and registering every fcache that is executing\n> a set function in the callback list...\n\nI also added FunctionCachePtr_callback struct and a member to \nExprContext. 
I have not yet created the registration or shutdown \nfunctions, but again, I'll work on them as part of the rescan work.\n\nI still have a couple of issues related to VIEWs that I need to figure \nout, then I'll start the rescan work.\n\nThanks for the review and help!\n\nJoe\n\n",
"msg_date": "Fri, 10 May 2002 18:40:57 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: troubleshooting pointers"
},
{
"msg_contents": "Tom Lane wrote:\n> Um. I don't like that; it assumes not only that ExecutorEnd is the only\n> kind of callback needed, but also that there is at most one function\n> per ExprContext that needs a shutdown callback. Neither of these\n> assumptions hold water IMO.\n> \n> The design I had in mind was more like this: add to ExprContext a list\n> header field pointing to a list of structs along the lines of\n> \n> \tstruct exprcontext_callback {\n> \t\tstruct exprcontext_callback *next;\n> \t\tvoid (*function) (Datum);\n> \t\tDatum arg;\n> \t}\n> \n> and then call each specified function with given argument during\n> FreeExprContext. Probably ought to be careful to do that in reverse\n> order of registration. We'd also need to invent a RescanExprContext\n> operation to call the callbacks during a Rescan. The use of Datum\n> (and not, say, void *) as PG's standard callback arg type was settled on\n> some time ago --- originally for on_proc_exit IIRC --- and seems to have\n> worked well enough.\n\nHere's the patch, per my post to HACKERS.\n\nIt builds cleanly on my dev box, and passes all regression tests.\n\nThanks,\n\nJoe",
"msg_date": "Fri, 10 May 2002 18:45:27 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "SRF patch (was Re: [HACKERS] troubleshooting pointers)"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> ... I have not started RescanExprContext() yet, but will do it \n> when I address rescans in general.\n\n> I still have a couple of issues related to VIEWs that I need to figure \n> out, then I'll start the rescan work.\n\nIt's not unlikely that those issues are exactly due to not having rescan\nhandled properly. What misbehavior are you seeing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 22:07:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: troubleshooting pointers "
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>... I have not started RescanExprContext() yet, but will do it \n>>when I address rescans in general.\n> \n>>I still have a couple of issues related to VIEWs that I need to figure \n>>out, then I'll start the rescan work.\n> \n> It's not unlikely that those issues are exactly due to not having rescan\n> handled properly. What misbehavior are you seeing?\n\nHmm, that might just be it.\n\nWhen I select from a view based on a function which returns a base type, \nI only get the first row. When I select from a view which is based on a \nfunction returning a composite type, it triggers an assertion. I've \ntraced the latter down to a slot pointer which is reset to NULL \nsomewhere. Haven't had the time to get much further. In both cases, \nselecting from the function directly works great.\n\nThanks,\n\nJoe\n\n",
"msg_date": "Fri, 10 May 2002 21:04:32 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: troubleshooting pointers"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> It's not unlikely that those issues are exactly due to not having rescan\n>> handled properly. What misbehavior are you seeing?\n\n> Hmm, that might just be it.\n\n> When I select from a view based on a function which returns a base type, \n> I only get the first row. When I select from a view which is based on a \n> function returning a composite type, it triggers an assertion. I've \n> traced the latter down to a slot pointer which is reset to NULL \n> somewhere.\n\nUm, that's probably not it then. Rescan would only come into play for\na plan node that's being used as the inside of a join, or some other\ncontexts more complicated than this. A simple view ought to make no\ndifference at all in the generated plan --- perhaps there's some bit\nof the planner that you missed teaching about function RTEs or\nFunctionScan plan nodes?\n\nAnyway, I plan to review and apply your patch today, if I don't run\ninto any major problems. Will look to see if I see a reason for the\nview trouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 May 2002 10:57:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: troubleshooting pointers "
},
{
"msg_contents": "Tom Lane wrote:\n> Um, that's probably not it then. Rescan would only come into play for\n> a plan node that's being used as the inside of a join, or some other\n> contexts more complicated than this. A simple view ought to make no\n> difference at all in the generated plan --- perhaps there's some bit\n> of the planner that you missed teaching about function RTEs or\n> FunctionScan plan nodes?\n> \n> Anyway, I plan to review and apply your patch today, if I don't run\n> into any major problems. Will look to see if I see a reason for the\n> view trouble.\n\n(Sorry for the slow response -- been out all day)\n\nActually I found late last night that when the view is used, the RTE is \na RangeVar, so the RangeFunction code never gets executed. So I think \nyour comment above is right on. That may well explain both problems. \nI'll start looking again tonight.\n\nThanks,\n\nJoe\n\n\n",
"msg_date": "Sat, 11 May 2002 20:36:01 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: troubleshooting pointers"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Actually I found late last night that when the view is used, the RTE is \n> a RangeVar, so the RangeFunction code never gets executed. So I think \n> your comment above is right on. That may well explain both problems. \n\nHmm. I thought your view problems were explained by the cut-and-pasteos\nI noticed in _readRangeTblEntry. Maybe there's more though. I haven't\ngot to the point of trying to actually execute the patch ... will work\non it more today.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 May 2002 11:34:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: troubleshooting pointers "
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Here's the patch, per my post to HACKERS.\n> It builds cleanly on my dev box, and passes all regression tests.\n\nI've committed this with some revisions. The VIEW cases you were\nworried about seem to work now. I think you'll find that\nsingle-FROM-item cases generally work, and it's time to start worrying\nabout joins (ie, rescans).\n\nParameters also need thought. This should be rejected:\n\nregression=# select * from foo, foot(fooid) z where foo.f2 = z.f2;\nserver closed the connection unexpectedly\n\nOn the other hand, IMHO this should work:\n\nregression=# select * from foo where f2 in\nregression-# (select f2 from foot(foo.fooid) z where z.fooid = foo.fooid);\nserver closed the connection unexpectedly\n\nand here again rescanning is going to be critical.\n\n\t\t\tregards, tom lane\n\nPS: test case for above:\n\ncreate table foo(fooid int, f2 int);\ninsert into foo values(1, 11);\ninsert into foo values(2, 22);\ninsert into foo values(1, 111);\n\ncreate function foot(int) returns setof foo as '\nselect * from foo where fooid = $1' language sql;\n",
"msg_date": "Sun, 12 May 2002 16:33:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers) "
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>Here's the patch, per my post to HACKERS.\n>>It builds cleanly on my dev box, and passes all regression tests.\n> \n> \n> I've committed this with some revisions. The VIEW cases you were\n> worried about seem to work now. I think you'll find that\n> single-FROM-item cases generally work, and it's time to start worrying\n> about joins (ie, rescans).\n\nThanks! I've been offline most of the weekend, but I can get back on \nthis now. I'll start work on the rescans and test cases below right \naway. Were your revisions extensive? Any major misconceptions on my part?\n\nThanks,\n\nJoe\n\n\n> \n> Parameters also need thought. This should be rejected:\n> \n> regression=# select * from foo, foot(fooid) z where foo.f2 = z.f2;\n> server closed the connection unexpectedly\n> \n> On the other hand, IMHO this should work:\n> \n> regression=# select * from foo where f2 in\n> regression-# (select f2 from foot(foo.fooid) z where z.fooid = foo.fooid);\n> server closed the connection unexpectedly\n> \n> and here again rescanning is going to be critical.\n> \n> \t\t\tregards, tom lane\n> \n> PS: test case for above:\n> \n> create table foo(fooid int, f2 int);\n> insert into foo values(1, 11);\n> insert into foo values(2, 22);\n> insert into foo values(1, 111);\n> \n> create function foot(int) returns setof foo as '\n> select * from foo where fooid = $1' language sql;\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n\n",
"msg_date": "Sun, 12 May 2002 21:08:35 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers)"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Were your revisions extensive? Any major misconceptions on my part?\n\nI did a fair amount of polishing on the ExprContext callback stuff,\nand removed or moved around some node fields that I thought were\nunnecessary or in the wrong place. I also set up proper infrastructure\nfor cost estimation on function RTEs (though the estimates themselves\nare still pretty lame). Nothing I'd call \"major\"... more in the\nline of stylistic improvements...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 00:31:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers) "
},
{
"msg_contents": "Tom Lane wrote:\n> I've committed this with some revisions. The VIEW cases you were\n> worried about seem to work now. I think you'll find that\n> single-FROM-item cases generally work, and it's time to start worrying\n> about joins (ie, rescans).\n\nHi Tom,\n\nI've been looking through the SRF patch as-committed, and I think I \nunderstand most of your changes, but I have a question: FunctionNext() \nnow seems to *always* use a tuplestore instead of conditionally using \nthe store only for rescans, or if the function was explicitly marked as \nPM_MATERIALIZE. Do you still think there should be an option to project \ntuples without first storing them, or should we eliminate the notion of \nfunction mode and always materialize?\n\n\n> \n> Parameters also need thought. This should be rejected:\n> \n> regression=# select * from foo, foot(fooid) z where foo.f2 = z.f2;\n> server closed the connection unexpectedly\n\nI don't understand why this should be rejected, but it does fail for me \nalso, due to a NULL slot pointer. 
At what point should it be rejected?\n\n\n> \n> On the other hand, IMHO this should work:\n> \n> regression=# select * from foo where f2 in\n> regression-# (select f2 from foot(foo.fooid) z where z.fooid = foo.fooid);\n> server closed the connection unexpectedly\n\nThis also fails in (based on a quick look) exactly the same way -- a \nNULL slot pointer (econtext->ecxt_scantuple) passed to ExecEvalVar().\n\n\nBTW, The test cases I was using previously now all pass (copy below).\n\nThanks,\n\nJoe\n\n\nDROP TABLE foo;\nCREATE TABLE foo (fooid int, foosubid int, fooname text, primary \nkey(fooid,foosubid));\nINSERT INTO foo VALUES(1,1,'Joe');\nINSERT INTO foo VALUES(1,2,'Ed');\nINSERT INTO foo VALUES(2,1,'Mary');\n\n-- sql, proretset = f, prorettype = b\nDROP FUNCTION getfoo(int);\nCREATE FUNCTION getfoo(int) RETURNS int AS 'SELECT $1;' LANGUAGE SQL;\nSELECT * FROM getfoo(1) AS t1;\nDROP VIEW vw_getfoo;\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nSELECT * FROM vw_getfoo;\n\n-- sql, proretset = t, prorettype = b\nDROP FUNCTION getfoo(int);\nCREATE FUNCTION getfoo(int) RETURNS setof int AS 'SELECT fooid FROM foo \nWHERE fooid = $1;' LANGUAGE SQL;\nSELECT * FROM getfoo(1) AS t1;\nDROP VIEW vw_getfoo;\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nSELECT * FROM vw_getfoo;\n\n-- sql, proretset = t, prorettype = b\nDROP FUNCTION getfoo(int);\nCREATE FUNCTION getfoo(int) RETURNS setof text AS 'SELECT fooname FROM \nfoo WHERE fooid = $1;' LANGUAGE SQL;\nSELECT * FROM getfoo(1) AS t1;\nDROP VIEW vw_getfoo;\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nSELECT * FROM vw_getfoo;\n\n-- sql, proretset = f, prorettype = c\nDROP FUNCTION getfoo(int);\nCREATE FUNCTION getfoo(int) RETURNS foo AS 'SELECT * FROM foo WHERE \nfooid = $1;' LANGUAGE SQL;\nSELECT * FROM getfoo(1) AS t1;\nDROP VIEW vw_getfoo;\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nSELECT * FROM vw_getfoo;\n\n-- sql, proretset = t, prorettype = c\nDROP FUNCTION getfoo(int);\nCREATE FUNCTION 
getfoo(int) RETURNS setof foo AS 'SELECT * FROM foo \nWHERE fooid = $1;' LANGUAGE SQL;\nSELECT * FROM getfoo(1) AS t1;\nDROP VIEW vw_getfoo;\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nSELECT * FROM vw_getfoo;\n\n-- C, proretset = f, prorettype = b\nSELECT * FROM dblink_replace('123456789987654321', '99', 'HelloWorld');\nDROP VIEW vw_dblink_replace;\nCREATE VIEW vw_dblink_replace AS SELECT * FROM \ndblink_replace('123456789987654321', '99', 'HelloWorld');\nSELECT * FROM vw_dblink_replace;\n\n-- C, proretset = t, prorettype = b\nSELECT dblink_get_pkey FROM dblink_get_pkey('foo');\nDROP VIEW vw_dblink_get_pkey;\nCREATE VIEW vw_dblink_get_pkey AS SELECT dblink_get_pkey FROM \ndblink_get_pkey('foo');\nSELECT * FROM vw_dblink_get_pkey;\n\n\n",
"msg_date": "Tue, 14 May 2002 15:27:47 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers)"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I've been looking through the SRF patch as-committed, and I think I \n> understand most of your changes, but I have a question: FunctionNext() \n> now seems to *always* use a tuplestore instead of conditionally using \n> the store only for rescans,\n\nThe problem is that as things now stand, you do not know whether you\nwill be asked to rescan, so you must materialize the result just in\ncase.\n\nI would like to improve the system so that lower-level plan nodes will\nbe told whether they need to support rescan; but we aren't there yet,\nand I don't think it's the first priority to work on for SRF. Always\nmaterializing will do for the moment.\n\n> Do you still think there should be an option to project \n> tuples without first storing them, or should we eliminate the notion of \n> function mode and always materialize?\n\nIf the function is going to produce a materialized tupleset to begin\nwith (because that's convenient for it internally) then there's no value\nin having nodeFunctionscan.c make duplicate storage of the tupleset.\nWe need some way of communicating that fact from the function back to\nthe plan node ... but again, not first priority.\n\n>> Parameters also need thought. This should be rejected:\n>> \n>> regression=# select * from foo, foot(fooid) z where foo.f2 = z.f2;\n>> server closed the connection unexpectedly\n\n> I don't understand why this should be rejected, but it does fail for me \n> also, due to a NULL slot pointer. At what point should it be rejected?\n\nIn the parser. Ideally, fooid should not even be *visible* while we are\nparsing the arguments to the sibling FROM node. 
Compare the handling of\nvariable resolution in JOIN/ON clauses --- the namespace gets\nmanipulated so that those clauses can't see vars from sibling FROM nodes.\n\n>> On the other hand, IMHO this should work:\n>> \n>> regression=# select * from foo where f2 in\n>> regression-# (select f2 from foot(foo.fooid) z where z.fooid = foo.fooid);\n>> server closed the connection unexpectedly\n\n> This also fails in (based on a quick look) exactly the same way -- a \n> NULL slot pointer (econtext->ecxt_scantuple) passed to ExecEvalVar().\n\nRight. This should work, but the Var has to be converted into a Param\nreferencing the upper-level variable. I've forgotten right at the\nmoment where that happens (someplace in the planner) ... but I'll bet\nthat the someplace doesn't know it needs to process function argument\nnodetrees in function RTEs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 May 2002 19:15:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers) "
},
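[Editorial note: the always-materialize behavior Tom describes — run the function once, buffer its rows, and satisfy every read (including after a rewind) from the buffer — can be sketched with a plain array standing in for the tuplestore. All names and types below are invented for illustration; the real node deals in TupleTableSlots, not ints.]

```c
#include <assert.h>

#define MAXTUPLES 100

typedef struct
{
    int  store[MAXTUPLES];  /* materialized function results */
    int  ntuples;
    int  readpos;
    int  ncalls;            /* how many times the "SRF" actually ran */
} FunctionScanState;

/* The "set-returning function": three rows derived from arg. */
static void
run_function(FunctionScanState *ss, int arg)
{
    for (ss->ntuples = 0; ss->ntuples < 3; ss->ntuples++)
        ss->store[ss->ntuples] = arg + ss->ntuples;
    ss->ncalls++;
}

/* FunctionNext: materialize on the first call, then read
 * positionally. Returns 1 and sets *tuple while rows remain. */
static int
FunctionNext(FunctionScanState *ss, int arg, int *tuple)
{
    if (ss->ncalls == 0)
        run_function(ss, arg);
    if (ss->readpos >= ss->ntuples)
        return 0;
    *tuple = ss->store[ss->readpos++];
    return 1;
}

/* Rescan with unchanged parameters: rewind, don't re-run. */
static void
FunctionReScan(FunctionScanState *ss)
{
    ss->readpos = 0;
}
```

The point of the structure is visible in `ncalls`: however many times the scan is rewound, the function body executes once, which is why materialization is safe "just in case" a rescan is requested.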
{
"msg_contents": "Tom Lane wrote:\n>>I don't understand why this should be rejected, but it does fail for me \n>>also, due to a NULL slot pointer. At what point should it be rejected?\n> \n> \n> In the parser. Ideally, fooid should not even be *visible* while we are\n> parsing the arguments to the sibling FROM node. Compare the handling of\n> variable resolution in JOIN/ON clauses --- the namespace gets\n> manipulated so that those clauses can't see vars from sibling FROM nodes.\n> \n\nAttached patch takes care of this case. It also passes my previous test \ncases (see below). Applies cleanly to CVS tip and passes all regression \ntests. Please apply if there are no objections.\n\nI'm still working on the second test case from Tom (the NULL slot \npointer inducing subselect).\n\nJoe\n\n------< tests >-------\ntest=# \\i /opt/src/srf-test.sql\nDROP TABLE foo;\nDROP\nCREATE TABLE foo(fooid int, f2 int);\nCREATE\nINSERT INTO foo VALUES(1, 11);\nINSERT 126218 1\nINSERT INTO foo VALUES(2, 22);\nINSERT 126219 1\nINSERT INTO foo VALUES(1, 111);\nINSERT 126220 1\nDROP FUNCTION foot(int);\nDROP\nCREATE FUNCTION foot(int) returns setof foo as 'SELECT * FROM foo WHERE \nfooid = $1;' LANGUAGE SQL;\nCREATE\n\n-- should fail with ERROR message\nselect * from foo, foot(fooid) z where foo.f2 = z.f2;\npsql:/opt/src/srf-test.sql:10: ERROR: Function relation in FROM clause \nmay not refer to other relation, \"foo\"\n\nDROP TABLE foo;\nDROP\nCREATE TABLE foo (fooid int, foosubid int, fooname text, primary \nkey(fooid,foosubid));\npsql:/opt/src/srf-test.sql:13: NOTICE: CREATE TABLE / PRIMARY KEY will \ncreate implicit index 'foo_pkey' for table 'foo'\nCREATE\nINSERT INTO foo VALUES(1,1,'Joe');\nINSERT 126228 1\nINSERT INTO foo VALUES(1,2,'Ed');\nINSERT 126229 1\nINSERT INTO foo VALUES(2,1,'Mary');\nINSERT 126230 1\n\n-- sql, proretset = f, prorettype = b\nDROP FUNCTION getfoo(int);\nDROP\nCREATE FUNCTION getfoo(int) RETURNS int AS 'SELECT $1;' LANGUAGE SQL;\nCREATE\nSELECT * FROM 
getfoo(1) AS t1;\n getfoo\n--------\n 1\n(1 row)\n\nDROP VIEW vw_getfoo;\nDROP\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nCREATE\nSELECT * FROM vw_getfoo;\n getfoo\n--------\n 1\n(1 row)\n\n\n-- sql, proretset = t, prorettype = b\nDROP FUNCTION getfoo(int);\nDROP\nCREATE FUNCTION getfoo(int) RETURNS setof int AS 'SELECT fooid FROM foo \nWHERE fooid = $1;' LANGUAGE SQL;\nCREATE\nSELECT * FROM getfoo(1) AS t1;\n getfoo\n--------\n 1\n 1\n(2 rows)\n\nDROP VIEW vw_getfoo;\nDROP\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nCREATE\nSELECT * FROM vw_getfoo;\n getfoo\n--------\n 1\n 1\n(2 rows)\n\n\n-- sql, proretset = t, prorettype = b\nDROP FUNCTION getfoo(int);\nDROP\nCREATE FUNCTION getfoo(int) RETURNS setof text AS 'SELECT fooname FROM \nfoo WHERE fooid = $1;' LANGUAGE SQL;\nCREATE\nSELECT * FROM getfoo(1) AS t1;\n getfoo\n--------\n Joe\n Ed\n(2 rows)\n\nDROP VIEW vw_getfoo;\nDROP\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nCREATE\nSELECT * FROM vw_getfoo;\n getfoo\n--------\n Joe\n Ed\n(2 rows)\n\n\n-- sql, proretset = f, prorettype = c\nDROP FUNCTION getfoo(int);\nDROP\nCREATE FUNCTION getfoo(int) RETURNS foo AS 'SELECT * FROM foo WHERE \nfooid = $1;' LANGUAGE SQL;\nCREATE\nSELECT * FROM getfoo(1) AS t1;\n fooid | foosubid | fooname\n-------+----------+---------\n 1 | 1 | Joe\n(1 row)\n\nDROP VIEW vw_getfoo;\nDROP\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nCREATE\nSELECT * FROM vw_getfoo;\n fooid | foosubid | fooname\n-------+----------+---------\n 1 | 1 | Joe\n(1 row)\n\n\n-- sql, proretset = t, prorettype = c\nDROP FUNCTION getfoo(int);\nDROP\nCREATE FUNCTION getfoo(int) RETURNS setof foo AS 'SELECT * FROM foo \nWHERE fooid = $1;' LANGUAGE SQL;\nCREATE\nSELECT * FROM getfoo(1) AS t1;\n fooid | foosubid | fooname\n-------+----------+---------\n 1 | 1 | Joe\n 1 | 2 | Ed\n(2 rows)\n\nDROP VIEW vw_getfoo;\nDROP\nCREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\nCREATE\nSELECT * FROM vw_getfoo;\n fooid | foosubid | 
fooname\n-------+----------+---------\n 1 | 1 | Joe\n 1 | 2 | Ed\n(2 rows)\n\n\n-- C, proretset = f, prorettype = b\nSELECT * FROM dblink_replace('123456789987654321', '99', 'HelloWorld');\n dblink_replace\n----------------------------\n 12345678HelloWorld87654321\n(1 row)\n\nDROP VIEW vw_dblink_replace;\nDROP\nCREATE VIEW vw_dblink_replace AS SELECT * FROM \ndblink_replace('123456789987654321', '99', 'HelloWorld');\nCREATE\nSELECT * FROM vw_dblink_replace;\n dblink_replace\n----------------------------\n 12345678HelloWorld87654321\n(1 row)\n\n\n-- C, proretset = t, prorettype = b\nSELECT dblink_get_pkey FROM dblink_get_pkey('foo');\n dblink_get_pkey\n-----------------\n fooid\n foosubid\n(2 rows)\n\nDROP VIEW vw_dblink_get_pkey;\nDROP\nCREATE VIEW vw_dblink_get_pkey AS SELECT dblink_get_pkey FROM \ndblink_get_pkey('foo');\nCREATE\nSELECT * FROM vw_dblink_get_pkey;\n dblink_get_pkey\n-----------------\n fooid\n foosubid\n(2 rows)",
"msg_date": "Wed, 15 May 2002 22:16:28 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers)"
},
{
"msg_contents": "Joe Conway wrote:\n >>\n >\n > Attached patch takes care of this case. It also passes my previous\n > test\n\nSorry, I just noticed that I did not finish modifying the comments that \nwere cut-and-pasted from elsewhere. This patch includes better comments.\n\nJoe",
"msg_date": "Wed, 15 May 2002 22:30:33 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers)"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> In the parser. Ideally, fooid should not even be *visible* while we are\n>> parsing the arguments to the sibling FROM node. Compare the handling of\n>> variable resolution in JOIN/ON clauses --- the namespace gets\n>> manipulated so that those clauses can't see vars from sibling FROM nodes.\n\n> Attached patch takes care of this case. It also passes my previous test \n> cases (see below). Applies cleanly to CVS tip and passes all regression \n> tests. Please apply if there are no objections.\n\nI've applied a simplified form of this patch --- it seemed you were\ndoing it the hard way. (Possibly I should have recommended\nRangeSubselect as a model, not JOIN/ON. Like RangeSubselect,\nRangeFunction doesn't need to allow *any* references to Vars of the\ncurrent query level.)\n\nFurther digging also revealed that query_tree_walker,\nquery_tree_mutator, and SS_finalize_plan had been missing out on their\nresponsibilities to process function-RTE expressions. With those things\nfixed, it appears that outer-level Var references and sub-selects work\nas expected in function-RTE expressions.\n\nI am still concerned about whether ExecFunctionReScan works correctly;\nif not, the problems would show up in join and subquery situations.\nI think the parser and planner stages are in pretty good shape now,\nthough. (At least as far as the basic functionality goes. Having\na smarter materialization policy will take work in the planner.)\n\nIt's not too soon to start thinking about documentation and regression\ntests for SRFs ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 15:02:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers) "
},
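[Editorial note: the namespace manipulation Tom recommends — hide sibling FROM items while parsing a function RTE's arguments, as for RangeSubselect — can be modeled with visibility reduced to a flat list of relation names. Invented names throughout; the real parser namespace is considerably richer.]

```c
#include <assert.h>
#include <string.h>

#define MAXRELS 8

typedef struct
{
    const char *rels[MAXRELS];  /* relation names currently visible */
    int         nrels;
} ParseNamespace;

static int
var_is_visible(const ParseNamespace *ns, const char *relname)
{
    int i;

    for (i = 0; i < ns->nrels; i++)
        if (strcmp(ns->rels[i], relname) == 0)
            return 1;
    return 0;
}

/* Resolve a reference inside a FROM-function's argument list:
 * clear the namespace first so sibling FROM items are invisible,
 * and restore it afterwards for the rest of the query. */
static int
resolve_in_function_args(ParseNamespace *ns, const char *relname)
{
    ParseNamespace saved = *ns;
    int visible;

    ns->nrels = 0;               /* hide every sibling FROM item */
    visible = var_is_visible(ns, relname);
    *ns = saved;
    return visible;
}
```

This is how `select * from foo, foot(fooid) z ...` gets rejected in the parser rather than crashing the executor: while `foot`'s arguments are parsed, `foo` simply isn't there to resolve `fooid` against.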
{
"msg_contents": "Tom Lane wrote:\n> I am still concerned about whether ExecFunctionReScan works correctly;\n> if not, the problems would show up in join and subquery situations.\n> I think the parser and planner stages are in pretty good shape now,\n> though. (At least as far as the basic functionality goes. Having\n> a smarter materialization policy will take work in the planner.)\n\nI have been beating heavily on this function, but so far I can't find an \nexample which doesn't seem to work correctly. However, I also cannot \nfind an example which executes this part of the function:\n\n. . .\n/*\n * Here we have a choice whether to drop the tuplestore (and recompute\n * the function outputs) or just rescan it. This should depend on\n * whether the function expression contains parameters and/or is\n * marked volatile. FIXME soon.\n */\nif (node->scan.plan.chgParam != NULL)\n{\n\ttuplestore_end((Tuplestorestate *) scanstate->tuplestorestate);\n\tscanstate->tuplestorestate = NULL;\n}\nelse\n. . 
.\n\nHere's at least part of what I've used to test:\n\nCREATE TABLE foorescan (fooid int, foosubid int, fooname text, primary \nkey(fooid,foosubid));\n\n-- use PHP to insert 100,000 records --\n\nVACUUM ANALYZE;\nCREATE FUNCTION foorescan(int,int) returns setof foorescan as 'SELECT * \nFROM foorescan WHERE fooid >= $1 and fooid < $2 ;' LANGUAGE SQL;\nselect * from foorescan f, (select fooid, foosubid from \nfoorescan(5000,5010)) as s where f.fooid = s.fooid and f.foosubid = \ns.foosubid;\nCREATE VIEW vw_foorescan as select * from foorescan f, (select fooid, \nfoosubid from foorescan(5000,5010)) as s where f.fooid = s.fooid and \nf.foosubid = s.foosubid;\n\n--invokes ExecFunctionReScan\nselect * from foorescan f where f.fooid in (select fooid from \nfoorescan(5000,5001));\n\nCREATE TABLE barrescan (fooid int primary key);\nINSERT INTO barrescan values(5000);\nINSERT INTO barrescan values(5001);\nINSERT INTO barrescan values(5002);\nINSERT INTO barrescan values(5003);\nINSERT INTO barrescan values(5004);\nINSERT INTO barrescan values(5005);\nINSERT INTO barrescan values(5006);\nINSERT INTO barrescan values(5007);\nINSERT INTO barrescan values(5008);\nINSERT INTO barrescan values(5009);\n\n--invokes ExecFunctionReScan\nselect * from random(), foorescan(5000,5010) f JOIN barrescan b ON \nb.fooid = f.fooid WHERE f.foosubid = 9;\nselect * from foorescan(5000,5000 + (random() * 10)::int) f JOIN \nbarrescan b ON b.fooid = f.fooid WHERE f.foosubid = 9;\n\n\nAny ideas on getting (node->scan.plan.chgParam != NULL) to be true?\n\nJoe\n\n",
"msg_date": "Sun, 19 May 2002 13:52:27 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] SRF patch (was Re: troubleshooting pointers)"
},
{
"msg_contents": "Tom Lane wrote:\n> I am still concerned about whether ExecFunctionReScan works correctly;\n> if not, the problems would show up in join and subquery situations.\n> I think the parser and planner stages are in pretty good shape now,\n> though. (At least as far as the basic functionality goes. Having\n> a smarter materialization policy will take work in the planner.)\n> \nHere's a small patch to ExecFunctionReScan. It was clearing\n scanstate->csstate.cstate.cs_ResultTupleSlot\nwhen I think it should have been clearing\n scanstate->csstate.css_ScanTupleSlot\n\nalthough there is no discernable (at least to me) difference either way.\n\nJoe",
"msg_date": "Sun, 19 May 2002 13:55:42 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers)"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Any ideas on getting (node->scan.plan.chgParam != NULL) to be true?\n\nYou need something that passes a parameter into the scan node.\nI think the only thing that would do it is a subquery that references\nan outer-level variable, for example\n\nselect * from foo where fooid in\n(select barid from bar(foo.fieldx));\n\nHere, each time we rescan the subselect result for a new foo row, we\nneed to update the foo.fieldx Param to the new value for the new row.\nThat's what the chgParam mechanism is for: to notify you that a Param\nchanged since your last scan. (Without that, you could and probably\nshould just rewind and regurgitate your prior output.)\n\nNote that\n\nselect * from foo, bar(5000) where fooid = barid\n\ndoes not involve any parameters: the WHERE condition will be executed\nby the join node, and the FunctionScan node will have no contact at all\nwith data coming from the other table.\n\nNow that I think about it, it's possible that ExecFunctionReScan is\ncorrect now, at least given the simplistic always-materialize policy\nthat we've implemented so far. But it hasn't gotten much testing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 May 2002 17:22:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] SRF patch (was Re: troubleshooting pointers) "
},
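Tom's correlated-subquery case can be made concrete. The following sketch (table and function names here are invented for illustration, not from the thread) mirrors his example: the subselect's function argument references an outer-level column, so each new outer row updates that Param before the subplan is rescanned, making `chgParam` non-NULL so the tuplestore must be recomputed rather than merely rewound:

```sql
-- Hypothetical schema; only the query shape matters.
CREATE TABLE outer_t (fooid int, fieldx int);
CREATE TABLE inner_t (barid int, batch int);
CREATE FUNCTION getbarids(int) RETURNS SETOF int AS
  'SELECT barid FROM inner_t WHERE batch = $1;' LANGUAGE SQL;

-- outer_t.fieldx is an outer-level reference: the FunctionScan's Param
-- changes for every outer row, triggering the chgParam != NULL path.
SELECT * FROM outer_t
 WHERE fooid IN (SELECT * FROM getbarids(outer_t.fieldx) AS t);
```

By contrast, an uncorrelated call like `getbarids(5000)` joined to another table never sets `chgParam`, which is why Joe's join tests above could not reach that branch.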
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Here's a small patch to ExecFunctionReScan. It was clearing\n> scanstate->csstate.cstate.cs_ResultTupleSlot\n> when I think it should have been clearing\n> scanstate->csstate.css_ScanTupleSlot\n\nWhy do you think that? To the extent that other rescan routines are\nclearing anything, they're clearing ResultTupleSlot.\n\n> although there is no discernable (at least to me) difference either way.\n\nMy guess is that it's pretty much a no-op, since the slot will get\ncleared and re-used on the next call anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 May 2002 17:27:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers) "
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>Here's a small patch to ExecFunctionReScan. It was clearing\n>> scanstate->csstate.cstate.cs_ResultTupleSlot\n>>when I think it should have been clearing\n>> scanstate->csstate.css_ScanTupleSlot\n> \n> \n> Why do you think that? To the extent that other rescan routines are\n> clearing anything, they're clearing ResultTupleSlot.\n\nWell, nodeMaterial and nodeSort both clear cs_ResultTupleSlot, but they \nalso use cs_ResultTupleSlot in ExecMaterial/ExecSort, whereas \nFunctionNext uses css_ScanTupleSlot. But as you pointed out, perhaps \nit's a noop anyway.\n\nI was having trouble getting everything to work correctly with \nFunctionNext using cs_ResultTupleSlot. I guess I don't really understand \nthe distinction, but I did note that the scan nodes (subqueryscan, \nseqscan, etc) used css_ScanTupleSlot, while the materialization nodes \ntended to use cs_ResultTupleSlot.\n\nJoe\n\n\n",
"msg_date": "Sun, 19 May 2002 14:40:30 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers)"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I was having trouble getting everything to work correctly with \n> FunctionNext using cs_ResultTupleSlot. I guess I don't really understand \n> the distinction, but I did note that the scan nodes (subqueryscan, \n> seqscan, etc) used css_ScanTupleSlot, while the materialization nodes \n> tended to use cs_ResultTupleSlot.\n\nResultTupleSlot is generally used by plan nodes that do ExecProject;\nit holds the tuple formed by ExecProject (ie, the calculated SELECT\ntargetlist). ScanTupleSlot is normally the raw input tuple. For\nFunctionscan I'd suppose that the scan tuple is the tuple returned\nby the function and ResultTupleSlot holds the result of ExecProject.\nTo see the difference, consider\n\n\tSELECT a, b, c+1 FROM foo(33);\n\nwhere foo returns a tuple (a,b,c,d,e). The scanned tuple is\n(a,b,c,d,e), the projected tuple is (a,b,c+1).\n\nIt may well be that rescan could usefully clear both scan and result\ntuples, but I don't see the point of making such a change only in\nFunctionScan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 May 2002 18:36:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers) "
},
{
"msg_contents": "was Re: [PATCHES] SRF patch (was Re: [HACKERS] troubleshooting pointers)\n\nTom Lane wrote:\n> \n> Now that I think about it, it's possible that ExecFunctionReScan is\n> correct now, at least given the simplistic always-materialize policy\n> that we've implemented so far. But it hasn't gotten much testing.\n\nOK -- the attached (stand alone) test script exercises \nExecFunctionReScan, including cases with chgParam != NULL. I'll try to \ncome up with one or two more variants for the latter, but so far I have \nnot found any misbehavior.\n\nJoe",
"msg_date": "Sun, 19 May 2002 16:33:53 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "SRF rescan testing"
},
{
"msg_contents": "Tom Lane wrote:\n > It's not too soon to start thinking about documentation and\n > regression tests for SRFs ...\n\nAttached is a regression test patch for SRFs. I based it on the test\nscripts that I have been using, minus the C function tests and without \ncalls to random() -- figured random() wouldn't work too well for a \nregression test ;-)\n\nJoe",
"msg_date": "Sun, 19 May 2002 17:40:15 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF patch (was Re: [HACKERS] troubleshooting pointers)"
},
{
"msg_contents": "Joe Conway wrote:\n> Tom Lane wrote:\n>>\n>> Now that I think about it, it's possible that ExecFunctionReScan is\n>> correct now, at least given the simplistic always-materialize policy\n>> that we've implemented so far. But it hasn't gotten much testing.\n> \n> OK -- the attached (stand alone) test script exercises \n> ExecFunctionReScan, including cases with chgParam != NULL. I'll try to \n> come up with one or two more variants for the latter, but so far I have \n> not found any misbehavior.\n\nI'm thinking about next steps for SRFs and looking for input. The \ncurrent status is that SRFs seem to work properly in the \nalway-materialize mode, for the following cases of FROM clause functions \nand VIEWs created based on FROM clause functions:\n\n(rehash from earlier post)\nLanguage RetSet RetType Status\n--------------- ------- ------- ---------------------\nC t b OK\nC t c Not tested\nC f b OK\nC f c Not tested\nSQL t b OK\nSQL t c OK\nSQL f b OK\nSQL f c OK\nPL/pgSQL t b No retset support\nPL/pgSQL t c No retset support\nPL/pgSQL f b OK\nPL/pgSQL f c OK\n-----------------------------------------------------\nRetSet: t = function declared to return setof something\nRetType: b = base type; c = composite type\n\nI've also submitted a patch for a regression test (any feedback?). At \nthis point I know of several things which need to be done (or at least I \nthink they are desirable):\n\n1. Documentation -- it wasn't clear if Joel Burton was going to have \ntime to contribute something here, but if not, I'll start working on \nthis next. Any guidance as to which section of the docs this should go in?\n\n2. Create a sample C-function which returns setof a composite type \n(possibly in conjunction with #1)\n\n3. PL/pgSQL support for returning sets -- this seems to me like an \nimportant item if SRFs are to be useful to the masses. Any pointers on \nhow to approach this would be appreciated.\n\n4. Non-materialize mode support for SRFs.\n\n5. 
Improve the system so that lower-level plan nodes will be told \nwhether they need to support rescan.\n\n6. Support for named composite types that don't have a table tied to them.\n\nHave I missed anything major? Is this order of priority reasonable?\n\nThanks,\n\nJoe\n\n",
"msg_date": "Fri, 24 May 2002 15:44:37 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF rescan testing"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I'm thinking about next steps for SRFs and looking for input. ... At\n> this point I know of several things which need to be done (or at least I \n> think they are desirable):\n\n> 1. Documentation -- it wasn't clear if Joel Burton was going to have \n> time to contribute something here, but if not, I'll start working on \n> this next. Any guidance as to which section of the docs this should go in?\n\nThere is related material currently in the SQL-functions section of the\nprogrammer's guide. This should perhaps be moved to someplace where\nit's more clearly relevant to all types of functions. On the other hand\nit's awfully nice to be able to show simple examples, so I'm not sure we\nwant to divorce the material from SQL functions entirely.\n\n> 3. PL/pgSQL support for returning sets -- this seems to me like an \n> important item if SRFs are to be useful to the masses. Any pointers on \n> how to approach this would be appreciated.\n\nDoes Oracle's pl/sql support this? If so what does it look like?\n\n> 6. Support for named composite types that don't have a table tied to them.\n\nI agree that this is bottom priority. It doesn't really add any\nfunctionality (since a dummy table doesn't cost much of anything).\nAnd a clean solution would require major rearchitecting of the system\ntables --- pg_attribute rows would need to be tied to pg_type rows for\ncomposite types, not to pg_class rows. While this would be quite doable\nconsidering the backend alone, I'm not excited about the prospect of\nbreaking every catalog-examining client in sight. Another interesting\nquestion is whether inheritance now applies to types rather than tables,\nand if so what does that imply?\n\n(OTOH one could make a good argument that now is the time to do it\nif we're ever gonna do it --- clients that are not schema-aware will\nbe badly in need of work anyway for 7.3...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 May 2002 19:28:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SRF rescan testing "
},
{
"msg_contents": "Tom Lane wrote:\n>>3. PL/pgSQL support for returning sets -- this seems to me like an \n>>important item if SRFs are to be useful to the masses. Any pointers on \n>>how to approach this would be appreciated.\n> \n> Does Oracle's pl/sql support this? If so what does it look like?\n\nI *think* Oracle pl/sql can return (the equivilent of) setof composite \nusing a special Oracle package (DBMS_OUTPUT, see: \nhttp://www.ora.com/catalog/oraclebip/chapter/ch06.html), but it cannot \nbe used as a row source in a FROM clause. Hopefully an Oracle guru will \ncorrect or add to this.\n\nI know that MS SQL Server can return one *or more* result sets from a \n\"stored procedure\", however they cannot be used as FROM clause row \nsources either (at least not as of MSSQL 7, but I don't think that has \nchanged in MSSQL 2000). The syntax is something like:\n exec sp_myprocedure\nIt is *not* possible to define a VIEW based on a stored procedure, but \nmany MS centric report writers allow the \"exec sp_myprocedure\" syntax as \na row source for reports.\n\nAs far as PL/pgSQL is concerned, I was thinking that a new type of \nRETURN (maybe \"RETURN NEXT myval\" ??) command could be used, which would \nindicate \"rsi->isDone = ExprMultipleResult\", and that the standard \nRETURN command would set \"rsi->isDone = ExprEndResult\", but only if \n\"fcinfo->resultinfo != NULL\". That way you could do something like:\n\n. . .\nFOR row IN select_query LOOP\n statements\n RETURN NEXT row;\nEND LOOP;\n\nRETURN NULL;\n. . .\n\nDoes this sound reasonable?\n\nJoe\n\n",
"msg_date": "Sun, 26 May 2002 09:55:06 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF rescan testing"
},
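Filled out, the shape Joe proposes might look like the following. This is a sketch of *proposed*, not-yet-implemented syntax as of this thread, written against the foo table defined earlier in the discussion:

```sql
-- Hypothetical PL/pgSQL set-returning function using the proposed
-- "RETURN NEXT" command; semantics are those Joe describes above.
CREATE FUNCTION getfoo_plpgsql(int) RETURNS SETOF foo AS '
DECLARE
    row foo%ROWTYPE;
BEGIN
    FOR row IN SELECT * FROM foo WHERE fooid = $1 LOOP
        RETURN NEXT row;   -- proposed: emit one row, isDone = ExprMultipleResult
    END LOOP;
    RETURN NULL;           -- proposed: final RETURN, isDone = ExprEndResult
END;
' LANGUAGE 'plpgsql';

SELECT * FROM getfoo_plpgsql(1) AS t1;
```

The key open question this leaves is the one raised later in the thread: whether the executor calls the function once per result row (requiring the function's execution state to be restored on each call) or lets it materialize its whole result set in one call.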
{
"msg_contents": "> (OTOH one could make a good argument that now is the time to do it\n> if we're ever gonna do it --- clients that are not schema-aware will\n> be badly in need of work anyway for 7.3...)\n\nMaybe the attisdropped column should be created and added to the\npg_attribute catalog now as well. It would always be false, but would mean\nonly 1 round of mad postgres admin program hacking... Might be able to\navoid catalog changes for a drop column implementation in 7.4...\n\nChris\n\n\n",
"msg_date": "Mon, 27 May 2002 15:11:38 -0700",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SRF rescan testing "
},
{
"msg_contents": "On Sun, 2002-05-26 at 21:55, Joe Conway wrote:\n> Tom Lane wrote:\n> >>3. PL/pgSQL support for returning sets -- this seems to me like an \n> >>important item if SRFs are to be useful to the masses. Any pointers on \n> >>how to approach this would be appreciated.\n> > \n> > Does Oracle's pl/sql support this? If so what does it look like?\n> \n> I *think* Oracle pl/sql can return (the equivilent of) setof composite \n> using a special Oracle package (DBMS_OUTPUT, see: \n> http://www.ora.com/catalog/oraclebip/chapter/ch06.html), but it cannot \n> be used as a row source in a FROM clause. Hopefully an Oracle guru will \n> correct or add to this.\n\nI'm no Oracle guru, but this is what a quick Google search found me:\n\nhttp://download-west.oracle.com/otndoc/oracle9i/901_doc/appdev.901/a89856/08_subs.htm#19677\n\n\n\n-------------\nHannu\n\n\n",
"msg_date": "28 May 2002 08:54:14 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: SRF rescan testing"
},
{
"msg_contents": "Hannu Krosing wrote:\n> On Sun, 2002-05-26 at 21:55, Joe Conway wrote:\n> \n>>Tom Lane wrote:\n>>\n>>>>3. PL/pgSQL support for returning sets -- this seems to me like an \n>>>>important item if SRFs are to be useful to the masses. Any pointers on \n>>>>how to approach this would be appreciated.\n>>>\n>>>Does Oracle's pl/sql support this? If so what does it look like?\n>>\n>>I *think* Oracle pl/sql can return (the equivilent of) setof composite \n>>using a special Oracle package (DBMS_OUTPUT, see: \n>>http://www.ora.com/catalog/oraclebip/chapter/ch06.html), but it cannot \n>>be used as a row source in a FROM clause. Hopefully an Oracle guru will \n>>correct or add to this.\n> \n> \n> I'm no Oracle guru, but this is what a quick Google search found me:\n> \n> http://download-west.oracle.com/otndoc/oracle9i/901_doc/appdev.901/a89856/08_subs.htm#19677\n> \n\nAfter a quick look, this appears to be a very relevant document. Does \nanyone know if this is new in 9i?\n\nJoe\n\n",
"msg_date": "Mon, 27 May 2002 23:15:00 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: SRF rescan testing"
},
{
"msg_contents": "Tom Lane wrote:\n\n>>3. PL/pgSQL support for returning sets -- this seems to me like an \n>>important item if SRFs are to be useful to the masses. Any pointers on \n>>how to approach this would be appreciated.\n>>\n>\n>Does Oracle's pl/sql support this? If so what does it look like?\n>\nOracle supports \"pipelined functions\". These functions use operator \nPIPE(set%rowtype) to return a row.\nSyntax for queries using pipelined functions:\n\nSELECT f1,f2,... FROM TABLE(func(p1,p2, ...));\n\n\nIt seems that the most important thing to implement for PL/pgSQL \nfunctions returning sets is restoring of the function execution state in \nthe next call\n\n\nWBR, Valentine Zaretsky\n\n",
"msg_date": "Tue, 28 May 2002 15:48:52 +0300",
"msg_from": "Valentine Zaretsky <valik@apex.dp.ua>",
"msg_from_op": false,
"msg_subject": "Re: SRF rescan testing"
},
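For comparison, the Oracle 9i feature Valentine describes looks roughly like this. The Oracle syntax is reconstructed from memory (the row-emitting statement is spelled `PIPE ROW`), and the type and function names are invented for illustration:

```sql
-- A collection type to declare the function's result set type.
CREATE TYPE num_tab AS TABLE OF NUMBER;

-- A pipelined function: each PIPE ROW emits one row to the caller
-- without waiting for the function to finish.
CREATE OR REPLACE FUNCTION gen_nums(n NUMBER)
RETURN num_tab PIPELINED IS
BEGIN
    FOR i IN 1 .. n LOOP
        PIPE ROW(i);   -- emit one row of the result set
    END LOOP;
    RETURN;            -- ends the pipeline; no value is returned
END;

-- Usable as a FROM clause row source, much like a PostgreSQL SRF:
SELECT * FROM TABLE(gen_nums(3));
```

Note that pipelining implies exactly the execution model discussed above: the function's state must survive between emitted rows, rather than the whole result being materialized up front.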
{
"msg_contents": "Here's the first doc patch for SRFs. The patch covers general \ninformation and SQL language specific info wrt SRFs. I've taken to \ncalling this feature \"Table Fuctions\" to be consistent with (at least) \none well known RDBMS.\n\nNote that I mention under the SQL language Table Function section that \n\"Functions returning sets\" in query target lists is a deprecated \nfeature, subject to removal in later releases. I think there was general \nagreement on this, but I thought it was worth pointing out.\n\nI still need to submit some C language function documentation, but was \nhoping to see if any further changes were needed in the Composite and \nSRF API patch that I sent in earlier. I've started the documentation but \nwill hold of sending in a patch for now on that.\n\nIf no objections, please apply.\n\nThanks,\n\nJoe\n\np.s. any feedback on the SRF regression test patch?",
"msg_date": "Thu, 13 Jun 2002 14:25:49 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Table Function (aka SRF) doc patch"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Here's the first doc patch for SRFs. The patch covers general \n> information and SQL language specific info wrt SRFs. I've taken to \n> calling this feature \"Table Fuctions\" to be consistent with (at least) \n> one well known RDBMS.\n> \n> Note that I mention under the SQL language Table Function section that \n> \"Functions returning sets\" in query target lists is a deprecated \n> feature, subject to removal in later releases. I think there was general \n> agreement on this, but I thought it was worth pointing out.\n> \n> I still need to submit some C language function documentation, but was \n> hoping to see if any further changes were needed in the Composite and \n> SRF API patch that I sent in earlier. I've started the documentation but \n> will hold of sending in a patch for now on that.\n> \n> If no objections, please apply.\n> \n> Thanks,\n> \n> Joe\n> \n> p.s. any feedback on the SRF regression test patch?\n\n[ text/html is unsupported, treating like TEXT/PLAIN ]\n\n> Index: doc//src/sgml/xfunc.sgml\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/doc/src/sgml/xfunc.sgml,v\n> retrieving revision 1.51\n> diff -c -r1.51 xfunc.sgml\n> *** doc//src/sgml/xfunc.sgml\t22 Mar 2002 19:20:33 -0000\t1.51\n> --- doc//src/sgml/xfunc.sgml\t13 Jun 2002 20:30:27 -0000\n> ***************\n> *** 188,193 ****\n> --- 188,194 ----\n> 1\n> </screen>\n> </para>\n> + \n> </sect2>\n> \n> <sect2>\n> ***************\n> *** 407,427 ****\n> </sect2>\n> \n> <sect2>\n> ! <title><acronym>SQL</acronym> Functions Returning Sets</title>\n> \n> <para>\n> ! As previously mentioned, an SQL function may be declared as\n> ! 
returning <literal>SETOF <replaceable>sometype</></literal>.\n> ! In this case the function's final <command>SELECT</> query is executed to\n> ! completion, and each row it outputs is returned as an element\n> ! of the set.\n> </para>\n> \n> <para>\n> ! Functions returning sets may only be called in the target list\n> ! of a <command>SELECT</> query. For each row that the <command>SELECT</> generates by itself,\n> ! the function returning set is invoked, and an output row is generated\n> ! for each element of the function's result set. An example:\n> \n> <programlisting>\n> CREATE FUNCTION listchildren(text) RETURNS SETOF text AS\n> --- 408,460 ----\n> </sect2>\n> \n> <sect2>\n> ! <title><acronym>SQL</acronym> Table Functions (Functions Returning Sets)</title>\n> \n> <para>\n> ! A table function is one that may be used in the <command>FROM</command>\n> ! clause of a query. All SQL Language functions may be used in this manner.\n> ! If the function is defined to return a base type, the table function\n> ! produces a one column result set. If the function is defined to\n> ! return <literal>SETOF <replaceable>sometype</></literal>, the table\n> ! function returns multiple rows. To illustrate a SQL table function,\n> ! consider the following, which returns <literal>SETOF</literal> a\n> ! composite type:\n> ! \n> ! <programlisting>\n> ! CREATE TABLE foo (fooid int, foosubid int, fooname text, primary key(fooid,foosubid));\n> ! INSERT INTO foo VALUES(1,1,'Joe');\n> ! INSERT INTO foo VALUES(1,2,'Ed');\n> ! INSERT INTO foo VALUES(2,1,'Mary');\n> ! CREATE FUNCTION getfoo(int) RETURNS setof foo AS '\n> ! SELECT * FROM foo WHERE fooid = $1;\n> ! ' LANGUAGE SQL;\n> ! SELECT * FROM getfoo(1) AS t1;\n> ! </programlisting>\n> ! \n> ! <screen>\n> ! fooid | foosubid | fooname\n> ! -------+----------+---------\n> ! 1 | 1 | Joe\n> ! 1 | 2 | Ed\n> ! (2 rows)\n> ! </screen>\n> </para>\n> \n> <para>\n> ! When an SQL function is declared as returning <literal>SETOF\n> ! 
<replaceable>sometype</></literal>, the function's final\n> ! <command>SELECT</> query is executed to completion, and each row it\n> ! outputs is returned as an element of the set.\n> ! </para>\n> ! \n> ! <para>\n> ! Functions returning sets may also currently be called in the target list\n> ! of a <command>SELECT</> query. For each row that the <command>SELECT</>\n> ! generates by itself, the function returning set is invoked, and an output\n> ! row is generated for each element of the function's result set. Note,\n> ! however, that this capability is deprecated and may be removed in future\n> ! releases. The following is an example function returning a set from the\n> ! target list:\n> \n> <programlisting>\n> CREATE FUNCTION listchildren(text) RETURNS SETOF text AS\n> ***************\n> *** 1620,1625 ****\n> --- 1653,1706 ----\n> </para>\n> </sect1>\n> \n> + <sect1 id=\"xfunc-tablefunctions\">\n> + <title>Table Functions</title>\n> + \n> + <indexterm zone=\"xfunc-tablefunctions\"><primary>function</></>\n> + \n> + <para>\n> + Table functions are functions that produce a set of rows, made up of\n> + either base (scalar) data types, or composite (multi-column) data types.\n> + They are used like a table, view, or subselect in the <literal>FROM</>\n> + clause of a query. Columns returned by table functions may be included in\n> + <literal>SELECT</>, <literal>JOIN</>, or <literal>WHERE</> clauses in the\n> + same manner as a table, view, or subselect column.\n> + </para>\n> + \n> + <para>\n> + If a table function returns a base data type, the single result column\n> + is named for the function. If the function returns a composite type, the\n> + result columns get the same names as the individual attributes of the type.\n> + </para>\n> + \n> + <para>\n> + A table function may be aliased in the <literal>FROM</> clause, but it also\n> + may be left unaliased. 
If a function is used in the FROM clause with no\n> + alias, the function name is used as the relation name.\n> + </para>\n> + \n> + <para>\n> + Table functions work wherever tables do in <literal>SELECT</> statements.\n> + For example\n> + <programlisting>\n> + CREATE TABLE foo (fooid int, foosubid int, fooname text, primary key(fooid,foosubid));\n> + CREATE FUNCTION getfoo(int) RETURNS foo AS 'SELECT * FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> + SELECT * FROM getfoo(1) AS t1;\n> + SELECT * FROM foo where foosubid in (select foosubid from getfoo(foo.fooid) z where z.fooid = foo.fooid);\n> + CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> + SELECT * FROM vw_getfoo;\n> + </programlisting>\n> + are all valid statements.\n> + </para>\n> + \n> + <para>\n> + Currently, table functions are supported as SQL language functions\n> + (<xref linkend=\"xfunc-sql\">) and C language functions\n> + (<xref linkend=\"xfunc-c\">). See these individual sections for more\n> + details.\n> + </para>\n> + \n> + </sect1>\n> \n> <sect1 id=\"xfunc-plhandler\">\n> <title>Procedural Language Handlers</title>\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 14:45:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Table Function (aka SRF) doc patch"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nBruce Momjian wrote:\n> \n> Your patch has been added to the PostgreSQL unapplied patches list at:\n> \n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> \n> I will try to apply it within the next 48 hours.\n> \n> ---------------------------------------------------------------------------\n> \n> \n> Joe Conway wrote:\n> > Here's the first doc patch for SRFs. The patch covers general \n> > information and SQL language specific info wrt SRFs. I've taken to \n> > calling this feature \"Table Fuctions\" to be consistent with (at least) \n> > one well known RDBMS.\n> > \n> > Note that I mention under the SQL language Table Function section that \n> > \"Functions returning sets\" in query target lists is a deprecated \n> > feature, subject to removal in later releases. I think there was general \n> > agreement on this, but I thought it was worth pointing out.\n> > \n> > I still need to submit some C language function documentation, but was \n> > hoping to see if any further changes were needed in the Composite and \n> > SRF API patch that I sent in earlier. I've started the documentation but \n> > will hold of sending in a patch for now on that.\n> > \n> > If no objections, please apply.\n> > \n> > Thanks,\n> > \n> > Joe\n> > \n> > p.s. 
any feedback on the SRF regression test patch?\n> \n> [ text/html is unsupported, treating like TEXT/PLAIN ]\n> \n> > Index: doc//src/sgml/xfunc.sgml\n> > ===================================================================\n> > RCS file: /opt/src/cvs/pgsql/doc/src/sgml/xfunc.sgml,v\n> > retrieving revision 1.51\n> > diff -c -r1.51 xfunc.sgml\n> > *** doc//src/sgml/xfunc.sgml\t22 Mar 2002 19:20:33 -0000\t1.51\n> > --- doc//src/sgml/xfunc.sgml\t13 Jun 2002 20:30:27 -0000\n> > ***************\n> > *** 188,193 ****\n> > --- 188,194 ----\n> > 1\n> > </screen>\n> > </para>\n> > + \n> > </sect2>\n> > \n> > <sect2>\n> > ***************\n> > *** 407,427 ****\n> > </sect2>\n> > \n> > <sect2>\n> > ! <title><acronym>SQL</acronym> Functions Returning Sets</title>\n> > \n> > <para>\n> > ! As previously mentioned, an SQL function may be declared as\n> > ! returning <literal>SETOF <replaceable>sometype</></literal>.\n> > ! In this case the function's final <command>SELECT</> query is executed to\n> > ! completion, and each row it outputs is returned as an element\n> > ! of the set.\n> > </para>\n> > \n> > <para>\n> > ! Functions returning sets may only be called in the target list\n> > ! of a <command>SELECT</> query. For each row that the <command>SELECT</> generates by itself,\n> > ! the function returning set is invoked, and an output row is generated\n> > ! for each element of the function's result set. An example:\n> > \n> > <programlisting>\n> > CREATE FUNCTION listchildren(text) RETURNS SETOF text AS\n> > --- 408,460 ----\n> > </sect2>\n> > \n> > <sect2>\n> > ! <title><acronym>SQL</acronym> Table Functions (Functions Returning Sets)</title>\n> > \n> > <para>\n> > ! A table function is one that may be used in the <command>FROM</command>\n> > ! clause of a query. All SQL Language functions may be used in this manner.\n> > ! If the function is defined to return a base type, the table function\n> > ! produces a one column result set. If the function is defined to\n> > ! 
return <literal>SETOF <replaceable>sometype</></literal>, the table\n> > ! function returns multiple rows. To illustrate a SQL table function,\n> > ! consider the following, which returns <literal>SETOF</literal> a\n> > ! composite type:\n> > ! \n> > ! <programlisting>\n> > ! CREATE TABLE foo (fooid int, foosubid int, fooname text, primary key(fooid,foosubid));\n> > ! INSERT INTO foo VALUES(1,1,'Joe');\n> > ! INSERT INTO foo VALUES(1,2,'Ed');\n> > ! INSERT INTO foo VALUES(2,1,'Mary');\n> > ! CREATE FUNCTION getfoo(int) RETURNS setof foo AS '\n> > ! SELECT * FROM foo WHERE fooid = $1;\n> > ! ' LANGUAGE SQL;\n> > ! SELECT * FROM getfoo(1) AS t1;\n> > ! </programlisting>\n> > ! \n> > ! <screen>\n> > ! fooid | foosubid | fooname\n> > ! -------+----------+---------\n> > ! 1 | 1 | Joe\n> > ! 1 | 2 | Ed\n> > ! (2 rows)\n> > ! </screen>\n> > </para>\n> > \n> > <para>\n> > ! When an SQL function is declared as returning <literal>SETOF\n> > ! <replaceable>sometype</></literal>, the function's final\n> > ! <command>SELECT</> query is executed to completion, and each row it\n> > ! outputs is returned as an element of the set.\n> > ! </para>\n> > ! \n> > ! <para>\n> > ! Functions returning sets may also currently be called in the target list\n> > ! of a <command>SELECT</> query. For each row that the <command>SELECT</>\n> > ! generates by itself, the function returning set is invoked, and an output\n> > ! row is generated for each element of the function's result set. Note,\n> > ! however, that this capability is deprecated and may be removed in future\n> > ! releases. The following is an example function returning a set from the\n> > ! 
target list:\n> > \n> > <programlisting>\n> > CREATE FUNCTION listchildren(text) RETURNS SETOF text AS\n> > ***************\n> > *** 1620,1625 ****\n> > --- 1653,1706 ----\n> > </para>\n> > </sect1>\n> > \n> > + <sect1 id=\"xfunc-tablefunctions\">\n> > + <title>Table Functions</title>\n> > + \n> > + <indexterm zone=\"xfunc-tablefunctions\"><primary>function</></>\n> > + \n> > + <para>\n> > + Table functions are functions that produce a set of rows, made up of\n> > + either base (scalar) data types, or composite (multi-column) data types.\n> > + They are used like a table, view, or subselect in the <literal>FROM</>\n> > + clause of a query. Columns returned by table functions may be included in\n> > + <literal>SELECT</>, <literal>JOIN</>, or <literal>WHERE</> clauses in the\n> > + same manner as a table, view, or subselect column.\n> > + </para>\n> > + \n> > + <para>\n> > + If a table function returns a base data type, the single result column\n> > + is named for the function. If the function returns a composite type, the\n> > + result columns get the same names as the individual attributes of the type.\n> > + </para>\n> > + \n> > + <para>\n> > + A table function may be aliased in the <literal>FROM</> clause, but it also\n> > + may be left unaliased. 
If a function is used in the FROM clause with no\n> > + alias, the function name is used as the relation name.\n> > + </para>\n> > + \n> > + <para>\n> > + Table functions work wherever tables do in <literal>SELECT</> statements.\n> > + For example\n> > + <programlisting>\n> > + CREATE TABLE foo (fooid int, foosubid int, fooname text, primary key(fooid,foosubid));\n> > + CREATE FUNCTION getfoo(int) RETURNS foo AS 'SELECT * FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> > + SELECT * FROM getfoo(1) AS t1;\n> > + SELECT * FROM foo where foosubid in (select foosubid from getfoo(foo.fooid) z where z.fooid = foo.fooid);\n> > + CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> > + SELECT * FROM vw_getfoo;\n> > + </programlisting>\n> > + are all valid statements.\n> > + </para>\n> > + \n> > + <para>\n> > + Currently, table functions are supported as SQL language functions\n> > + (<xref linkend=\"xfunc-sql\">) and C language functions\n> > + (<xref linkend=\"xfunc-c\">). See these individual sections for more\n> > + details.\n> > + </para>\n> > + \n> > + </sect1>\n> > \n> > <sect1 id=\"xfunc-plhandler\">\n> > <title>Procedural Language Handlers</title>\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 12:57:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Table Function (aka SRF) doc patch"
}
] |
[
{
"msg_contents": "On Fri, 2002-05-10 at 02:33, Dann Corbit wrote:\n> \n> It took a few hundred man hours to do it. \n\nAbout 2-8 weeks for one full time programmer ?\n\n> I see the whole Win32 port as\n> a non issue. Several parties have already completed it (including the\n> place where I work -- CONNX Solutions Inc.). If we did not do it or all\n> parties who already did it were hit by a comet or something, someone\n> else would accomplish it. It isn't trivial but it isn't impossible\n> either. If a need is large enough, someone will manage it. The need is\n> large enough. Ergo...\n\nDo you know which of these run ((reasonably) well) on win9x ?\n\n> Here are some other things related:\n> \n> A ready to go Win32 PosgreSQL package:\n> http://www.dbexperts.net/postgresql\n\nPerhaps we should back up and let dbexperts et.al. recover their costs\nand after that repent and commit changes back to main tree ;)\n\n## insert a little ad-hominem attack to everyone objecting a native \n## win32 port as owning stock in some win32-pg-selling company \n\nBTW, do they have an evaluation version or do they think that people\nwould in that case evaluate on win32 and then buy a cheap linux box for\n$495.- :)\n\n--------------------\nHannu\n\n\n",
"msg_date": "10 May 2002 01:15:02 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: Issues tangential to win32 support"
},
{
"msg_contents": "> -----Original Message-----\n> From: Hannu Krosing [mailto:hannu@tm.ee]\n> Sent: Thursday, May 09, 2002 12:10 PM\n> To: Jan Wieck\n> Cc: Scott Marlowe; PostgreSQL-development\n> Subject: Re: [HACKERS] Issues tangential to win32 support\n> \n> \n> On Thu, 2002-05-09 at 22:51, Jan Wieck wrote:\n> > Scott Marlowe wrote:\n> > > There are some issues that the whole idea of a win32 port \n> should bring up.\n> > > One of them is whether or not postgresql should be rewritten as a\n> > > multi-threaded app.\n> > \n> > Please, don't add this one to it.\n> > \n> > I'm all for the native Windows port, yes, but I've discussed\n> > the multi-thread questions for days at Great Bridge, then\n> > again with my new employer, with people on shows and whatnot.\n> > \n> > Anything in the whole backend is designed with a multi-\n> > process model in mind. You'll not do that in any reasonable\n> > amount of time.\n> \n> IIRC you are replying to the man who _has_ actually don this ?\n> \n> Perhaps using an unreasonable amount of time but still ... :)\n\nIt took a few hundred man hours to do it. I see the whole Win32 port as\na non issue. Several parties have already completed it (including the\nplace where I work -- CONNX Solutions Inc.). If we did not do it or all\nparties who already did it were hit by a comet or something, someone\nelse would accomplish it. It isn't trivial but it isn't impossible\neither. If a need is large enough, someone will manage it. The need is\nlarge enough. 
Ergo...\n\nHere are some other things related:\n\nA ready to go Win32 PostgreSQL package:\nhttp://www.dbexperts.net/postgresql\n\nAn open source project to productize PostgreSQL for Windows (has gone\nnowhere so far):\nhttp://gborg.postgresql.org/project/winpackage/projdisplay.php\n\nA native Win32 port accomplished by a Japanese Group:\nhttp://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\nIf you look under the \"Gists for Patch\" it contains exactly the same\ntasks that CONNX Solutions Inc. had to accomplish in every case.\n",
"msg_date": "Thu, 9 May 2002 14:33:40 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": false,
"msg_subject": "Re: Issues tangential to win32 support"
},
{
"msg_contents": "Dann Corbit wrote:\n> It took a few hundred man hours to do it. I see the whole Win32 port as\n> a non issue. Several parties have already completed it (including the\n> place where I work -- CONNX Solutions Inc.). If we did not do it or all\n> parties who already did it were hit by a comet or something, someone\n> else would accomplish it. It isn't trivial but it isn't impossible\n> either. If a need is large enough, someone will manage it. The need is\n> large enough. Ergo...\n> \n> Here are some other things related:\n> \n> A ready to go Win32 PosgreSQL package:\n> http://www.dbexperts.net/postgresql\n> \n> An open source project to productize PostgreSQL for Windows (has gone\n> nowhere so far):\n> http://gborg.postgresql.org/project/winpackage/projdisplay.php\n> \n> A native Win32 port accomplished by a Japanese Group:\n> http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n> If you look under the \"Gists for Patch\" it contains exactly the same\n> tasks that CONNX Solutions Inc. had to accomplish in every case.\n\nThese packages are based upon cygwin. Problems with cygwin:\n\n(1) GNU license issues.\n(2) Does not work well with anti-virus software\n(3) Since OS level copy-on-write is negated, process creation is much slower.\n(4) Since OS level copy on write is negated, memory that otherwise would not be\nallocated to the process is forced to ba allocated when the parent process data\nis copied.\n",
"msg_date": "Thu, 09 May 2002 17:41:29 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Issues tangential to win32 support"
}
] |
[
{
"msg_contents": "Hello everybody,\n\nThe last message of Chris helped me a lot.\n\nLet me give a short summary why do we (www.pgaccess.org) do what we do.\n\nWhat are the motives behind and what is the goal.\n\nMy company needed pgaccess exactly because of the nice visual 'schema'. The\n'schema', however, did not behave well if you give it 20-30 tables, so we\nasked Teo if he plans to patch this. The last official update on the site of\nTeo is from January 2001. Since then - if there have been patches, they have\nremained somehow unannounced. Teo said he has no time and we fixed it. We\nsent Teo patches several times and he came back with the following e-mail -\n\n> From: Constantin Teodorescu [mailto:teo@flex.ro]\n> Sent: Thursday, April 25, 2002 11:16 PM\n> To: Iavor Raytchev\n> Cc: Boyan Dzambazov; bartus.l@bitel.hu; cmaj@freedomcorpse.info\n> Subject: Re: Future PgAccess development\n>\n> Dear Iavor, Boyan, Bartus, Chris\n>\n> I am writing to you all because in the last days I have received from\n> all of you different patches and enhancements to PgAccess:\n>\n> - Iavor & Boyan in schema module\n> - Bartus in function handling\n> - Chris in report module\n>\n> Thank you all for your work and for developing PgAccess.\n>\n> For the moment, it's impossible to me to receive patches, maintain and\n> push a new version (0.99) of PgAccess. I am involved in a lot of other\n> projects and I have no free time.\n>\n> Furthermore, I am not familiar with the CVS and I have no free time to\n> learn something new right now.\n>\n> I ask you to join your efforts, to exchange between all of you the\n> patches that you have done and to try to set up a web site where\n> PgAccess development could continue in future. I don't know anything\n> about Sourceforge but it seems that they do such a thing. I want to stay\n> close to the discussions concerning the future of PgAccess and I want to\n> contribute with ideas, suggestions. 
But I feel that I will have no time\n> to build up a new release and I think that your enhancements should be\n> included in the next PostgreSQL release.\n>\n> I have also some changes in the query builder in order to support the\n> outer and inner join capabilities in PostgreSQL 7.x. but they are not\n> finished.\n>\n> Another important thing will be the changes that have to be done in\n> order to support table (row) editing without OID's because 7.2.x\n> versions allow table creation without OID's and table viewing is not\n> working any more.\n>\n> Thank you all , I'm waiting for your answers,\n>\n> Teo\n\n\nTo sum it up -\n\n-> pgaccess has not been officially updated since January 2001\n\n = there is no real interest in it or the interest is not public\n\n-> the author has no time\n\n = the project has no leader\n\n-> there are several people actively working on it\n\n = there is some interest\n\n-> the author gives us the chance to bring it to life\n\n = if we like it we must get it\n\n\nSo we did.\n\nWe took the www.pgaccess.org domain (in the name of Teo). We set up a\nserver. And we started searching for the latest pgaccess version to insert\nit into the cvs.\n\nFirst I thought Teo should have the latest version. He said - no, it should\nbe with the PostgreSQL distribution. I went there, but it did not seem very\nfresh. Then I continued my investigation and wrote to the\nwebmaster@postgresql.org - my goal was to really find all patches and\ninterested people and to bring the project to some useful place. Vince\nVielhaber wrote back that I should ask the HACKERS.\n\n\nSo I did.\n\nAnd now we are here.\n\nWe heard a lot of opinions from different sides.\n\nI would make the following summary -\n\n1] During the last year there has not been an active interest in and/or\ndevelopment of pgaccess. 
Or if it has been - it has not been very official.\n\n2] Currently there are at least four people who actively need pgaccess and\nwrite for it - Bartus, Chris, Boyan and myself.\n\n3] To talk about pgaccess without talking about PostgreSQL is nonsense -\npgaccess has one purpose and this is PostgreSQL.\n\n4] PostgreSQL is much bigger than pgaccess (organization-wise) - the\nproximity kills pgaccess. PostgreSQL is PostgreSQL. It is great - that's why\nwe spent so much time trying to do something about it. But pgaccess is not\nPostgreSQL - it is one of the great tools around PostgreSQL and must be\nindependent.\n\n5] gborg is a mess (I hope I do not hurt anybody's feelings) - just see the\nbroken images on the first page that have not been fixed for at least several\ndays. And the missing search. I have been searching in gborg for pgaccess\nseveral times - and I could not find it. I have the feeling that before\ngborg there was a very pretty postgresql.org style page with the projects -\nwhat happened to it?\n\n\nPROPOSAL\n\nWhat pgaccess needs is some fresh air - it needs a small and fresh team. It\nneeds its own web site, its own CVS, its own mailing list. So that the people\nwho love it, write for it and really need it are easy to identify and to talk\nto. This will not break its relationship to PostgreSQL in any way (see 3]\nabove).\n\n\nIn the end - I am not experienced in how decisions are taken in an open source\ncommunity - I have no idea what is next.\n\nMaybe someone can write a summary of the bad sides of the above proposal.\nAnd if there really are none - we should just proceed and have this nice\ntool alive and running.\n\nThanks everybody,\n\nIavor\n\n--\nwww.pgaccess.org\n\n",
"msg_date": "Thu, 9 May 2002 22:24:00 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "www.pgaccess.org - the official story (the way I saw it)"
},
{
"msg_contents": "On Thu, May 09, 2002 at 10:24:00PM +0200, Iavor Raytchev wrote:\n\n<nice summary of how we got here>\n\n> PROPOSAL\n> \n> What pgaccess needs is some fresh air - it needs a small and fresh team. It\n> needs own web site, own cvs, own mailing list. So that the people who love\n> it, write for it and really need it can be easy to identify and to talk to.\n> This will not break its relationship to PostgreSQL in any way (see 3] above)\n\nI'd suggest keeping a copy of pgaccess in the main tree, as well, and\npushing versions from the development CVS over on a regular basis. There\nare basically two types of development that will need to happen: adapting\npgaccess to changes in PostgreSQL, and developing new features, on top\nof the stable release of PostgreSQL. I suggest having two branches at\ncvs.pgaccess.org: one that tracks HEAD of pgsql, one that uses the latest\nstable release. As features stablize on the second branch, we push them\nover to the pgsql branch, then into the pgsql tree, itself. Note that\nwe might be able to write some pgaccess regression tests: at minimum,\nsome sanity tests on the schema we store in the database. At postgresql\nrelease time, we'd make sure to get the latest, freshest code into the\nmain tree, and distributions.\n\n> At the end - I am not experienced how decisions are taken in an open source\n> community - I have no idea what is next.\n\nLike this! Out in the open, on the mailing lists. 
This message of yours was\nexactly the right thing to post: you contacted the original maintainer, got\nthe 'mantle' passed over to the new group, and continue on.\n\nIt might be good to get a mailing list at the main site, rather than\nrunning our own: that way, people will find it, and Bruce or someone\nhas an easy place to push patches he receives for our approval.\n\n> May be one can write a summary what are the bad sides of the above proposal.\n> And if there are no such really - we should just proceed and have this nice\n> tool alive and running.\n\nOnly bad thing would be to let the code in the main postgresql tree rot:\neither we keep it fresh, or we ask to have it pulled.\n\nRoss\n",
"msg_date": "Thu, 9 May 2002 15:42:22 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: www.pgaccess.org - the official story (the way I saw it)"
},
{
"msg_contents": "Thanks Ross,\n\nThis sounds like a resolution.\n\n> I'd suggest keeping a copy of pgaccess in the main tree, as well, and\n> pushing versions from the development CVS over on a regular basis.\n\nI am not a cvs expert. We will check this with Stanislav - our system\nadministrator, when he is back from holiday on Monday. I am sure there\nshould be an automated way of keeping things fresh.\n\nWho is the contact person for the PostgreSQL cvs?\n\n> There\n> are basically two types of development that will need to happen: adapting\n> pgaccess to changes in PostgreSQL, and developing new features, on top\n> of the stable release of PostgreSQL.\n\nRight.\n\nIt will be nice if we can have assigned liaison officers on the PostgreSQL\nside who can father the relationship with the pgaccess.org team. Regular\nsessions when a release of PostgreSQL is about to happen also might improve\nthe work a lot.\n\n> I suggest having two branches at\n> cvs.pgaccess.org: one that tracks HEAD of pgsql, one that uses the latest\n> stable release. As features stablize on the second branch, we push them\n> over to the pgsql branch, then into the pgsql tree, itself. Note that\n> we might be able to write some pgaccess regression tests: at minimum,\n> some sanity tests on the schema we store in the database. At postgresql\n> release time, we'd make sure to get the latest, freshest code into the\n> main tree, and distributions.\n\nThis sounds beautiful. There is more meaning in it than words. I need to\nsleep on it to get it, and we need some time to set this process up. But I\nam sure we should follow this if we want to get anywhere.\n\n> Like this! Out in the open, on the mailing lists. This message of\n> yours was\n> exactly the right thing to post: you contacted the original\n> maintainer, got\n> the 'mantle' passed over to the new group, and continue on.\n\nWell, let's hope people will like it. 
We started doing it for our own needs.\nNow suddenly it became the centre of the Universe :)\n\n> It might be good to get a mailing list at the main site, rather than\n> running our own: that way, people will find it, and Bruce or someone\n> has an easy place to push patches he receives for our approval.\n\nYes, this will happen next week. We just launched this server and we need\na few more days to organize.\n\n> > May be one can write a summary what are the bad sides of the\n> above proposal.\n> > And if there are no such really - we should just proceed and\n> have this nice\n> > tool alive and running.\n>\n> Only bad thing would be to let the code in the main postgresql tree rot:\n> either we keep it fresh, or we ask to have it pulled.\n\nWell... as I said to Teo, Chris and Bartus when we started pgaccess.org - we\nneeded it and we started it. If we fall out of business and cannot provide the\nserver anymore - somebody else should take over. And they agreed to take the\nrisk. Until we are alive and breathing - we will be doing it. And until we\nare doing it - it will be fresh and blooming.\n\nThanks again,\n\nIavor\n\n",
"msg_date": "Thu, 9 May 2002 23:08:17 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "Re: www.pgaccess.org - the official story (the way I saw it)"
},
{
"msg_contents": "Hi everybody,\n\nI think, that our \"job\" is to help this project to grow up to fit the \nneeds of the people that are using it. In the last months I didn't \nnotice any activity around it. And there are real expectations that are \nstill unsatisfied.\n\nThis project really needs the fresh air. I think, to have the \npgaccess.org is something good, and we shold make this whole thing work.\n\nSo let's do it!\n\nLet's take the last stable release, let's apply the patches, and let's \nput it on the pgaccess.org, where everybody can reach it easily. If we \nfind some other patches we can easily apply them too.\nThe source is very \"readable\", not too complicated, even as a beginner \nin tcl I was able to make useful changes. Congratulations to Teo, he \ndid a very good job.\n\nTo have an enthusiastic group of developers around the pgaccess is good \nfor the postgres teem too.\n\nOnce again: LET'S DO IT!\n\nLevi.\n\nP.S: In the near future I'm planning to make the hungarian translation \ntoo.\n\nOn 2002.05.09 22:24 Iavor Raytchev wrote:\n> Hello everybody,\n> \n> The last message of Chris helped me a lot.\n> \n> Let me give a short summary why do we (www.pgaccess.org) do what we\n> do.\n> \n> What are the motives behind and what is the goal.\n> \n> My company needed pgaccess exactly because of the nice visual\n> 'schema'. The\n> 'schema', however, did not behave well if you give it 20-30 tables, so\n> we\n> asked Teo if he plans to patch this. The last official update on the\n> site of\n> Teo is from January 2001. Since then - if there have been patches,\n> they have\n> remained somehow unannounced. 
Teo said he has no time and we fixed it.\n> We\n> sent Teo patches several times and he came back with the following\n> e-mail -\n> \n> > From: Constantin Teodorescu [mailto:teo@flex.ro]\n> > Sent: Thursday, April 25, 2002 11:16 PM\n> > To: Iavor Raytchev\n> > Cc: Boyan Dzambazov; bartus.l@bitel.hu; cmaj@freedomcorpse.info\n> > Subject: Re: Future PgAccess development\n> >\n> > Dear Iavor, Boyan, Bartus, Chris\n> >\n> > I am writing to you all because in the last days I have received\n> from\n> > all of you different patches and enhancements to PgAccess:\n> >\n> > - Iavor & Boyan in schema module\n> > - Bartus in function handling\n> > - Chris in report module\n> >\n> > Thank you all for your work and for developing PgAccess.\n> >\n> > For the moment, it's impossible to me to receive patches, maintain\n> and\n> > push a new version (0.99) of PgAccess. I am involved in a lot of\n> other\n> > projects and I have no free time.\n> >\n> > Furthermore, I am not familiar with the CVS and I have no free time\n> to\n> > learn something new right now.\n> >\n> > I ask you to join your efforts, to exchange between all of you the\n> > patches that you have done and to try to set up a web site where\n> > PgAccess development could continue in future. I don't know anything\n> > about Sourceforge but it seems that they do such a thing. I want to\n> stay\n> > close to the discussions concerning the future of PgAccess and I\n> want to\n> > contribute with ideas, suggestions. But I feel that I will have no\n> time\n> > to build up a new release and I think that your enhancements should\n> be\n> > included in the next PostgreSQL release.\n> >\n> > I have also some changes in the query builder in order to support\n> the\n> > outer and inner join capabilities in PostgreSQL 7.x. 
but they are\n> not\n> > finished.\n> >\n> > Another important thing will be the changes that have to be done in\n> > order to support table (row) editing without OID's because 7.2.x\n> > versions allow table creation without OID's and table viewing is not\n> > working any more.\n> >\n> > Thank you all , I'm waiting for your answers,\n> >\n> > Teo\n> \n> \n> To sum it up -\n> \n> -> pgaccess has not been officially updated since January 2001\n> \n> = there is no real interest in it or the interest is not public\n> \n> -> the author has no time\n> \n> = the project has no leader\n> \n> -> there are several people actively working on it\n> \n> = there is some interest\n> \n> -> the author gives us the chance to bring life\n> \n> = if we like it we must get it\n> \n> \n> So we did.\n> \n> We took the www.pgaccess.org domain (on the name of Teo). We set up a\n> server. And we started searching for the latest pgaccess versioin to\n> insert\n> it into the cvs.\n> \n> First I thought Teo should have the latest version. He said - no, it\n> should\n> be with the PostgreSQL distribution. I went there, but it did not seem\n> very\n> fresh. Then I continued my investigation and wrote to the\n> webmaster@postgresql.org - my goal was to really find all patches and\n> intersted people and to bring the project to some useful place. Vince\n> Vielhaber wrote back that I should ask the HACKERS.\n> \n> \n> So I did.\n> \n> And now we are here.\n> \n> We heard a lot of opinions from different sides.\n> \n> I would make the following summary -\n> \n> 1] During the last 1 year there has not been an active interest in\n> and/or\n> development of pgaccess. 
Or if it has been - it has not been very\n> official.\n> \n> 2] Currently there are at least four people who actively need pgaccess\n> and\n> write for it - Bartus, Chris, Boyan and myself.\n> \n> 3] To talk about pgaccess without talking about PostgreSQL is a\n> nonsense -\n> pgaccess has one purpose and this is PostgreSQL.\n> \n> 4] PostgreSQL is too much bigger than pgaccess (organizationwize) -\n> the\n> proximity kills pgaccess. PostgreSQL is PostgreSQL. It is great -\n> that's why\n> we spent so much time trying to do something about it. Bug pgaccess is\n> not\n> PostgreSQL - it is one of the great tools around PostgreSQL and must\n> be\n> independent.\n> \n> 5] gborg is a mess (I hope I do not hurt anybody's feelings) - just\n> see the\n> broken images on first page that have not been fixed for at least\n> several\n> days. And the missing search. I have been searching in gborg for\n> pgaccess\n> several times - and I could not find it. I have the feeling that\n> before\n> gborg there was a very pretty postgresql.org style page with the\n> projects -\n> what happened to it?\n> \n> \n> PROPOSAL\n> \n> What pgaccess needs is some fresh air - it needs a small and fresh\n> team. It\n> needs own web site, own cvs, own mailing list. So that the people who\n> love\n> it, write for it and really need it can be easy to identify and to\n> talk to.\n> This will not break its relationship to PostgreSQL in any way (see 3]\n> above)\n> \n> \n> At the end - I am not experienced how decisions are taken in an open\n> source\n> community - I have no idea what is next.\n> \n> May be one can write a summary what are the bad sides of the above\n> proposal.\n> And if there are no such really - we should just proceed and have this\n> nice\n> tool alive and running.\n> \n> Thanks everybody,\n> \n> Iavor\n> \n> --\n> www.pgaccess.org\n> \n> \n",
"msg_date": "Fri, 10 May 2002 00:30:00 +0200",
"msg_from": "Bartus Levente <bartus.l@bitel.hu>",
"msg_from_op": false,
"msg_subject": "Re: www.pgaccess.org - the official story (the way I saw it)"
},
{
"msg_contents": "On Thu, 9 May 2002, Iavor Raytchev wrote:\n\n> Thanks Ross,\n>\n> This sounds like a resolution.\n>\n> > I'd suggest keeping a copy of pgaccess in the main tree, as well, and\n> > pushing versions from the development CVS over on a regular basis.\n>\n> I am not a cvs expert. We will check this with Stanislav - our system\n> administrator, when he is back from holiday on Monday. I am sure there\n> should be an automated way of keeping things fresh.\n>\n> Who is the contact person for the PostgreSQL cvs?\n> > Only bad thing would be to let the code in the main postgresql tree rot:\n> > either we keep it fresh, or we ask to have it pulled.\n>\n> Well... as I said to Teo, Chris and Bartus when we started pgaccess.org - we\n> need it and we start it. If we fall out of business and can not provide the\n> server anymore - somebody else should take over. And they agreed to take the\n> risk. Until we are alive and breathing - we will be doing it. And until we\n> are doing it - it will be fresh and blooming.\n\n From a PgSQL Project standpoint, pgaccess has always been included as a\nway of increasing the overall distribution of the package as a valid GUI\ninterface ... all that has ever happened in the past is that when a new\nrelease came out from Teo, Bruce has generally downloaded it and replaced\nwhat we had in CVS ... there were no patches involved ... I don't see why\nthat has to change, does it?\n\nIf the pgaccess.org folk would like, I can provide them with a means of\nbeing able to easily upload a new copy of each release to\nftp.postgresql.org, so that it can make use of the extensive distribution\nsystem wthat has been developeed over the years ... just let me know ...\n\n\n\n",
"msg_date": "Thu, 9 May 2002 20:27:32 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: www.pgaccess.org - the official story (the way I saw"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> From a PgSQL Project standpoint, pgaccess has always been included as a\n> way of increasing the overall distribution of the package as a valid GUI\n> interface ... all that has ever happened in the past is that when a new\n> release came out from Teo, Bruce has generally downloaded it and replaced\n> what we had in CVS ... there were no patches involved ... I don't see why\n> that has to change, does it?\n\nIdeally I think there should be only one master CVS copy of pgaccess ---\neither that should be the one in the postgresql.org tree, or we should\nremove pgaccess from postgresql.org and let it become a standalone\nproject with its own CVS someplace else. I know that right now, there\nare some changes in the postgresql.org tree that are not in Teo's tree,\nbecause I made some 7.2 fixes there last summer (having forgotten that\nour sources were not the master copy). This is not good, but it'll\nkeep happening if there are multiple CVS trees.\n\nWhich of those approaches to take is pretty much up to the new\nmaintainers of pgaccess --- if you guys would rather be a separate\nproject, fine, or we can work with you if you want postgresql.org\nto be the CVS repository. Personally I'd vote for the latter.\nThe JDBC folks have been working pretty successfully as a sub-project\nwithin the postgresql.org tree, so I think you could do the same.\nBut you might get more \"name recognition\" as a separate project.\n\n> If the pgaccess.org folk would like, I can provide them with a means of\n> being able to easily upload a new copy of each release to\n> ftp.postgresql.org, so that it can make use of the extensive distribution\n> system wthat has been developeed over the years ... just let me know ...\n\nRight, if there's a separate CVS we can still arrange to be an FTP\ndistribution channel.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 May 2002 19:49:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: www.pgaccess.org - the official story (the way I saw "
},
{
"msg_contents": "\nOn Thu, 9 May 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > From a PgSQL Project standpoint, pgaccess has always been included as a\n> > way of increasing the overall distribution of the package as a valid GUI\n> > interface ... all that has ever happened in the past is that when a new\n> > release came out from Teo, Bruce has generally downloaded it and replaced\n> > what we had in CVS ... there were no patches involved ... I don't see why\n> > that has to change, does it?\n> \n> Ideally I think there should be only one master CVS copy of pgaccess ---\n> either that should be the one in the postgresql.org tree, or we should\n> remove pgaccess from postgresql.org and let it become a standalone\n> project with its own CVS someplace else. I know that right now, there\n> are some changes in the postgresql.org tree that are not in Teo's tree,\n> because I made some 7.2 fixes there last summer (having forgotten that\n> our sources were not the master copy). This is not good, but it'll\n> keep happening if there are multiple CVS trees.\n> \n> Which of those approaches to take is pretty much up to the new\n> maintainers of pgaccess --- if you guys would rather be a separate\n> project, fine, or we can work with you if you want postgresql.org\n> to be the CVS repository. Personally I'd vote for the latter.\n> The JDBC folks have been working pretty successfully as a sub-project\n> within the postgresql.org tree, so I think you could do the same.\n> But you might get more \"name recognition\" as a separate project.\n\nI'm not part of this pgaccess group but having the repository at postgresql.org\nmakes sense to me as refreshing a local tree to capture changes to postgres is\nalso going to bring in any commited changes to pgaccess. That's easiest for\nkeeping everything in step, since breakages are going to be apparent straight\naway. 
If there's a separate repository then it's easy to see someone keeping\nup to date with one but not the other and ending up in a mess.\n\nOn the other hand, I also quite like the idea of it being maintained as a\nseparate entity with some sort of push to the main repository. I was also\ntrying to make a case for this based on the ease of enhancing and releasing\nfunctionality for those not on the bleeding edge, but I'm not so sure now since\nthat requires all fixes to keep in step with the backend to be backwards\ncompatible.\n\n> [snip]\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Fri, 10 May 2002 01:40:22 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: www.pgaccess.org - the official story (the way I saw"
},
{
"msg_contents": "What about http://sourceforge.net/projects/pgaccess/? It looks \ninactive but somebody did set it up on 2002-04-25. I think I\nfound it from Teo's website.\n\nMikE\n\n\n> \n> To sum it up -\n> \n> -> pgaccess has not been officially updated since January 2001\n> \n> = there is no real interest in it or the interest is not public\n> \n> -> the author has no time\n> \n> = the project has no leader\n> \n> -> there are several people actively working on it\n> \n> = there is some interest\n> \n> -> the author gives us the chance to bring life\n> \n> = if we like it we must get it\n> \n> So we did.\n> \n> We took the www.pgaccess.org domain (on the name of Teo). We set up a\n> server. And we started searching for the latest pgaccess versioin to insert\n> it into the cvs.\n> \n> First I thought Teo should have the latest version. He said - no, it should\n> be with the PostgreSQL distribution. I went there, but it did not seem very\n> fresh. Then I continued my investigation and wrote to the\n> webmaster@postgresql.org - my goal was to really find all patches and\n> intersted people and to bring the project to some useful place. Vince\n> Vielhaber wrote back that I should ask the HACKERS.\n> \n> So I did.\n> \n> And now we are here.\n> \n> We heard a lot of opinions from different sides.\n> \n> I would make the following summary -\n> \n> 1] During the last 1 year there has not been an active interest in and/or\n> development of pgaccess. Or if it has been - it has not been very official.\n> \n> 2] Currently there are at least four people who actively need pgaccess and\n> write for it - Bartus, Chris, Boyan and myself.\n> \n> 3] To talk about pgaccess without talking about PostgreSQL is a nonsense -\n> pgaccess has one purpose and this is PostgreSQL.\n> \n> 4] PostgreSQL is too much bigger than pgaccess (organizationwize) - the\n> proximity kills pgaccess. PostgreSQL is PostgreSQL. 
It is great - that's why\n> we spent so much time trying to do something about it. Bug pgaccess is not\n> PostgreSQL - it is one of the great tools around PostgreSQL and must be\n> independent.\n> \n> 5] gborg is a mess (I hope I do not hurt anybody's feelings) - just see the\n> broken images on first page that have not been fixed for at least several\n> days. And the missing search. I have been searching in gborg for pgaccess\n> several times - and I could not find it. I have the feeling that before\n> gborg there was a very pretty postgresql.org style page with the projects -\n> what happened to it?\n> \n> PROPOSAL\n> \n> What pgaccess needs is some fresh air - it needs a small and fresh team. It\n> needs own web site, own cvs, own mailing list. So that the people who love\n> it, write for it and really need it can be easy to identify and to talk to.\n> This will not break its relationship to PostgreSQL in any way (see 3] above)\n> \n> At the end - I am not experienced how decisions are taken in an open source\n> community - I have no idea what is next.\n> \n> May be one can write a summary what are the bad sides of the above proposal.\n> And if there are no such really - we should just proceed and have this nice\n> tool alive and running.\n> \n> Thanks everybody,\n> \n> Iavor\n> \n> --\n> www.pgaccess.org\n",
"msg_date": "Thu, 16 May 2002 17:34:49 -0700",
"msg_from": "Mike Embry <membry@engine-qfe0.sps.mot.com>",
"msg_from_op": false,
"msg_subject": "Re: www.pgaccess.org - the official story (the way I saw it)"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > From a PgSQL Project standpoint, pgaccess has always been included as a\n> > way of increasing the overall distribution of the package as a valid GUI\n> > interface ... all that has ever happened in the past is that when a new\n> > release came out from Teo, Bruce has generally downloaded it and replaced\n> > what we had in CVS ... there were no patches involved ... I don't see why\n> > that has to change, does it?\n> \n> Ideally I think there should be only one master CVS copy of pgaccess ---\n> either that should be the one in the postgresql.org tree, or we should\n> remove pgaccess from postgresql.org and let it become a standalone\n> project with its own CVS someplace else. I know that right now, there\n> are some changes in the postgresql.org tree that are not in Teo's tree,\n> because I made some 7.2 fixes there last summer (having forgotten that\n ^^^^^^^^^^^^^^^^^^^^^\n> our sources were not the master copy). This is not good, but it'll\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> keep happening if there are multiple CVS trees.\n\n[ Just catching up.]\n\nActually, the PostgreSQL CVS tree is the master pgacces source since Teo\nstopped working on it. I used to pass patches back to him but at one\npoint he told me that we should maintian the master copy.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 4 Jun 2002 14:37:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: www.pgaccess.org - the official story (the way I saw"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: mlw [mailto:markw@mohawksoft.com]\n> Sent: Thursday, May 09, 2002 2:41 PM\n> To: Dann Corbit\n> Cc: PostgreSQL-development\n> Subject: Re: Issues tangential to win32 support\n> \n> \n> Dann Corbit wrote:\n> > It took a few hundred man hours to do it. I see the whole \n> Win32 port as\n> > a non issue. Several parties have already completed it \n> (including the\n> > place where I work -- CONNX Solutions Inc.). If we did not \n> do it or all\n> > parties who already did it were hit by a comet or something, someone\n> > else would accomplish it. It isn't trivial but it isn't impossible\n> > either. If a need is large enough, someone will manage it. \n> The need is\n> > large enough. Ergo...\n> > \n> > Here are some other things related:\n> > \n> > A ready to go Win32 PosgreSQL package:\n> > http://www.dbexperts.net/postgresql\n> > \n> > An open source project to productize PostgreSQL for Windows \n> (has gone\n> > nowhere so far):\n> > http://gborg.postgresql.org/project/winpackage/projdisplay.php\n> > \n> > A native Win32 port accomplished by a Japanese Group:\n> > http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n> > If you look under the \"Gists for Patch\" it contains exactly the same\n> > tasks that CONNX Solutions Inc. had to accomplish in every case.\n> \n> These packages are based upon cygwin. Problems with cygwin:\n> \n> (1) GNU license issues.\n> (2) Does not work well with anti-virus software\n> (3) Since OS level copy-on-write is negated, process creation \n> is much slower.\n> (4) Since OS level copy on write is negated, memory that \n> otherwise would not be\n> allocated to the process is forced to ba allocated when the \n> parent process data\n> is copied.\n\nOur package avoids Cygwin altogether. 
We wrote our own POSIX layer from\nscratch, and we junked fork() for CreateProcess() {and inserted copious:\n#ifdef ICKY_WIN32_KLUDGE\n/* our code goes here */\n#else\n/* Standard UNIX code goes here */\n#endif\n\nIt's complete, and it performs like the burning blue blazes. We have\nrun the full PostgreSQL test suite to completion with success. However,\nbefore we release any SQL tool, we have our own test suite with tens of\nthousands of tests to perform. Hence, we won't have a release until\nJune at the earliest.\n\nI think the Japanese one also does not use Cygwin (but I have not tried\ninstalling it yet).\n",
"msg_date": "Thu, 9 May 2002 14:51:50 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Issues tangential to win32 support"
},
{
"msg_contents": "Dann Corbit wrote:\n> Our package avoids Cygwin altogether. We wrote our own POSIX layer from\n> scratch, and we junked fork() for CreateProcess() {and inserted copious:\n> #ifdef ICKY_WIN32_KLUDGE\n> /* our code goes here */\n> #else\n> /* Standard UNIX code goes here */\n> #endif\n\nOK, what sorts of things did you do in your ICKY_WIN32_KLUDGE? Were they ever\nmigrated back into the main tree? Did you simulate fork() or a stand-alone?\n\nI know Windows very well, but I have thus far remained ignorant of PostgreSQL\ninternals.\n\n> \n> It's complete, and it performs like the burning blue blazes. We have\n> run the full PostgreSQL test suite to completion with success. However,\n> before we release any SQL tool, we have our own test suite with tens of\n> thousands of tests to perform. Hence, we won't have a release until\n> June at the earliest.\n> \n> I think the Japanese one also does not use Cygwin (but I have not tried\n> installing it yet).\n\nThe japanese site claims cygwin.\n",
"msg_date": "Thu, 09 May 2002 17:56:01 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Issues tangential to win32 support"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: mlw [mailto:markw@mohawksoft.com]\n> Sent: Thursday, May 09, 2002 2:56 PM\n> To: Dann Corbit\n> Cc: PostgreSQL-development\n> Subject: Re: Issues tangential to win32 support\n> \n> \n> Dann Corbit wrote:\n> > Our package avoids Cygwin altogether. We wrote our own \n> POSIX layer from\n> > scratch, and we junked fork() for CreateProcess() {and \n> inserted copious:\n> > #ifdef ICKY_WIN32_KLUDGE\n> > /* our code goes here */\n> > #else\n> > /* Standard UNIX code goes here */\n> > #endif\n> \n> OK, what sorts of things did you do in your \n> ICKY_WIN32_KLUDGE? Were they ever\n> migrated back into the main tree? Did you simulate fork() or \n> a stand-alone?\n\nI explained it in another mail.\n\nWe had quite a few changes we had to make (several hundred man-hours,\nabout half of which was the POSIX layer and the precise time routines).\n\nNo sense trying to simulate fork() -- it stinks on Win32. The Cygwin\nand PW32 implementations of fork() are dogs. Smarter folks than us\ntried it and failed miserably. Why reinvent a broken wheel? We use\ncreate process and our own startup code. Our version is competitive\nwith fork() on Linux for spawning tasks and in general the queries run\nconsiderably faster.\n \n> I know Windows very well, but I have thus far remained \n> ignorant of PostgreSQL\n> internals.\n",
"msg_date": "Thu, 9 May 2002 15:10:43 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Issues tangential to win32 support"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: mlw [mailto:markw@mohawksoft.com]\n> Sent: Thursday, May 09, 2002 2:56 PM\n> To: Dann Corbit\n> Cc: PostgreSQL-development\n> Subject: Re: Issues tangential to win32 support\n> \n> \n> Dann Corbit wrote:\n> > Our package avoids Cygwin altogether. We wrote our own \n> POSIX layer from\n> > scratch, and we junked fork() for CreateProcess() {and \n> inserted copious:\n> > #ifdef ICKY_WIN32_KLUDGE\n> > /* our code goes here */\n> > #else\n> > /* Standard UNIX code goes here */\n> > #endif\n> \n> OK, what sorts of things did you do in your \n> ICKY_WIN32_KLUDGE? Were they ever\n> migrated back into the main tree? Did you simulate fork() or \n> a stand-alone?\n> \n> I know Windows very well, but I have thus far remained \n> ignorant of PostgreSQL\n> internals.\n> \n> > \n> > It's complete, and it performs like the burning blue \n> blazes. We have\n> > run the full PostgreSQL test suite to completion with \n> success. However,\n> > before we release any SQL tool, we have our own test suite \n> with tens of\n> > thousands of tests to perform. Hence, we won't have a release until\n> > June at the earliest.\n> > \n> > I think the Japanese one also does not use Cygwin (but I \n> have not tried\n> > installing it yet).\n> \n> The japanese site claims cygwin.\n\nThis is not correct. (Fortunately, we have someone here who reads and\nwrites Japanese).\nAt any rate, it is a complete, native implementation of PostgreSQL\nwithout any need for Cygwin.\nJust to be sure, I did a \"depends\" on each of the binaries and none of\nthem use Cywin.\n\nSo the Japanese site did exactly the same thing that we did.\n\nHere are bitmaps showing the complete dependency trees of both the\nJapanese efforts and ours as well:\nUs:\nftp://cap.connx.com/pub/chess-engines/new-approach/connx.bmp\n\nJapanese:\nftp://cap.connx.com/pub/chess-engines/new-approach/japanese.bmp\n\nNo Cygwin in sight in either case.\n\n",
"msg_date": "Thu, 9 May 2002 15:31:14 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Issues tangential to win32 support"
},
{
"msg_contents": "Dann Corbit wrote:\nhttp://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n\nMentions cygwin, am I misunderstanding?\n\nDoes not matter, the issue is that you guys said you did it. OK, have you been\nable to bring the changed back into the main source tree? (Are you not trying?)\n",
"msg_date": "Thu, 09 May 2002 18:33:50 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Issues tangential to win32 support"
},
{
"msg_contents": "> Dann Corbit wrote:\n> http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n> \n> Mentions cygwin, am I misunderstanding?\n\nAre you talking about following in the page?\n\n----------------------------------------------------------------\n* Notice: Based upon the GNU-cygwin, there is a version that works\nsimilar to the Unix-compatible operability. Tanida-san Web site is\nsupporting this environment in Japanese.\n----------------------------------------------------------------\n\nIt cleary refers to another work.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 10 May 2002 10:49:04 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Issues tangential to win32 support"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: mlw [mailto:markw@mohawksoft.com]\n> Sent: Thursday, May 09, 2002 3:34 PM\n> To: Dann Corbit\n> Cc: PostgreSQL-development\n> Subject: Re: Issues tangential to win32 support\n> \n> \n> Dann Corbit wrote:\n> http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n> \n> Mentions cygwin, am I misunderstanding?\n> \n> Does not matter, the issue is that you guys said you did it. \n> OK, have you been\n> able to bring the changed back into the main source tree? \n> (Are you not trying?)\n\nI am not enrolled in the CVS project, and don't even know how to use it.\nWe use \"Visual Source Safe\" here -- really an icky tool but at least\neveryone here knows it.\n\nThere is some debate here as to whether to keep the changes private or\nto turn them back to the project. Not sure how it will turn out.\n\nI am not sure that the project would want them anyway, since the\nrepresent some pretty major surgery and impact the readability of the\ncode in a quite adverse way.\n\nAt any rate, the Japanese version appears to be released. In fact, I\nhave downloaded the whole project and gave it a spin. It is actually\nvery nice. If you just need to use something for right now, why not go\nwith that version?\n\nIn any case, there is simply no way possible that anything will ever\nescape from here before June at the absolute earliest (full regression\ntest is company policy).\n",
"msg_date": "Thu, 9 May 2002 15:43:47 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Issues tangential to win32 support"
},
{
"msg_contents": "Dann Corbit wrote:\n> I am not enrolled in the CVS project, and don't even know how to use it.\n> We use \"Visual Source Safe\" here -- really an icky tool but at least\n> everyone here knows it.\n\nSource Safe? Yikes. I haven't used that in a long time.\n\n> \n> There is some debate here as to whether to keep the changes private or\n> to turn them back to the project. Not sure how it will turn out.\n> \n> I am not sure that the project would want them anyway, since the\n> represent some pretty major surgery and impact the readability of the\n> code in a quite adverse way.\n\nI hear you on that. I have tons of code that has #ifdef GCC and #ifdef WIN32 in\nlots of places. \n\nObviously you wrap what you can in macros and/or functions, but you can't do\nthat 100% the time. Some people REALLY hate #ifdef/#endif and view them as a\nbad coding practice. Others, like myself, view them as a proper usage of the\nlanguage constructs and judicious use of them actually help the developer\nunderstand the code better.\n\n> \n> At any rate, the Japanese version appears to be released. In fact, I\n> have downloaded the whole project and gave it a spin. It is actually\n> very nice. If you just need to use something for right now, why not go\n> with that version?\n\nI have no desire for a Windows version for myself, but I see the need for it.\n\n> \n> In any case, there is simply no way possible that anything will ever\n> escape from here before June at the absolute earliest (full regression\n> test is company policy).\n\nok\n",
"msg_date": "Thu, 09 May 2002 18:51:32 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Issues tangential to win32 support"
}
] |
[
{
"msg_contents": "Hello everybody,\n\nAfter Marc Fournier commented, it is time for pgaccess.org to make a\ndecision.\n\nIt is clear the project needs the following tools.\n\n- web site\n- mailing list(s)\n- cvs\n- bug tracking system\n\nIt is clear, that there is a small new group with fresh desire to contribute\nin a dedicated way.\n\nIt is clear, that pgaccess has only one meaning and this is PostgreSQL.\n\nIt is clear, that the PostgreSQL core team is very supportive.\n\nIt is clear, that pgaccess.org efforts can not result in anything good\nwithout a close collaboration with the PostgreSQL core team.\n\nNow, when we heard many different opinions, the question is - what is the\nbest decision of organization.\n\nI would make the following summary, please, send your comments -\n\n\nSUMMARY\n\n1] In terms of infrastructure, a separate web site, mailing list(s) and bug\ntracking system will increase the flexibility of the pgaccess team and will\nnot create additional (and not very useful) burden for the PostgreSQL core\nteam. The pgaccess is a tool - it is not an integral part of PostgreSQL and\ndoes not need day-to-day sharing. In the beginning it will be developed\nrather for the stable, than for the future versions of PostgreSQL.\n\n2] It is clear that there must be one master copy of the CVS. The\npossibilities are two - this copy is kept with PostgreSQL or this copy is\nkept with pgaccess.org\n\nIf the PostgreSQL core team can provide a CVS repository with similar\nflexibility to that it would have being based on the pgaccess.org server - I\nwould vote for a PostgreSQL hosted CVS. This will be the naval cord between\nthe two projects.\n\n3] Still - the only thing that is not clear to me is - who is going to\ncollect all patches and make one whole form them. As long as each of us\nworks on a different thing - this should not be a big problem, but still -\nneeds to be one person.\n\nIavor\n\n--\nwww.pgaccess.org\n\n",
"msg_date": "Fri, 10 May 2002 10:58:28 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "internal voting"
},
{
"msg_contents": "On Fri, 10 May 2002 10:58:28 +0200\n\"Iavor Raytchev\" <iavor.raytchev@verysmall.org> wrote:\n\n> Hello everybody,\n> \n> After Marc Fournier commented, it is time for pgaccess.org to make a\n> decision.\n> \n> It is clear the project needs the following tools.\n> \n> - web site\n> - mailing list(s)\n> - cvs\n> - bug tracking system\n> \n> It is clear, that there is a small new group with fresh desire to\n> contribute in a dedicated way.\n> \n> It is clear, that pgaccess has only one meaning and this is PostgreSQL.\n> \n> It is clear, that the PostgreSQL core team is very supportive.\n> \n> It is clear, that pgaccess.org efforts can not result in anything good\n> without a close collaboration with the PostgreSQL core team.\n> \n> Now, when we heard many different opinions, the question is - what is\n> the best decision of organization.\n> \n> I would make the following summary, please, send your comments -\n> \n> \n> SUMMARY\n> \n> 1] In terms of infrastructure, a separate web site, mailing list(s) and\n> bug tracking system will increase the flexibility of the pgaccess team\n> and will not create additional (and not very useful) burden for the\n> PostgreSQL core team. The pgaccess is a tool - it is not an integral\n> part of PostgreSQL and does not need day-to-day sharing. In the\n> beginning it will be developed rather for the stable, than for the\n> future versions of PostgreSQL.\n> \n> 2] It is clear that there must be one master copy of the CVS. The\n> possibilities are two - this copy is kept with PostgreSQL or this copy\n> is kept with pgaccess.org\n> \n> If the PostgreSQL core team can provide a CVS repository with similar\n> flexibility to that it would have being based on the pgaccess.org server\n> - I would vote for a PostgreSQL hosted CVS. This will be the naval cord\n> between the two projects.\n> \n> 3] Still - the only thing that is not clear to me is - who is going to\n> collect all patches and make one whole form them. 
As long as each of us\n> works on a different thing - this should not be a big problem, but still\n> - needs to be one person.\n> \n\nThis looks all good to me, except I have one question: How will pgaccess\nbe distributed? Personally, I like the idea that PG comes with pgaccess in\nthe distribution, so I would hate to see that go away. Even though there\nare people that don't use pgaccess, it is always nice to have a default \ntool that comes with PG (yes, I know there is psql).\n\n --brett\n\np.s. I am willing to help out as well...\n\n\n",
"msg_date": "Fri, 10 May 2002 04:25:52 -0700",
"msg_from": "Brett Schwarz <brett_schwarz@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "Iavor Raytchev writes:\n\n> 3] Still - the only thing that is not clear to me is - who is going to\n> collect all patches and make one whole form them. As long as each of us\n> works on a different thing - this should not be a big problem, but still -\n> needs to be one person.\n\nAs far as I'm concerned, there is no need to change anything. If someone\nhas patches for pgaccess, send them to -patches and they will be applied.\nWhen a new release of PostgreSQL happens, a new pgaccess will be\ndistributed. Simple enough.\n\nIf and when patches for pgaccess appear in significant numbers and for\nsome reason, which I cannot imagine, this procedure doesn't end up being\npractical, we can consider the alternatives. But before you spend a lot\nof time building a new infrastructure, let's see some code.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 10 May 2002 23:24:40 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "> If and when patches for pgaccess appear in significant numbers and for\n> some reason, which I cannot imagine, this procedure doesn't end up being\n> practical, we can consider the alternatives. But before you spend a lot\n> of time building a new infrastructure, let's see some code.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n\nWe are working on it, because we have some code.\n\nDon't you believe us, or do you think we have a lot of free time to waste?\n\nWe - Chris, Bartus, Boyan and myself, have enough patches we want to merge.\nAnd we do not feel like asking for permisson to do it. We sent them to Teo\nand we were asked by Teo to meet and see what we can do with our patches.\nAnd we were nice enough to tell the world about this.\n\nI do not feel neither like 'asking for permisson', nor like 'proving'\nanything. If somebody wants to help - is welcome.\n\nI think the discussion is over.\n\nWe have some work to do.\n\nIavor\n\n",
"msg_date": "Fri, 10 May 2002 23:40:28 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "pgaccess - the discussion is over"
},
{
"msg_contents": "On Fri, May 10, 2002 at 11:24:40PM +0200, Peter Eisentraut wrote:\n> Iavor Raytchev writes:\n> \n> > 3] Still - the only thing that is not clear to me is - who is going to\n> > collect all patches and make one whole form them. As long as each of us\n> > works on a different thing - this should not be a big problem, but still -\n> > needs to be one person.\n> \n> As far as I'm concerned, there is no need to change anything. If someone\n> has patches for pgaccess, send them to -patches and they will be applied.\n> When a new release of PostgreSQL happens, a new pgaccess will be\n> distributed. Simple enough.\n> \n> If and when patches for pgaccess appear in significant numbers and for\n> some reason, which I cannot imagine, this procedure doesn't end up being\n> practical, we can consider the alternatives. But before you spend a lot\n> of time building a new infrastructure, let's see some code.\n\nAll very practical, execpt for one point: the people being pulled togther\nfor this _have_ code, with nowhere to put it: they've been developing\nnew features for pgaccess, on top of the stable pgsql. Tracking CVS\ntip means that the current version of pgaccess there is either broken\nby underlying pgsql changes (I think that is currently true with Tom's\nschema work) or does not work with the current stable version og pgsql.\n\nWhile it would be nice to have one pgaccess that can work with any pgsql\nbackend, that's not currently the case. One solution would be to work\non the release branch, but that's discouraged - bug fixes only.\n\nRoss\n",
"msg_date": "Fri, 10 May 2002 16:42:05 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "...\n> While it would be nice to have one pgaccess that can work with any pgsql\n> backend, that's not currently the case. One solution would be to work\n> on the release branch, but that's discouraged - bug fixes only.\n\nActually, CVS can support this just fine (I'll mention how below) but\nafaict the discussion is moot because Iavor has declared that his group\nprefers to take another path for now.\n\n - Thomas\n\nThe solution could be to make a branch off of the stable branch to\nsupport the pgaccess work. When folks are ready to merge down and\ndevelop for 7.3, then they can do that using the \"-j\" option, jumping\nstraight from their development branch down to the main trunk (hence\nnever corrupting the stable branch), and then develop from there.\n",
"msg_date": "Fri, 10 May 2002 15:06:47 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "> Actually, CVS can support this just fine (I'll mention how below) but\n> afaict the discussion is moot because Iavor has declared that his group\n> prefers to take another path for now.\n>\n> - Thomas\n\nIt is not 'my' group! I just happened to ask somebody in my company to patch\nsomething and then I send the code to Teo. And Teo asked Chris, Bartus and\nmyself to bring our patches together. And I though of making this public.\n\nThe discussion, however, went far beyond my intention.\n\nIf somebody feels like managing this - I am off. I do not intend to moderate\nthe future of an open source project in such a heavy climate. Especialy of a\nproject I am not the author, not even a contributor - I am a pure manager.\n\nIf nobody feels like managing this - let's give it a little bit of life and\nmove it a bit forwards - and then talk again.\n\nNothing wrong can happen the next two weeks.\n\nIavor\n\n",
"msg_date": "Sat, 11 May 2002 00:29:11 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "...\n> If nobody feels like managing this - let's give it a little bit of life and\n> move it a bit forwards - and then talk again.\n\nIavor, I meant to be helpful; I was trying to put a name on The New\nGroup of Enthusiastic Developers Who Are Interested In Advancing\nPgaccess and shortened it to \"Iavor's group\". :)\n\n> Nothing wrong can happen the next two weeks.\n\nCertainly true. As The Developers Who Are Always Interested In Advancing\nPostgreSQL And Really Like Pgaccess, we have just been trying to be\nhelpful and to make TNGoEDWAIIAP feel welcome to take advantage of any\nexisting or new resources in postgresql.org which they might want. So,\nplease know that TDWAAIIAPARLP are happy to support anything that\nTNGoEDWAIIAP might want to do.\n\nRegards.\n\n - Thomas\n",
"msg_date": "Fri, 10 May 2002 16:32:24 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "Thomas :)))\n\nIn Europe it is 1:40 a.m.\n\nWish you a good night :)\n\nAnd thanks,\n\nIavor\n\n--\nTNGoEDWAIIAP\n\n-----Original Message-----\nFrom: lockhart@fourpalms.org [mailto:lockhart@fourpalms.org]\nSent: Saturday, May 11, 2002 1:32 AM\nTo: Iavor Raytchev\nCc: pgsql-hackers; Tom Lane; Stanislav Grozev; Ross J. Reedstrom; Nigel\nJ. Andrews; Marc G. Fournier; Constantin Teodorescu; Cmaj; Boyan\nFilipov; Boyan Dzambazov; Bartus. L; Brett Schwarz\nSubject: Re: [HACKERS] internal voting\n\n\n...\n> If nobody feels like managing this - let's give it a little bit of life\nand\n> move it a bit forwards - and then talk again.\n\nIavor, I meant to be helpful; I was trying to put a name on The New\nGroup of Enthusiastic Developers Who Are Interested In Advancing\nPgaccess and shortened it to \"Iavor's group\". :)\n\n> Nothing wrong can happen the next two weeks.\n\nCertainly true. As The Developers Who Are Always Interested In Advancing\nPostgreSQL And Really Like Pgaccess, we have just been trying to be\nhelpful and to make TNGoEDWAIIAP feel welcome to take advantage of any\nexisting or new resources in postgresql.org which they might want. So,\nplease know that TDWAAIIAPARLP are happy to support anything that\nTNGoEDWAIIAP might want to do.\n\nRegards.\n\n - Thomas\n\n",
"msg_date": "Sat, 11 May 2002 01:44:28 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "Ross J. Reedstrom writes:\n\n> All very practical, execpt for one point: the people being pulled togther\n> for this _have_ code, with nowhere to put it: they've been developing\n> new features for pgaccess, on top of the stable pgsql. Tracking CVS\n> tip means that the current version of pgaccess there is either broken\n> by underlying pgsql changes (I think that is currently true with Tom's\n> schema work) or does not work with the current stable version og pgsql.\n\nWe went through a very similar situation with the JDBC driver a release\nago. A number of people had developed fixes or features for the driver\nand no one was collecting them. We've got those people working on the 7.2\nbranch and everything worked out well. Yes, this meant that the features\nand fixes were not immediately available in the 7.1 branch. But the\nalternative of forking pgaccess now is that the available fixes and\nfeatures will not be available in the 7.3 branch for quite a while.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 11 May 2002 14:15:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "On 2002.05.11 14:15 Peter Eisentraut wrote:\n> Ross J. Reedstrom writes:\n> \n> > All very practical, execpt for one point: the people being pulled\n> togther\n> > for this _have_ code, with nowhere to put it: they've been\n> developing\n> > new features for pgaccess, on top of the stable pgsql. Tracking CVS\n> > tip means that the current version of pgaccess there is either\n> broken\n> > by underlying pgsql changes (I think that is currently true with\n> Tom's\n> > schema work) or does not work with the current stable version og\n> pgsql.\n> \n> We went through a very similar situation with the JDBC driver a\n> release\n> ago. A number of people had developed fixes or features for the\n> driver\n> and no one was collecting them. We've got those people working on the\n> 7.2\n> branch and everything worked out well. Yes, this meant that the\n> features\n> and fixes were not immediately available in the 7.1 branch. But the\n> alternative of forking pgaccess now is that the available fixes and\n> features will not be available in the 7.3 branch for quite a while.\n> \nBut we have fixes and patches for this (7.2) version, why we sould wait\nfor the next version. I think, there is no connection (should not be)\nbetween the versions of the pgaccess and the versions of the pgsql.\nPgaccess is a visual tool for pgsql, that can be developed freely\nwithout having anything to do with the pgsql developement.\nSo I cannot understand why the majority of the oppinions says that\npgaccess should stay in the shadow of the pgsql.\nBreaking this tight connection we can help pgaccess to develop as fast\nas it can, and we let free space for other projects to appear. For me\nthe first thing is to make my daily job as good and fast as I can. And\nthis is much easier with using the best tool for the particular\nproblem. 
This is why I started to make patches to this project.\nSorry but I can't wait for the next pgsql release to have this patches \nincluded in the package.\n\n> --\n> Peter Eisentraut peter_e@gmx.net\n> \n> \n",
"msg_date": "Sat, 11 May 2002 14:36:17 +0200",
"msg_from": "Bartus Levente <bartus.l@bitel.hu>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> We went through a very similar situation with the JDBC driver a release\n> ago. A number of people had developed fixes or features for the driver\n> and no one was collecting them. We've got those people working on the 7.2\n> branch and everything worked out well. Yes, this meant that the features\n> and fixes were not immediately available in the 7.1 branch.\n\nAu contraire --- what the JDBC folk did (and still are doing) was to\nmake \"unofficial\" releases consisting of snapshots pulled from their\nchunk of the CVS tree. There were people making use of the \"7.2 branch\"\nof JDBC long before the 7.2 server went beta, let alone final.\n\nNow this worked only because the JDBC driver makes a point of working\nwith older server versions as well as current, so it was possible to\nuse the updated driver with 7.1 and even older servers. I don't know\nwhether pgaccess does or should have a similar policy, but if it does\nthen the same approach should work well for it.\n\nThe alternative of maintaining a separate CVS tree and a separate\nrelease schedule would really force exactly that policy on pgaccess\nanyway --- if your releases aren't tied to the server's then you can\nhardly expect to be sure which server version people will try to use\nyour code with.\n\nOn the other hand, if the pgaccess developers would rather maintain\nseparate pgaccess versions for each server version, I see no reason\nwhy they couldn't do that in the context of our CVS. They could work\nin the REL7_2 branch for now (and make releases from it) then merge\nforward to HEAD when they want to start thinking about 7.3 issues.\nOr double-patch if they want to work on both versions concurrently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 May 2002 11:15:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: internal voting "
},
{
"msg_contents": "On Sat, 11 May 2002, Tom Lane wrote:\n\n> Au contraire --- what the JDBC folk did (and still are doing) was to\n> make \"unofficial\" releases consisting of snapshots pulled from their\n> chunk of the CVS tree. There were people making use of the \"7.2 branch\"\n> of JDBC long before the 7.2 server went beta, let alone final.\n> \n> Now this worked only because the JDBC driver makes a point of working\n> with older server versions as well as current, so it was possible to\n> use the updated driver with 7.1 and even older servers. I don't know\n> whether pgaccess does or should have a similar policy, but if it does\n> then the same approach should work well for it.\n\nAh, I'm just composing an email on this subject destined for the -interfaces\nlist.\n\n> \n> The alternative of maintaining a separate CVS tree and a separate\n> release schedule would really force exactly that policy on pgaccess\n> anyway --- if your releases aren't tied to the server's then you can\n> hardly expect to be sure which server version people will try to use\n> your code with.\n> \n> On the other hand, if the pgaccess developers would rather maintain\n> separate pgaccess versions for each server version, I see no reason\n> why they couldn't do that in the context of our CVS. They could work\n> in the REL7_2 branch for now (and make releases from it) then merge\n> forward to HEAD when they want to start thinking about 7.3 issues.\n> Or double-patch if they want to work on both versions concurrently.\n\nReally, I'd like interested parties to have a look at what I'm posting to\n-interfaces so they can shoot down my ideas on this.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Sat, 11 May 2002 16:51:52 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "\n[Note, I've changed the headers so everyone on the original distribution list\nis getting a copy via Bcc, including -hackers. It was the simplest way I could\nthink of making certain the discussion moved to -interfaces as Marc requested.]\n\n\nOn Sat, 11 May 2002, Bartus Levente wrote:\n> ... I think, there is no connection (should not be)\n> between the versions of the pgaccess and the versions of the pgsql.\n> Pgaccess is a visual tool for pgsql, that can be developed freely\n> without having anything to do with the pgsql development.\n\nYes.\n\n> So I cannot understand why the majority of the opinions says that\n> pgaccess should stay in the shadow of the pgsql.\n\nWho said shadow? FWIW, I'd never have bothered about pgaccess, nor even have\nknown about it, if it hadn't come in the main postgres tree.\n\n> Breaking this tight connection we can help pgaccess to develop as fast\n> as it can, and we let free space for other projects to appear. For me\n> the first thing is to make my daily job as good and fast as I can. And\n> this is much easier with using the best tool for the particular\n> problem. This is why I started to make patches to this project.\n> Sorry but I can't wait for the next pgsql release to have this patches \n> included in the package.\n\nUhoh, now we have a problem, unless your version is going to form the\ninitial repository or there's little or no impact across the preexisting code.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Sat, 11 May 2002 19:27:30 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: internal voting"
},
{
"msg_contents": "On 2002.05.11 20:27 Nigel J. Andrews wrote:\n> \n> [Note, I've changed the headers so everyone on the original\n> distribution list\n> is getting a copy via Bcc, including -hackers. It was the simplest way\n> I could\n> think of making certain the discussion moved to -interfaces as Marc\n> requested.]\n> \n> \n> On Sat, 11 May 2002, Bartus Levente wrote:\n> > ... I think, there is no connection (should not be)\n> > between the versions of the pgaccess and the versions of the pgsql.\n> > Pgaccess is a visual tool for pgsql, that can be developed freely\n> > without having anything to do with the pgsql development.\n> \n> Yes.\n> \n> > So I cannot understand why the majority of the opinions says that\n> > pgaccess should stay in the shadow of the pgsql.\n> \n> Who said shadow? FWIW, I'd never have bothered about pgaccess, that's\n> even I'd\n> even known about it, if it hadn't come in the main postgres tree.\n> \n> > Breaking this tight connection we can help pgaccess to develop as\n> fast\n> > as it can, and we let free space for other projects to appear. For\n> me\n> > the first thing is to make my daily job as good and fast as I can.\n> And\n> > this is much easier with using the best tool for the particular\n> > problem. This is why I started to make patches to this project.\n> > Sorry but I can't wait for the next pgsql release to have this\n> patches\n> > included in the package.\n> \n> Uhoh, now we have a problem, unless your version is going to form the\n> initial repository or there's little or no impact across the\n> preexisting code.\n> \n\nSure, there is a problem, that's why the whole discussion started. A\nsoftware project stalled for at least a year. Why? Is there no need for\nit? I can hardly believe that.\n\nSorry, but I cannot understand your last sentence. Could you explain it to\nme, please?\n\nI don't want to hurt anybody's feelings, I just want to help this\nsoftware to be better, nothing more.\n\n> --\n> Nigel J. 
Andrews\n> Director\n> \n> ---\n> Logictree Systems Limited\n> Computer Consultants\n> \n> \n\nBest regards,\nLevi.\n",
"msg_date": "Sat, 11 May 2002 20:51:45 +0200",
"msg_from": "Bartus Levente <bartus.l@bitel.hu>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] internal voting"
},
{
"msg_contents": "\nOn Sat, 11 May 2002, Bartus Levente wrote:\n\n> On 2002.05.11 20:27 Nigel J. Andrews wrote:\n\n> > On Sat, 11 May 2002, Bartus Levente wrote:\n> > > [snip]\n> > > problem. This is why I started to make patches to this project.\n> > > Sorry but I can't wait for the next pgsql release to have this\n> > > patches included in the package.\n> > \n> > Uhoh, now we have a problem, unless your version is going to form the\n> > initial repository or there's little or no impact across the\n> > preexisting code.\n> \n> [snip]\n>\n> Sorry, but I cannot understand your last sentence. Could you explain to \n> me, please?\n\nAll I mean is that if your patches are making major changes to the majority of\nthe files making up pgaccess, and the new CVS repository is going to be\ninitialised from a version that does not have your patches _and_ this\ninitial version has been patched by others, then there is possibly going to be a\nlot of work to bring your patches into CVS. On the other hand, I suppose this could\njust be viewed as the normal problem of merging patches, so I'm inclined not to\nworry. However, if people are working on pgaccess at the moment, then\nit is a good idea that their work is applied to pgaccess in the postgres tree\nso everyone else can get their hands on it while we wait for the new CVS.\n\nI am assuming here that there is a new CVS repository coming along for\npgaccess, that it will be initialised with the code from the postgres tree [and\nthat pgaccess in the postgres tree will be synchronised frequently].\n\n> I don't want to hurt anybody's feelings, I just want to help this \n> software to be better, nothing more.\n\nMine aren't hurt :)\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Sat, 11 May 2002 22:57:45 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] internal voting"
},
{
"msg_contents": "On 2002.05.11 23:57 Nigel J. Andrews wrote:\n> \n> On Sat, 11 May 2002, Bartus Levente wrote:\n> \n> > On 2002.05.11 20:27 Nigel J. Andrews wrote:\n> \n> > > On Sat, 11 May 2002, Bartus Levente wrote:\n> > > > [snip]\n> > > > problem. This is why I started to make patches to this project.\n> > > > Sorry but I can't wait for the next pgsql release to have this\n> > > > patches included in the package.\n> > >\n> > > Uhoh, now we have a problem, unless your version is going to form\n> the\n> > > initial repository or there's little or no impact across the\n> > > preexisting code.\n> >\n> > [snip]\n> >\n> > Sorry, but I cannot understand your last sentence. Could you explain\n> to\n> > me, please?\n> \n> All I mean is that if your patches are making major changes to the\n> majority of\n> the files making up pgaccess and that the new CVS repository is going\n> to be\n> initialised from a version that does not have your patches _and_ that\n> this\n> initial version has been patched by others then there is possibly\n> going to be a\n> lot of work to bring your patches into CVS. On the hand I suppose this\n> could\n> just be viewed as the normal problem of merging patches so I'm\n> inclined to not\n> worry. However, if people are working on pgaccess at the moment then\n> it is a good idea that their work is applied to pgaccess in the\n> postgres tree\n> so everyone else can get their hands on it while we wait for the new\n> CVS.\n\nWe would like to merge our patches into the last release of\npgaccess (apparently this is now in the postgres tree), and do the same\nfor future patches too.\nSo we set up pgaccess.org, with its own CVS, mailing lists and\nhomepage. 
When Iavor searched for the last release, a war almost started.\n\n> \n> I am assuming here that there is a new CVS repository coming along for\n> pgaccess, that it will be initialised with the code from the postgres\n> tree [and\n> that pgaccess in the postgres tree will be synchronised frequently].\n> \n\nWe would like to synchronise with the postgres tree, and to give people a\nplace where they can find the software, the latest release, drop\nwishes, and browse the to-do list more easily. Is there any to-do list in\nthe postgres package regarding pgaccess?\n\n> > I don't want to hurt anybody's feelings, I just want to help this\n> > software to be better, nothing more.\n> \n> Mine aren't hurt :)\n> \n> \n> --\n> Nigel J. Andrews\n> Director\n> \n> ---\n> Logictree Systems Limited\n> Computer Consultants\n> \n\nBest regards,\nLevi.\n",
"msg_date": "Sun, 12 May 2002 12:15:37 +0200",
"msg_from": "Bartus Levente <bartus.l@bitel.hu>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] internal voting"
},
{
"msg_contents": "Iavor Raytchev wrote:\n> \n> > If and when patches for pgaccess appear in significant numbers and for\n> > some reason, which I cannot imagine, this procedure doesn't end up being\n> > practical, we can consider the alternatives. But before you spend a lot\n> > of time building a new infrastructure, let's see some code.\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net\n> \n> We are working on it, because we have some code.\n> \n> Don't you believe us, or do you think we have a lot of free time to waste?\n> \n> We - Chris, Bartus, Boyan and myself, have enough patches we want to merge.\n> And we do not feel like asking for permission to do it. We sent them to Teo\n> and we were asked by Teo to meet and see what we can do with our patches.\n> And we were nice enough to tell the world about this.\n> \n> I do not feel neither like 'asking for permission', nor like 'proving'\n> anything. If somebody wants to help - is welcome.\n\nI find that this group is frustrating to work with. They seem very intolerant\nof the plurality.\n\nI did a configuration patch several months ago. I liked it, as did some others.\nIt did not affect any existing behavior, but added the ability to store\nconfiguration information in a different location than the data, and share\nfiles between multiple PostgreSQL instances.\n\nRather than evaluate the patch, and say it needs these changes, or simply\napply it, you know, working with the contributors to make a better project,\nthey ranted and raved about how they didn't like it, how they wanted something\nbetter, etc. No good technical reasons were given, mind you, just \"I don't like\nthis.\" \n\nSo, I did the work, for what? Nothing. It is pointless for me to make the\nchanges for each release. Fortunately it wasn't too much work. 
If you try to get some feedback from them about an approach\nyou wish to take, so you don't waste your time, they flame you and tell you to\nput up or shut up.\n\nIf you intend to undertake a major work on PostgreSQL, it had better be for\nsomething other than contribution back to the group, otherwise, there is a good\npossibility that you are going to waste your time.\n\nI do not get paid to work on PostgreSQL, the time I spend on it is either my\nown or for a project I am working on. I am finding it very unsatisfying.\n",
"msg_date": "Mon, 13 May 2002 10:56:29 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess - the discussion is over"
},
{
"msg_contents": "On Mon, 13 May 2002, mlw wrote:\n\n> Iavor Raytchev wrote:\n> >\n> > > If and when patches for pgaccess appear in significant numbers and for\n> > > some reason, which I cannot imagine, this procedure doesn't end up being\n> > > practical, we can consider the alternatives. But before you spend a lot\n> > > of time building a new infrastructure, let's see some code.\n> > >\n> > > --\n> > > Peter Eisentraut peter_e@gmx.net\n> >\n> > We are working on it, because we have some code.\n> >\n> > Don't you believe us, or do you think we have a lot of free time to waste?\n> >\n> > We - Chris, Bartus, Boyan and myself, have enough patches we want to merge.\n> > And we do not feel like asking for permisson to do it. We sent them to Teo\n> > and we were asked by Teo to meet and see what we can do with our patches.\n> > And we were nice enough to tell the world about this.\n> >\n> > I do not feel neither like 'asking for permisson', nor like 'proving'\n> > anything. If somebody wants to help - is welcome.\n>\n> I find that this group is frustrating to work with. They seem very intolerant\n> of the plurality.\n>\n> I did a configuration patch several months ago. I liked it, as did some others.\n> It did not affect any existing behavior, but added the ability to store\n> configuration information in a different location than the data, and share\n> files between multiple PostgreSQL instances.\n>\n> Rather than evaluate the patch, and say it needs these changes, or simply\n> applying it, you know, working with the contributor's to make a better project,\n> they ranted and raved how they didn't like it, how they wanted something\n> better, etc. No good technical reasons were given, mind you, just \"I don't like\n> this.\"\n>\n> So, I did the work, for what? Nothing. It is pointless for me to make the\n> changes for each release. Fortunately it wasn't too much work. 
So, my\n> experience tells me that unless the work you do is something they want, you are\n> wasting your time. If you try to get some feedback from them about an approach\n> you wish to take, so you don't waste your time, they flame you and tell you to\n> put up or shut up.\n>\n> If you intend to undertake a major work on PostgreSQL, it had better be for\n> something other than contribution back to the group, otherwise, there is a good\n> possibility that you are going to waste your time.\n>\n> I do not get paid to work on PostgreSQL, the time I spend on it is either my\n> own or for a project I am working on. I am finding it very unsatisfying.\n\nThis is the unfortunate impression I'm getting from some people on the\nHACKERS list, which is why discussion has moved temporarily to\nINTERFACES until pgaccess.org has its own mailing list. Plus that and\nthe fact that pgaccess is an interface to postgresql. Also, INTERFACES\nseems to be a lot more newbie questions but these people are trying to\nlearn so it is a more welcoming environment.\n\nWhat is strange is that you weren't even talking about a fork, which\nseems to be the central philosophical issue with pgaccess at the moment.\nAll I can say to reassure people is that it is still _PG_access, not\n_access_ *ick*.\n\n--Chris\n\n-- \n\ncmaj_at_freedomcorpse_dot_info\nfingerprint 5EB8 2035 F07B 3B09 5A31 7C09 196F 4126 C005 1F6A\n\n\n",
"msg_date": "Mon, 13 May 2002 11:49:51 -0400 (EDT)",
"msg_from": "\"C. Maj\" <cmaj@freedomcorpse.info>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess - the discussion is over"
},
{
"msg_contents": "[trimmed cc list, but left on HACKERS due to the nature of the subject (which\nwas changed)]\nOn Monday 13 May 2002 10:56 am, mlw wrote:\n> Iavor Raytchev wrote:\n> > Peter Eisentraut wrote:\n> > > let's see some code.\n\n> > I do not feel neither like 'asking for permission', nor like 'proving'\n> > anything. If somebody wants to help - is welcome.\n\n> I find that this group is frustrating to work with. They seem very\n> intolerant of the plurality.\n\n> I did a configuration patch several months ago. I liked it, as did some\n> others. It did not affect any existing behavior, but added the ability to\n> store configuration information in a different location than the data, and\n> share files between multiple PostgreSQL instances.\n\nWhile I personally felt that your patch was useful, there were other concerns.\n\n> Rather than evaluate the patch, and say it needs these changes, or simply\n> applying it, you know, working with the contributor's to make a better\n> project, they ranted and raved how they didn't like it, how they wanted\n> something better, etc. No good technical reasons were given, mind you, just\n> \"I don't like this.\"\n\nI think you might want to reread that thread. There _were_ in fact technical \naspects of the situation -- primarily due to the _plurality_ of the \ndevelopment process around here. It isn't 'plural' to have someone announce \nthat they have a patch, and then it gets applied without the discussion of \nthe established developers. No -- changes to this codebase are done by a \nplurality -- meaning the entire pgsql-hackers group. (Well, at least that's \nhow it's supposed to be -- it doesn't always work that way.....).\n\nYour patch was discussed -- the resolution seemed to me to be in favor of \nincluding that functionality in 7.3. 
Which made me very happy.\n\nThis isn't the Linux kernel with a benevolent dictator who can unilaterally \napply patches -- this is an oligarchy with the six core developers, together \nwith the rest of us, making those decisions as a group. The discussions \naren't flames -- at least I didn't take any of those discussions to be \nflames. While there are a few opinionated ones here (myself included), we do \ntend to take things on technical merit. Had your patch merited inclusion \nwithout discussion -- well, that wouldn't have happened, regardless of its \nmerits -- we were in beta, IIRC. IIRI, then I apologize. In beta new \nfeatures are frowned upon -- and your patch introduced a substantial new \nfeature, one that needed careful thought before implementation.\n\nWhile your patch works for you, as written it didn't necessarily work for \neveryone. BTW, it would have worked great for me and my purposes, but \nPostgreSQL isn't a vehicle for my personal purposes.\n\nThe discussion I remember was a little antsy primarily due to the fact that we \nwere in beta. Then was not the time; now is. Reintroduce the topic now, and \nlet's see what happens.\n\n> I do not get paid to work on PostgreSQL, the time I spend on it is either\n> my own or for a project I am working on. I am finding it very unsatisfying.\n\nI do not currently get paid for working on it either. Do I find it \nsatisfying? Most of the time I do. But if you don't find it satisfying, \nwell, there could be more than one reason.\n\nBut the biggest problem I see was the inappropriate timing of the patch. \nAgain, _NOW_ would be a good time to reintroduce the topic, as we're not in \nbeta, and all of the developers are much more likely to be open to these \nideas. 
But go back to the previous thread in the archives and see where we \nleft off first, so that everybody starts on the same page of music.\n\nBut understand that those who don't need the functionality are likely not to\nbe thrilled by changes to a currently stable codebase. Although this config \nfile stuff is small potatoes compared to the Win32 stuff as recently \ndiscussed. And for that, please understand that most of the developers here \nconsider Win32 an inferior server platform. In fact, Win32 _is_ an inferior \nserver platform, at least in my opinion. But, if you want to do the work, \nand it doesn't break my non-Win32 server build, by all means go for it.\n\nWith that said, I hope you'll consider sticking it out and seeing it through \nat least two major cycles.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 13 May 2002 19:05:55 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Discontent with development process (was:Re: pgaccess - the\n\tdiscussion is over)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Although this config file stuff is small potatoes compared to the\n> Win32 stuff as recently discussed. And for that, please understand\n> that most of the developers here consider Win32 an inferior server\n> platform. In fact, Win32 _is_ an inferior server platform, at least\n> in my opinion. But, if you want to do the work, and it doesn't break\n> my non-Win32 server build, by all means go for it.\n\nNote that \"doesn't break non-Win32 builds\" is not really the standard\nthat will get applied. Ongoing readability and maintainability of the\ncodebase is a very high priority in my eyes, and I think in the eyes\nof most of the key developers. To the extent that Win32 support can\nbe added without hurting those goals, I have nothing against it.\nI'll even put up with localized ugliness (see the BeOS support hacks\nfor an example of what I'd call localized ugliness). But I get unhappy\nwhen there's airy handwaving about moving all static variables into some\nglobal data structure, to take just one of the points that were under\ndiscussion last week. That'd be a big maintainability penalty IMHO.\n\nAs for the more general point --- my recollection of that thread was\nthat mlw himself was more than a bit guilty of adopting a \"my way or no\nway\" attitude; if he sees some pushback from the other developers maybe\nhe should consider the possibility that he's creating his own problem.\nIn general this development community is one of the most civilized I've\never seen. I don't think it's that hard to get consensus on most\ntopics. The consensus isn't always the same as my personal opinion...\nbut that's the price of being part of a community.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 22:03:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess - the\n\tdiscussion is over)"
},
{
"msg_contents": "On Mon, 13 May 2002, Lamar Owen wrote:\n\n> But understand that those who don't need the functionality are likely not not\n> be thrilled by changes to a currently stable codebase. Although this config\n> file stuff is small potatoes compared to the Win32 stuff as recently\n> discussed. And for that, please understand that most of the developers here\n> consider Win32 an inferior server platform. In fact, Win32 _is_ an inferior\n> server platform, at least in my opinion. But, if you want to do the work,\n> and it doesn't break my non-Win32 server build, by all means go for it.\n\nActually, even for those that wouldn't need the patch ... as long as the\n\"default behaviour\" doesn't change, and there are no valid\ntechnical arguments against it, there is no reason why a patch shouldn't be\nincluded ...\n\n",
"msg_date": "Tue, 14 May 2002 01:13:28 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess"
},
{
"msg_contents": "> Actually, even for those that wuldn't need the patch ... as long as the\n> \"default behaviour\" doesn't change, and unless there are no valid\n> technical arguments around it, there is no reason why a patch shouldn't be\n> included ...\n\nUnless it's going to interfere with implementing the general case in the\nfuture, making it a painful feature to keep backwards-compatibility with.\nWhich is what the discussion was about IIRC...\n\nChris\n\n",
"msg_date": "Tue, 14 May 2002 12:26:01 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess"
},
{
"msg_contents": "On Tue, 2002-05-14 at 04:03, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Although this config file stuff is small potatoes compared to the\n> > Win32 stuff as recently discussed. And for that, please understand\n> > that most of the developers here consider Win32 an inferior server\n> > platform. In fact, Win32 _is_ an inferior server platform, at least\n> > in my opinion. But, if you want to do the work, and it doesn't break\n> > my non-Win32 server build, by all means go for it.\n> \n> Note that \"doesn't break non-Win32 builds\" is not really the standard\n> that will get applied. Ongoing readability and maintainability of the\n> codebase is a very high priority in my eyes, and I think in the eyes\n> of most of the key developers. To the extent that Win32 support can\n> be added without hurting those goals, I have nothing against it.\n> I'll even put up with localized ugliness (see the BeOS support hacks\n> for an example of what I'd call localized ugliness). But I get unhappy\n> when there's airy handwaving about moving all static variables into some\n> global data structure,\n\nWhat would your opinion be of some hack with macros, like \n\n#if (Win32 or THREADED)\n#define GLOBAL_ pg_globals.\n#else\n#define GLOBAL_\n#endif\n\nand then use global variables as\n\nGLOBAL_globvar\n\nAt least in my opinion that would increase both readability and\nmaintainability.\n\n> to take just one of the points that were under\n> discussion last week. That'd be a big maintainability penalty IMHO.\n\n-----------------\nHannu\n\n\n",
"msg_date": "14 May 2002 10:21:02 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> What would your opinion be of some hack with macros, like \n\n> #if (Win32 or THREADED)\n> #define GLOBAL_ pg_globals.\n> #else\n> #define GLOBAL_\n> #endif\n\n> and then use global variables as\n\n> GLOBAL_globvar\n\n> At least in my opinion that would increase both readability and\n> maintainability.\n\nFrom a code readability viewpoint this is not at all better than just\nmoving everything to pg_globals. You're only spelling \"pg_globals.\"\na little differently. And it introduces twin possibilities for error:\nomitting GLOBAL_ (if you're a Unix developer) or writing\npg_globals. explicitly (if you're a Win32 guy). I suppose these errors\nwould be caught as soon as someone tried to compile on the other\nplatform, but it still seems like a mess with little redeeming value.\n\nI think there had been some talk of\n\n#if Win32\n#define myvar pg_globals.f_myvar\n#else\nstatic int myvar;\n#endif\n\nwhich seems a more effective use of macros --- it would at least allow\nthe code to be written without explicit awareness of the special status\nof the variable. Still seems like a maintenance nightmare though.\n\nThe real problem with airily saying \"we'll just move that variable to\npg_globals\" is that it falls down the instant that you consider\nnon-scalar variables. What if it's a pointer to a palloc'd structure?\nSure we can get around this, but not transparently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 May 2002 09:52:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess - the\n\tdiscussion is over)"
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Although this config file stuff is small potatoes compared to the\n> > Win32 stuff as recently discussed. And for that, please understand\n> > that most of the developers here consider Win32 an inferior server\n> > platform. In fact, Win32 _is_ an inferior server platform, at least\n> > in my opinion. But, if you want to do the work, and it doesn't break\n> > my non-Win32 server build, by all means go for it.\n>\n> Note that \"doesn't break non-Win32 builds\" is not really the standard\n> that will get applied. Ongoing readability and maintainability of the\n> codebase is a very high priority in my eyes, and I think in the eyes\n> of most of the key developers. To the extent that Win32 support can\n> be added without hurting those goals, I have nothing against it.\n\n The tricky twist will be to keep good readability while\n taking different solution approaches for different systems\n (e.g. fork() only for *NIX vs. CreateProcess() for Win). I\n agree that your high priority goal is a good one. Thinking\n about good old Unix semantics, having a higher priority means\n not being as nice as others, right? Then again, even with\n the lowest possible nice level a process doesn't own the CPU\n exclusively (so it never becomes rude).\n\n> I'll even put up with localized ugliness (see the BeOS support hacks\n> for an example of what I'd call localized ugliness). But I get unhappy\n> when there's airy handwaving about moving all static variables into some\n> global data structure, to take just one of the points that were under\n> discussion last week. 
That'd be a big maintainability penalty IMHO.\n\n As I understood it the idea was to put the stuff, the\n backends inherit from the postmaster, into a centralized\n place, instead of having it spread out all over the place.\n What's wrong with that?\n\n> As for the more general point --- my recollection of that thread was\n> that mlw himself was more than a bit guilty of adopting a \"my way or no\n> way\" attitude; if he sees some pushback from the other developers maybe\n> he should consider the possibility that he's creating his own problem.\n> In general this development community is one of the most civilized I've\n> ever seen. I don't think it's that hard to get consensus on most\n> topics. The consensus isn't always the same as my personal opinion...\n> but that's the price of being part of a community.\n\n Yeah, maybe democracy wasn't such a perfect idea at all ...\n\n\nJan ;-)\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Tue, 14 May 2002 10:17:31 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> As I understood it the idea was to put the stuff, the\n> backends inherit from the postmaster, into a centralized\n> place, instead of having it spread out all over the place.\n> What's wrong with that?\n\nThe main objection to it in my mind is that what had been private\nvariables in specific modules now become exceedingly public. Instead of\nlooking at \"static int foo\" and *knowing* that all the references are in\nthe current file, you have to go trolling the entire backend to see who\nis referencing pg_globals.foo.\n\nI have not counted to see how many variables are really affected; if\nthere's only a few then it doesn't matter much. But the people who\nhave done this so far have all reported inserting tons of #ifdefs,\nwhich leads me to the assumption that there's a lot of 'em.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 May 2002 10:30:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess - the\n\tdiscussion is over)"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>What would your opinion be of some hack with macros, like \n>>\n>\n>>#if (Win32 or THREADED)\n>>#define GLOBAL_ pg_globals.\n>>#else\n>>#define GLOBAL_\n>>#endif\n>>\n>\n>>and then use global variables as\n>>\n>\n>>GLOBAL_globvar\n>>\n>\n>>At least in my opinion that would increase both readability and\n>>maintainability.\n>>\n>\n>>From a code readability viewpoint this is not at all better than just\n>moving everything to pg_globals. You're only spelling \"pg_globals.\"\n>a little differently. And it introduces twin possibilities for error:\n>omitting GLOBAL_ (if you're a Unix developer) or writing\n>pg_globals. explicitly (if you're a Win32 guy). I suppose these errors\n>would be caught as soon as someone tried to compile on the other\n>platform, but it still seems like a mess with little redeeming value.\n>\n\nAnother suggestion might be to create a global hashtable that stores the \nsize and pointer\nto global structures for each subsection. Each subsection can define \nits own globals\nstructure and register them with the hashtable. This would not impact \nreadability and\nwould make the global environment easy to copy. IMHO, this is possible with \nminimal performance\nimpact.\n\nMyron Scott\nmkscott@sacadia.com\n\n\n",
"msg_date": "Tue, 14 May 2002 07:54:36 -0700",
"msg_from": "Myron Scott <mkscott@sacadia.com>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess - the\n\tdiscussion is over)"
},
{
"msg_contents": "Myron Scott <mkscott@sacadia.com> writes:\n> Another suggestion might be to create a global hashtable that stores\n> the size and pointer to global structures for each subsection. Each\n> subsection can define its own globals structure and register them with\n> the hashtable.\n\nHmm ... now *that* is an interesting idea.\n\nWith a little more intelligence in the manager of this table, this could\nalso solve my concern about pointer variables. Perhaps the entries\ncould include not just address/size but some type information. If the\nmanager knows \"this variable is a pointer to a palloc'd string\" then it\ncould do the Right Thing during fork. Not sure offhand what the\ncategories would need to be, but we could derive those if anyone has\ncataloged the variables that get passed down from postmaster to children.\n\nI don't think it needs to be a hashtable --- you wouldn't ever be doing\nlookups in it, would you? Just a simple list of things-to-copy ought to\ndo fine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 May 2002 11:59:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess - the\n\tdiscussion is over)"
},
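The list-of-things-to-copy idea sketched in the exchange above can be illustrated outside the backend. The following is a minimal, hypothetical Python sketch (the names `register_global` and `snapshot_globals` are invented for illustration, not a proposed API): each module registers the state its children need, with a kind tag telling the manager whether a deep copy is required, mirroring the address/size/type entries discussed.

```python
# Illustrative sketch (not PostgreSQL code): a "list of things-to-copy"
# as discussed above. Each module registers the globals its children
# need; a fork emulation walks the list and copies them, deep-copying
# pointer-like (mutable) entries so the child gets its own structure.
import copy

_registry = []  # (namespace, name, kind) triples

def register_global(namespace, name, kind="scalar"):
    """Register namespace[name] as state to hand down to child processes."""
    _registry.append((namespace, name, kind))

def snapshot_globals():
    """Build the state a child process should start with."""
    snap = {}
    for namespace, name, kind in _registry:
        value = namespace[name]
        # "pointer" entries are deep-copied, mimicking the extra type
        # information needed to do the Right Thing for palloc'd data.
        snap[name] = copy.deepcopy(value) if kind == "pointer" else value
    return snap

# Hypothetical parent-side module state:
parent = {"shmem_key": 42, "data_dir_config": {"port": 5432}}
register_global(parent, "shmem_key")
register_global(parent, "data_dir_config", kind="pointer")

child_state = snapshot_globals()
parent["data_dir_config"]["port"] = 5433       # later change in the parent...
print(child_state["data_dir_config"]["port"])  # ...child still sees 5432
```

As Tom Lane notes, no lookups are needed for the copy itself, so a plain list suffices; a hashtable only becomes interesting if a threaded backend must look globals up by name.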
{
"msg_contents": "\n\nTom Lane wrote:\n\n>\n>\n>With a little more intelligence in the manager of this table, this could\n>also solve my concern about pointer variables. Perhaps the entries\n>could include not just address/size but some type information. If the\n>manager knows \"this variable is a pointer to a palloc'd string\" then it\n>could do the Right Thing during fork. Not sure offhand what the\n>categories would need to be, but we could derive those if anyone has\n>cataloged the variables that get passed down from postmaster to children.\n>\n>I don't think it needs to be a hashtable --- you wouldn't ever be doing\n>lookups in it, would you? Just a simple list of things-to-copy ought to\n>do fine.\n>\t\n>\nI'm thinking in a threaded context where a method may need to look up a\nglobal that is not passed in. But for copying, I suppose no lookups \nwould be\nnecessary.\n\n\nMyron Scott\nmkscott@sacadia.com\n\n\n",
"msg_date": "Tue, 14 May 2002 09:17:58 -0700",
"msg_from": "Myron Scott <mkscott@sacadia.com>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess - the\n\tdiscussion is over)"
},
{
"msg_contents": "\n\nMark (mlw) ... could you generate a listing of those variables you feel\nwould need to be moved to a 'global structure' and post that to the list?\nThat would at least give us a starting point, instead of both sides\nguessing at what is/would be involved ...\n\n\nOn Tue, 14 May 2002, Tom Lane wrote:\n\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > As I understood it the idea was to put the stuff, the\n> > backends inherit from the postmaster, into a centralized\n> > place, instead of having it spread out all over the place.\n> > What's wrong with that?\n>\n> The main objection to it in my mind is that what had been private\n> variables in specific modules now become exceedingly public. Instead of\n> looking at \"static int foo\" and *knowing* that all the references are in\n> the current file, you have to go trolling the entire backend to see who\n> is referencing pg_globals.foo.\n>\n> I have not counted to see how many variables are really affected; if\n> there's only a few then it doesn't matter much. But the people who\n> have done this so far have all reported inserting tons of #ifdefs,\n> which leads me to the assumption that there's a lot of 'em.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Tue, 14 May 2002 13:29:23 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Global Variables (Was: Re: Discontent with development process\n\t(was:Re: pgaccess - the discussion is over) )"
},
{
"msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> Mark (mlw) ... could you generate a listing of those variables you feel\n> would need to be moved to a 'global structure' and post that to the list?\n> That would at least give us a starting point, instead of both sides\n> guessing at what is/would be involved ...\n\n(1) All the configuration info.\n(2) All the globals in postmaster.c\n(3) Make sure that memory contexts are initialized correctly.\n(4) Exception handling.\n(5) Make sure that the statistics and other child processes work too.\n\nIn BackendStartup(), rather than \"pid = fork();\" You should split the routine\nat that point, one end will be called for error and successful exec of child\nprocess, another will be called for the child.\n\nOn UNIX, it will merely be a slight rearrangement of the code. On Windows it\nwill represent a different function which will copy the globals from the\nparent, and call in. Think of it like this:\n\nCurrently it looks something like this:\n\nBackendStartup(port)\n{\n\t....\n\tpid = fork();\n\n\tif( pid < 0)\n\t\t// error\n\telse if(pid )\n\t\t// Still in Parent\n\telse\n\t\t// Do child\t\n}\n\nThis would have to change to this:\n\nBackendStartup(port)\n{\n\t...\n\n\tpid = StartBackendProcess(port);\n\n\tif(pid < 0)\n\t\t// Error\n\telse if(pid)\n\t\t// Still in Parent\n\telse\n\t\texit(DoBackend()); // Will never happen on Windows.\n}\n\n#ifdef WIN32\nStartBackendProcess(port)\n{\n\tHANDLE hprocess= CreateProcess(\"...../postgres\", ....);\n\n\t(initialize process here)\n\treturn hprocess;\n}\n#endif\n#ifdef HAS_FORK\nStartBackendProcess(port)\n{\n\treturn fork();\n}\n#endif\n\nIn the main code (src/backend/main), you would have to pass a parameter to the\nbackend to inform it that is being started as a child of the postmaster, and to\ncall DoBackend() under windows. MPI does this sort of thing.\n\nI see the whole thing as fairly straight forward. Fork is nothing more than a\ncopy. 
We should be able to identify what postmaster does prior to the fork of a\nbackend. The tricks are to handle the shared memory and semaphores, but those\nare easy too. We could create a DLL for Postgres which has implicitly shared\ndata amongst the processes, and make sure that Postmaster updates all the\nshared data prior to entering its server loop. That way the backends are only\nreading data from a shared resource.\n\nOnce the Windows version of PostgreSQL is able to exec the child, I think the\nareas where there are things that were missed should be pretty obvious.\n\nIt should take a pretty good engineer a few (full time, 40+ hours) weeks. It\nshould be mostly done the first week, the last two weeks would be chasing bugs\ncreated by variables that were not initialized. This assumes, of course, that\nyou are using a cygwin build environment without the cygwin or cygipc dlls. If\nwe were to use MS C/C++ it would take a much longer time, although ultimately\nthat may be the desired direction.\n\nP.S.\nI have unsubscribed from the hackers list, if you wish to contact me, use my\nemail address directly.\n",
"msg_date": "Tue, 14 May 2002 13:55:19 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Global Variables (Was: Re: Discontent with development "
},
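The BackendStartup() split described in the message above can be modeled in a few lines. This is a hypothetical Python sketch, not backend code: `start_backend_process` stands in for the platform-specific fork()/CreateProcess() wrapper, and the return-value convention (negative = error, positive = parent, zero = child) follows the pseudocode in the message.

```python
# Hypothetical sketch (not PostgreSQL source): the portable control flow
# of BackendStartup() once platform-specific process creation has been
# factored out into start_backend_process.

def backend_startup(port, start_backend_process, do_backend):
    pid = start_backend_process(port)
    if pid < 0:
        return "error"               # could not create the child
    elif pid > 0:
        return "parent"              # still in the postmaster
    else:
        return do_backend(port)      # child path; on Windows the child
                                     # would enter via main() instead

# Simulate a Unix-style fork() that returns pid 7 in the parent:
print(backend_startup(5432, lambda port: 7, None))
# Simulate the child side, where fork() returns 0:
print(backend_startup(5432, lambda port: 0, lambda port: "child"))
```

The point of the split is that only `start_backend_process` differs per platform, while the error/parent/child branching stays in shared code.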
{
"msg_contents": "On Tue, 14 May 2002, Myron Scott wrote:\n\n>\n>\n> Tom Lane wrote:\n>\n> >\n> >\n> >With a little more intelligence in the manager of this table, this could\n> >also solve my concern about pointer variables. Perhaps the entries\n> >could include not just address/size but some type information. If the\n> >manager knows \"this variable is a pointer to a palloc'd string\" then it\n> >could do the Right Thing during fork. Not sure offhand what the\n> >categories would need to be, but we could derive those if anyone has\n> >cataloged the variables that get passed down from postmaster to children.\n> >\n> >I don't think it needs to be a hashtable --- you wouldn't ever be doing\n> >lookups in it, would you? Just a simple list of things-to-copy ought to\n> >do fine.\n> >\n> >\n\n> I'm thinking in a threaded context where a method may need to look up a\n> global that is not passed in. But for copying, I suppose no lookups\n> would be necessary.\n\nif we can, can we keep the whole 'threaded' concept in mind when\ndeveloping this ... if a hashtable would be required for this, let's\ngo the more complete route and eliminate potential issues down the road\n...\n\n\n\n",
"msg_date": "Wed, 15 May 2002 00:45:26 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Discontent with development process (was:Re: pgaccess"
}
] |
[
{
"msg_contents": "Dear all,\n\nHere is a copy of a mail received from\n\"Robert Collins\" <robert.collins@itdomain.com.au>.\n\nJean-Michel POURE\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: Friday, May 10, 2002 6:30 PM\n> \n> Does setup.exe support uninstalling just like rpm -e package \n> name does? Are \n> dependencies taken into account during uninstall? \n\nNot currently, but it should. It will for the cygwin project eventually.\n \n> Is Cygwin listed in each package dependency? \n\nNo, as I said - it's optional (because cygwin itself is in base). As we\nare talking about the use of the codebase for debian, it doesn't really\nmatter what the cygwin setup.ini files contain though.\n \n> OK then. If I only want to install PostgreSQL, it will only \n> download the \n> required dependencies, right? Does the installer check \n> version dependency?\n\nIt only downloads what's needed for what you install. i.e. If you install\n(say) ncurses, it will download libncurses automatically. And\ndependencies are transitive. If A requires B, and B requires C, A does\nnot need to list C unless A is directly dependent on C. Setup will 'do\nthe right thing'.\nNot currently, it's on the todo, as is 'provides:'.\n \n> > Why do you say setup.exe is horrible? Bad architecture? Bad GUI? \n> > Doesn't work? The last two days of MD5 related errors that \n> I have not \n> > had time to look at?\n> \n> Bad GUI for sure.\n> \n> 1) There should be a small description of each package like in \n> .DEB or .RPM \n> packages. A single line is not enough. A Windows user does \n> not know what he \n> is downloading.\n\nWe want to put popups when you mouse over the packages. Also we want\nmore keyboard control, to assist partially disabled (or whatever the\npolitically correct description is) users. \n \n> 2) Packages should be listed in an on-line database. With a \n> full description \n> and manuals.\n\nhttp://www.cygwin.com/packages. 
However, because setup.ini, like the\ndebian Packages database is federated, this cannot be a complete list,\nonly a list of the cygwin-ditribution's packages.\n \n> 3) Cygwin installer should be accessible in the Control Panel \n> directly or in \n> Add/Remove software. Presently, it can only be access through \n> the setup.exe\n\nThere's no reason that it can't be. It'd only take a few registry\nentries. I've added this to the TODO list. However, the user would have\nto choose when to register setup.exe, because if the user chooses 'run\nfrom net' you wouldn't want the temporary copy of setup.exe to be\nregistered with Add/Remove.\n \n> 4) We need a setup.exe command line tool to implement limited \n> installers that \n> will not conflict with setup.exe. Example : if we release a \n> limited Cygwin \n> installer at PostgreSQL, we need to be sure it will not \n> conflict with Cygwin.\n\nThe setup.exe code base in HEAD is being heavily modified for reuse.\nIt's been a long term goal to make setup.exe's code available without a\nfull fork() being made of the code base. The first tool to appear will\nbe a setup.ini linter, similar to lintian, which will use the setup.ini\nparser, but nothing else. The code is in C++, and is slowly becoming\nclean. (It started off life as a sort-of C++ using C methodology\nproject, and that made it very hard to change.)\n \n> What is the on-going work as for setup.exe? Could you \n> describe shortly what is \n> in the hub ?\n\nThe cygwin-apps@cygwin.com mailing list is the best place to discuss\nsetup.exe. 
I think it's a little off-topic.\n\nSuffice to say, setup.exe is not a trivial application, and while a\nminimal version can be created quite easily, I really believe that\ncontributing to/leveraging setup.exe will be much more time-effective.\n\nRob\n\nCurrent WISHLIST and TODO's from CVS follow:\n\n(Some of these have been done, but not tested enough to remove from the\nlist).\n\nTODO:\n\n* Chooser dialog needs work.\n* Mirrors list order is snafued.\n* Don't downgrade if the curr version is <= installed?\n* support rpm/deb files for reading the package from. (To allow the\nmaintainers\nthe use of rpm/deb tools to create packages.)\n * make a librar(y|ies) for setup and cygcheck to use containing\n 1) Something to translate POSIX -> native. Currently called \"cygpath\"\n in setup, although this is probably a bad choice of name.\n 2) Something to return the list of installed packages.\n 3) Something to return the cygwin mount table. Currently, I have\nimplemented\n a lightweight setmntent and getmntent using the code in\n 4) Something to parse a tar file name into package/version or\nalternatively,\n return that information from 2)\n 5) Something to return a list of files associated with a package.\n* When installing and enough packages default to visible, the RH\nscrollbar is\n sometimes hidden.\n* Mark versions as prev/curr/test in the GUI when clicking through them.\n* Remove *empty* directories on uninstalls\n* Correctly overwrite -r--r--r-- files.\n* Make setup.exe available through Add/Remove\n\nWISHLIST:\n * rsync:// support\n * Some way to download *all* the source\n * When clicking on a category that is showing a partial list (auto\nadded items due to dependencies) show the full list rather than minimising.\n * incremental/recoverable download capability.\n * build-depends\n * FTP control connections should be closed when we are awaiting user\ninput.\n * Show a sdesc for each category\n * Add friendly error reporting to simpsock.cc\n * scan newly installed 
files for README files, show list to user, let\nthem read them if they want.\n * Mouse wheel support broken/missing for *some* users.\n * When in category view, and changing from prev->normal->exp the\ncategories get collapsed. This is non-intuitive.\n * mirrors.lst to be copied to setup.ini and cached locally. Then the\nmaster mirrors list is reserved for bootstrapping.\n * clicking on a package that is in multiple categories should update\nthe view\n of the package in both locations on screen. - Done?\n * remember the view mode - ie if you leave setup in partial, it\nreturns to partial automatically.\n * new view - \"action / category / package\"\n * Downloading from the internet should be _able_ to list based on\nwhat is present in the cache, as opposed to what is installed. (To help building a\ncomplete\n install set for a different machine).\n * new view - show installed packages only. Probably not categorised.\n * new view - show non installed packages only.\n - Have an option to display any downloaded READMEs (or at least\nmention that they exist)\n - Don't ask about the start menu or desktop options if they already\nexist\n * Save the manual proxy settings so they don't need to be retyped.\n * detect files in multiple packages\n * save all options\n * run a different script after finishing setup.\n * Show bin and src download size\n * Set ntsec permissions correctly, and for new installs enable ntsec.\n\n",
"msg_date": "Fri, 10 May 2002 19:11:47 +1000",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Cygwin Setup.exe future"
}
] |
[
{
"msg_contents": "1) Cygwin latest CVS installer version supports command lines.\n2) Cygwin setup.exe is not needed. According to Robert Collins, an appropriate \nsetup.ini file can be used for automatic installation.\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Fri, 10 May 2002 11:46:27 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Two pieces of information about Cygwin installer"
}
] |
[
{
"msg_contents": "Hi,\n\nif it is acceptable for subtransactions to use up transaction numbers,\nthen here is a half baked RFC for a possible implementation.\nIf not, forget the rest of this message.\n\nThe proposed implementation works much like the current transaction\nhandling. It needs an additional system table\npg_subtrans (child XactId PRIMARY KEY, parent XactId).\n\nBEGIN; -- starts a new (top level) transaction, say 100\n\nINSERT row1; -- row1.xmin = 100\nDELETE row2; -- row2.xmax = 100\n\nBEGIN; -- starts a subtransaction, let's call it 200,\n -- stores 100 on the parent transaction stack\n -- (a local memory structure),\n -- inserts (200, 100) into pg_subtrans\n\nINSERT row3; -- row3.xmin = 200, row3.XMIN_IS_SUB = true\n\nDELETE row4; -- row4.xmax = 200, row4.XMAX_IS_SUB = true\n\nCOMMIT; -- resets CurrentTransaction to 100 (pop from xact stack),\n -- does *NOT* mark T200 as committed\n\nBEGIN; -- starts a subtransaction, let's call it 300,\n -- pushes 100 on the parent transaction stack,\n -- inserts (300, 100) into pg_subtrans\n\nBEGIN; -- starts a 3rd level subtransaction (400),\n -- pushes 300 on the parent transaction stack,\n -- inserts (400, 300) into pg_subtrans\n\n...\n\nCOMMIT; -- resets CurrentTransaction to 300 (transaction stack),\n -- does NOT mark T400 as committed\n\nINSERT row5; -- row5.xmin = 300, row5.XMIN_IS_SUB = true\n\nDELETE row6; -- row6.xmax = 300, row6.XMAX_IS_SUB = true\n\nROLLBACK; -- resets CurrentTransaction to 100 (transaction stack),\n -- optionally removes (300, 100) from pg_subtrans,\n -- marks T300 as aborted\n\nCOMMIT; -- marks T100 as committed\nor\nROLLBACK; -- marks T100 as aborted\n\n\nVisibility:\n-----------\n\nThe checks for xmin and xmax are very similar. We look at xmin here:\n\nTraditionally a tuple is visible, if xmin has committed before the\ncurrent snapshot was taken, or if xmin == CurrentTransaction().\n\nA subtransaction is considered aborted, if it is marked aborted. 
Else\nit is considered to be in the same state as its parent transaction\n(which again can be a subtransaction).\n\nThe effects of tup.xmin are considered visible, if ...\n(This is not a formal specification. It shall only illustrate the\ndifference to the existing implementation of HeapTupleSatisfiesXxx()\nin tqual.c)\n\n if (tup.XMIN_ABORTED) // flag set by prior visitor\n return false;\n\n if (tup.XMIN_COMMITTED) // flag set by prior visitor\n return true;\n\n // xmin neither known aborted nor known committed,\n // could be active\n // or finished and tup not yet visited\n for (xmin = tup.xmin; IsValid(xmin); xmin = GetParentXact(xmin)) {\n if (TransactionIdDidAbort(xmin)) {\n tup.XMIN_ABORTED = true;\n return false;\n }/*if*/\n\n if (IsCurrentTransaction(xmin)) {\n // tup.xmin is one of my own subtransactions,\n // it is already committed. So tup can be\n // considered belonging to the current transaction.\n tup.xmin = xmin;\n tup.XMIN_IS_SUB = CurrentTransactionIsSub();\n return true; // or rather check cmin ...\n }/*if*/\n \n if (TransactionIdDidCommit(xmin)) {\n // xmin is a top level transaction\n tup.xmin = xmin;\n tup.XMIN_IS_SUB = false;\n tup.XMIN_COMMITTED = true;\n return true;\n }/*if*/\n\n if (!tup.XMIN_IS_SUB) {\n // Don't try expensive GetParentXact()\n break;\n }/*if*/\n }/*for*/\n\n // tup.xmin still active\n return false;\n\nTransactionId GetParentXact(TransactionId xnum) uses pg_subtrans to\nfind the parent transaction of xnum. It returns InvalidTransaction,\nif it doesn't find one.\n\n\nPerformance:\n------------\n\n. Zero overhead, if nested transactions are not used.\n\n. BEGIN SUB has to insert a pair of TransactionIds into pg_subtrans.\nApart from that it is not slower than BEGIN top level transaction.\n\n. COMMIT SUB is faster than COMMIT.\n\n. ROLLBACK SUB is much like ROLLBACK, plus (optionally) deletes one\nentry from pg_subtrans.\n\n. COMMIT and ROLLBACK of top level transactions don't care about\nsubtransactions.\n\n. 
Access a tuple inserted/deleted by a subtransaction: Zero\noverhead, if the subtransaction has been rolled back, otherwise the\nparent transaction has to be looked up in pg_subtrans (possibly\nrecursive). This price has to be paid only once per tuple (well, once\nfor xmin and once for xmax). More accurate: \"once after the\ninserting/deleting top level transaction has finished\".\n\n\nProblems:\n---------\n\n. pg_subtrans grows by 8 bytes per subtransaction.\n\n. Other pitfalls???\n\n\nAdministration:\n---------------\n\nAs soon as a top level transaction has finished, its subtransaction\nids are replaced by the top level transaction id on the next access to\neach tuple.\n\nVACUUM (*not* VACUUM tablename) removes old entries from pg_subtrans.\nAn entry is old, if the parent transaction has finished, before VACUUM\nstarted.\n\n\nChallenge:\n----------\n\nFor heavy use of subtransactions there has to be a really fast\nimplementation of pg_subtrans, maybe something like a b-tree.\n\nAFAICS small WAL changes: pg_subtrans inserts (and deletes?) have to\nbe logged.\n\nEverything else is straightforward.\n\nComments?\n\nServus\n Manfred\n",
"msg_date": "Fri, 10 May 2002 13:12:21 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "Nested transactions RFC"
},
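The visibility rule at the heart of the RFC above — a subtransaction is considered aborted if marked aborted, else it shares the fate of its parent — can be modeled compactly. The following is a hypothetical Python sketch replaying the transaction numbers from the example (100 top level, 200 and 300 as subtransactions, 400 nested under 300); `pg_subtrans` here is a plain dict standing in for the proposed child-to-parent structure, and the function names are invented for illustration.

```python
# Illustrative model of the proposal above (not backend code):
# pg_subtrans maps child xid -> parent xid; a subtransaction is aborted
# if explicitly marked, otherwise it inherits its parent's state.

pg_subtrans = {}               # child xid -> parent xid
committed, aborted = set(), set()

def xact_state(xid):
    """Resolve the effective state of xid by walking up pg_subtrans."""
    while True:
        if xid in aborted:
            return "aborted"
        if xid in committed:
            return "committed"      # only top-level xids get this mark
        if xid not in pg_subtrans:
            return "in-progress"
        xid = pg_subtrans[xid]      # not marked either way: ask the parent

# Replay the example from the RFC:
pg_subtrans[200] = 100   # BEGIN sub 200 under 100; COMMIT leaves no mark
pg_subtrans[300] = 100   # BEGIN sub 300 under 100
pg_subtrans[400] = 300   # BEGIN sub 400 under 300; COMMIT leaves no mark
aborted.add(300)         # ROLLBACK of subtransaction 300
committed.add(100)       # COMMIT of top-level transaction 100

print(xact_state(200))   # committed: 200 resolves through its parent 100
print(xact_state(400))   # aborted: 400's parent 300 was rolled back
```

Note how COMMIT of a subtransaction deliberately leaves no mark: tuples stamped with xid 200 or 400 become visible or invisible only when the chain up to the top-level transaction is decided.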
{
"msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> TransactionId GetParentXact(TransactionId xnum) uses pg_subtrans to\n> find the parent transaction of xnum.\n\nThis is not only extremely expensive, but in practice would cause\ninfinite recursion: any attempt to validate the commit state of a\nrow in pg_subtrans would result in a recursive attempt to search\npg_subtrans. I don't think we can do table access from inside the\ntqual.c routines.\n\nA practical implementation, which would cost little except tuple header\nspace (and yes I know that won't make you happy) would require 3 fields\ninstead of 2 for both the min and the max:\n\ttransaction ID\n\tsubtransaction ID\n\tcommand ID\nFirst check the transaction ID: if aborted or (in-progress and not\nmine), tuple is not visible. Next, if the subtransaction ID is not\nzero, similarly check it. Finally, if xid and sub-xid are both mine,\nthe command ID has to be checked.\n\nIn this scenario, subtransactions commit or abort by marking their\npg_clog entries, but no one else will care until the parent transaction\ncommits. So there is no extra state anywhere except for the stack\nof active transaction numbers inside each backend.\n\nPossibly we could use techniques similar to what you already suggested\nfor cmin/cmax to reduce the amount of physical storage needed for the\nsix logical fields involved.\n\n\t\t\tregards, tom lane\n\nPS: unfortunately, tuple validity checking is only a small part of what\nhas to be done to support subtransactions. The really nasty part is\nin fixing error recovery inside the backend so that (most) errors can\nbe dealt with by aborting only the innermost subtransaction.\n",
"msg_date": "Sat, 11 May 2002 11:51:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Nested transactions RFC "
},
{
"msg_contents": "Tom,\n\nreading my message again and your response, I see, that some points\nwere a bit unclear.\n\nOn Fri, 10 May 2002 13:12:21 +0200, I wrote:\n|if it is acceptable for subtransactions to use up transaction numbers,\nOf course, \"use up\" is nonsense, as it sounds like \"use all\navailable\"; this should have been \"use\" or \"draw from the pool of\".\nShould have listened better to my English teacher :-)\n\nOn Sat, 11 May 2002 11:51:37 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Manfred Koizar <mkoi-pg@aon.at> writes:\n>> TransactionId GetParentXact(TransactionId xnum) uses pg_subtrans to\n>> find the parent transaction of xnum.\n>\n>This is not only extremely expensive, but in practice would cause\n>infinite recursion: any attempt to validate the commit state of a\n>row in pg_subtrans would result in a recursive attempt to search\n>pg_subtrans. I don't think we can do table access from inside the\n>tqual.c routines.\n\nI wrote:\n|It needs an additional system table\n|pg_subtrans (child XactId PRIMARY KEY, parent XactId).\n\nBut no! \"Table\" is not the correct word for what I mean. I rather\nwant something living outside transactions and not accessed via normal\nSQL statements. It is to be handled by highly specialized and\noptimized routines, because fast access is crucial for the whole\nproposal. That's why I called \"a really fast implementation of\npg_subtrans\" a challenge. I had pg_clog in mind, but didn't find the\nright words.\n\n>A practical implementation, which would cost little except tuple header\n>space (and yes I know that won't make you happy) would require 3 fields\n :-)\n\n>instead of 2 for both the min and the max:\n>\ttransaction ID\n>\tsubtransaction ID\n>\tcommand ID\nThis was my first attempt. 
I've dismissed it for several reasons.\n\n>First check the transaction ID: if aborted or (in-progress and not\n>mine), tuple is not visible.\nI agree up to here.\n\n>Next, if the subtransaction ID is not\n>zero, similarly check it.\nNow imagine\nBEGIN 1;\nBEGIN 2;\nBEGIN 3;\nINSERT tup3;\nCOMMIT 3;\nROLLBACK 2;\nCOMMIT 1;\n\nThen in tup3 we would have xid==1 and subxid==3, both of which are\ncommitted, but nevertheless tup3 is invisible, because xact 2 aborted.\n\n>Finally, if xid and sub-xid are both mine,\n>the command ID has to be checked.\n>\n>In this scenario, subtransactions commit or abort by marking their\n>pg_clog entries, but no one else will care until the parent transaction\n>commits. So there is no extra state anywhere except for the stack\n>of active transaction numbers inside each backend.\nA *stack* of _active_ transaction numbers is not sufficient, we need\nthe whole *tree* of _all_ transactions belonging to the current top\nlevel transaction. This is what I wanted to model in my pg_subtrans\n\"table\". And pg_subtrans cannot be a private structure, because it\nhas to be inspected by other transactions too (cf. example above).\n\n>PS: unfortunately, tuple validity checking is only a small part of what\n>has to be done to support subtransactions. The really nasty part is\n>in fixing error recovery inside the backend so that (most) errors can\n>be dealt with by aborting only the innermost subtransaction.\nIs this really related to subtransactions? The current behaviour is\nthat an error not only aborts the offending command, but the whole\n(top level) transaction. My proposal doesn't change anything\nregarding this. Though I agree it would be desirable to have finer\ngrained error handling.\n\nYou have quoted only small parts of my posting. Do you agree with the\nrest? Or didn't you bother to comment, because you considered the\nwhole proposal refuted by your counter-arguments? 
I'll be fine either\nway, I just want to know.\n\nBTW, there's something missing from my visibility checks:\n| if (IsCurrentTransaction(xmin)) {\nhere we have to add \"or xmin is one of my (grand)*parents\".\n\nAnd of course, it would be nice to have named savepoints:\nBEGIN;\nBEGIN foo;\nBEGIN bar;\n...\nROLLBACK foo;\nCOMMIT; -- top level transaction\n\nServus\n Manfred\n",
"msg_date": "Sun, 12 May 2002 01:17:42 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "Re: Nested transactions RFC "
},
{
"msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> A *stack* of _active_ transaction numbers is not sufficient, we need\n> the whole *tree* of _all_ transactions belonging to the current top\n> level transaction. This is, want I wanted to model in my pg_subtrans\n> \"table\". And pg_subtrans cannot be a private structure, because it\n> has to be inspected by other transactions too (cf. example above).\n\nHmm. This seems to me to be vastly overdesigning the feature. I've\nnever yet seen a practical application for more than one level of\nsubtransaction, so I question whether we should buy into a substantially\nmore complex implementation to support the more general case.\n\n> Is this really related to subtransactions? The current behaviour is,\n> that an error not only aborts the offending command, but the whole\n> (top level) transaction. My proposal doesn't change anything\n> regarding this.\n\nEvery single application that I've seen for subtransactions is all about\nerror recovery. If we don't fix that then there's no point.\n\n> You have quoted only small parts of my posting.\n\nI don't believe in quoting whole postings, only enough to remind people\nwhat it was I'm responding to.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 May 2002 11:31:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Nested transactions RFC "
},
{
"msg_contents": "\nOn Sun, 12 May 2002, Tom Lane wrote:\n\n> Manfred Koizar <mkoi-pg@aon.at> writes:\n> > A *stack* of _active_ transaction numbers is not sufficient, we need\n> > the whole *tree* of _all_ transactions belonging to the current top\n> > level transaction. This is, want I wanted to model in my pg_subtrans\n> > \"table\". And pg_subtrans cannot be a private structure, because it\n> > has to be inspected by other transactions too (cf. example above).\n>\n> Hmm. This seems to me to be vastly overdesigning the feature. I've\n> never yet seen a practical application for more than one level of\n> subtransaction, so I question whether we should buy into a substantially\n> more complex implementation to support the more general case.\n\nI think it'd depend on how pervasive the feature is going to be. If we\nallow functions/rules/etc to start subtransactions I'm not sure it'd\nbe safe to say that only one level is safe since you might not know that\nyour subtransaction calls something that wants to start a subtransaction,\nbut you'd probably expect that anything it does would be undone when you\nrollback your subtransaction, just like it would if the items weren't\nin a subtransaction.\n\n",
"msg_date": "Sun, 12 May 2002 10:23:32 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Nested transactions RFC "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Manfred Koizar <mkoi-pg@aon.at> writes:\n> > A *stack* of _active_ transaction numbers is not sufficient, we need\n> > the whole *tree* of _all_ transactions belonging to the current top\n> > level transaction. This is, want I wanted to model in my pg_subtrans\n> > \"table\". And pg_subtrans cannot be a private structure, because it\n> > has to be inspected by other transactions too (cf. example above).\n> \n> Hmm. This seems to me to be vastly overdesigning the feature. I've\n> never yet seen a practical application for more than one level of\n> subtransaction, so I question whether we should buy into a substantially\n> more complex implementation to support the more general case.\n\nI'm for Manfred with this point. I would never suppose\nthat nested transactions supports only 1 level.\n\n> \n> > Is this really related to subtransactions? The current behaviour is,\n> > that an error not only aborts the offending command, but the whole\n> > (top level) transaction. My proposal doesn't change anything\n> > regarding this.\n> \n> Every single application that I've seen for subtransactions is all about\n> error recovery. If we don't fix that then there's no point.\n\nThe problem exists with any implementation.\nThough tuple validity checking may be only a small part\n(I don't think so), I've never seen such proposal other\nthan Manfred's one.\n\nIf I remember correctly, savepoints functionality\nwas planned for 7.0 but probably we wouldn't have\nit in 7.3. The TODO may be a TODO for ever unless\nthe direction to solve the TODO is decided.\n\n1) without UNDO for individual tuples.\n2) with UNDO for individual tuples under no\n overwriting smgr.\n3) UNDO under overwriting smgr.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Mon, 13 May 2002 09:53:13 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Nested transactions RFC"
}
] |
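The visibility argument in the thread above, that a tuple's xid and subxid can both be committed while an intermediate subtransaction aborted, can be sketched as a toy model. This is plain Python, not PostgreSQL internals; the names (`XactTree`, `tuple_visible`) are made up for illustration of why parent links (Manfred's pg_subtrans idea) are needed rather than a stack of active xids.

```python
# Toy model (not PostgreSQL code): a tuple stamped by subtransaction `subxid`
# is visible to others only if every transaction on the path from `subxid`
# up to the top level committed. A single aborted ancestor hides it.

COMMITTED, ABORTED = "committed", "aborted"

class XactTree:
    def __init__(self):
        self.parent = {}   # xid -> parent xid (None for a top-level xact)
        self.status = {}   # xid -> COMMITTED / ABORTED (absent = in progress)

    def begin(self, xid, parent=None):
        self.parent[xid] = parent

    def commit(self, xid):
        self.status[xid] = COMMITTED

    def rollback(self, xid):
        self.status[xid] = ABORTED

    def tuple_visible(self, subxid):
        """Walk the parent chain; all ancestors must have committed."""
        x = subxid
        while x is not None:
            if self.status.get(x) != COMMITTED:
                return False
            x = self.parent[x]
        return True

# Manfred's example: BEGIN 1; BEGIN 2; BEGIN 3; INSERT tup3;
# COMMIT 3; ROLLBACK 2; COMMIT 1;
t = XactTree()
t.begin(1); t.begin(2, parent=1); t.begin(3, parent=2)
t.commit(3); t.rollback(2); t.commit(1)
print(t.tuple_visible(3))   # False: xid 1 and subxid 3 committed, but 2 aborted
```

Because any backend may need to make this check for tuples stamped by other backends, the parent links cannot live only in the stamping backend's private memory.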
[
{
"msg_contents": "Some comments from Jason Tishler the Cygwin-PostgreSQL maintainer...\n\n> -----Original Message-----\n> From: Jason Tishler [mailto:jason@tishler.net] \n> Sent: 10 May 2002 15:00\n> To: Dave Page\n> Cc: pgsql-cygwin@postgresql.org\n> Subject: Cygwin PostgreSQL Information and Suggestions\n> \n> \n> Dave,\n> \n> Would you forward this to pgsql-hackers since I'm not subscribed?\n> \n> On Thu, May 09, 2002 at 10:45:42PM +0100, Dave Page wrote:\n> > > -----Original Message-----\n> > > From: Jason Tishler [mailto:jason@tishler.net]\n> > > Sent: 09 May 2002 21:52\n> > > To: Dave Page\n> > > \n> > > On Thu, May 09, 2002 at 07:51:33PM +0100, Dave Page wrote:\n> > > > BTW Are you aware there is currently a rather busy thread\n> > > > about native Windows/Beos ports on -hackers...\n> > > \n> > > No, I'm not subscribed, but I just read all that I could find\n> > > in the archives.\n> > > [snip]\n> > > \n> > > > ...which is currently drifting towards a cutdown Cygwin version?\n> > > \n> > > Maybe I'll be out of (another) job soon? :,)\n> > \n> > [snip]\n> > \n> > Personnally, I think (from a 'good for PostgreSQL' rather \n> than 'good \n> > for Cygwin' perspective) that the way forward is a Cygwin \n> based system \n> > but using a tailored downloader/installer that installs the system \n> > 'like a Windows app' (and quickly & easily etc.) rather than the \n> > current way which is Windows 'being' *nix. I think that's very \n> > offputting for many potential users (as others have said on the \n> > -hackers thread).\n> \n> I agree with the above, but more can be done with Cygwin and \n> its setup.exe that can give a fair amount of bang for the \n> buck for some good short time gains too. I will give some \n> details below.\n> \n> I also wanted to dispel some misinformation (IMO) that I \n> perceived from the above mentioned posts and/or elaborate on \n> some of the items:\n> \n> 1. Cygwin's setup.exe supports categories and dependencies. 
\n> Hence, there is no reason to install all Cygwin packages in \n> order to ensure properly PostgreSQL operation. Someone just \n> has to determine what is the minimal set of packages \n> necessary for PostgreSQL and I will update the setup.hint \n> accordingly. The current setup.hint is as follows:\n> \n> sdesc: \"PostgreSQL Data Base Management System\"\n> category: Database\n> requires: ash cygwin readline zlib libreadline5\n> \n> Sorry, but since I install all Cygwin packages plus about 30 \n> additional ones I haven't desire to determine what are the \n> minimal requirements.\n> \n> 2. Cygwin's setup.exe is customizable. There is a tool \n> called \"upset\" that generates the setup.ini file that drives \n> setup.exe. PostgreSQL could offer a customized setup. For \n> example, this is what the XEmacs folks are doing.\n> \n> 3. Cygwin's setup.exe can run package specific postinstall \n> scripts during the installation. Hence, someone could \n> automate the steps enumerated (e.g., postmaster NT service \n> installation, initdb, etc.) in my README:\n> \n \nhttp://www.tishler.net/jason/software/postgresql/postgresql-7.2.1.README\n\nto ease the installation burden.\n\n4. Cygwin PostgreSQL is perceived to have poor performance. I have\nnever done any benchmarks regarding this issue, but apparently Terry\nCarlin (from the defunct Great Bridge) did:\n\n http://archives.postgresql.org/pgsql-cygwin/2001-08/msg00029.php\n\nSpecifically, he indicates the following:\n\n BTW, Up through 40 users, PostgreSQL under CYGWIN using the TPC-C\n benchmark performed very much the same as Linux PostgreSQL on the\n exact hardware.\n\n5. Cygwin PostgreSQL is perceived to have poor reliability.\nUnfortunately, I have not been able to gather data to concur or refute\nthis perception due a sudden job \"change\" last summer. 
:,) However,\nthere are reports such as the following on the pgsql-cygwin list:\n\n http://archives.postgresql.org/pgsql-cygwin/2002-04/msg00021.php\n\nIMO, the biggest reliability issue with Cygwin PostgreSQL is its\ndependency on cygipc. There is some very recent work to create a Cygwin\ndaemon to support features such as System V IPC. So soon the cygipc\ndependency and its \"problems\" will be going away.\n\nThose interested in a \"Windows\" PostgreSQL should possibly consider\ncontributing in this area or other \"hard edges\" (due to Windows-isms)\nthat would improve the reliability of Cygwin PostgreSQL. BTW, I have\nfound the Cygwin core developers very responsive to PostgreSQL problems\nbecause it drives the Cygwin DLL harder than most other applications.\n\n6. Satisfying the Cygwin license for binary distribution is very simple.\nJust include the source for the Cygwin DLL and all executables that are\nlinked with it in your distribution package. It is really that easy.\n\nJason\n",
"msg_date": "Fri, 10 May 2002 14:55:53 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "FW: Cygwin PostgreSQL Information and Suggestions"
},
{
"msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> forwards:\n> 4. Cygwin PostgreSQL is perceived to have poor performance. I have\n> never done any benchmarks regarding this issue, but apparently Terry\n> Carlin (from the defunct Great Bridge) did:\n\n> http://archives.postgresql.org/pgsql-cygwin/2001-08/msg00029.php\n\n> Specifically, he indicates the following:\n\n> BTW, Up through 40 users, PostgreSQL under CYGWIN using the TPC-C\n> benchmark performed very much the same as Linux PostgreSQL on the\n> exact hardware.\n\nIt should be noted that the benchmark Terry is describing fires up\nN concurrent backends and then measures the runtime for a specific query\nworkload. So it's not measuring connection startup time, which is\nalleged by some to be Cygwin's weak spot. Nonetheless, I invite the\nPostgres-on-Cygwin-isn't-worth-our-time camp to produce some benchmarks\nsupporting their position. I'm getting tired of reading unsubstantiated\nassertions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 12:31:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FW: Cygwin PostgreSQL Information and Suggestions "
},
{
"msg_contents": "> > 1. Cygwin's setup.exe supports categories and dependencies.\n> > Hence, there is no reason to install all Cygwin packages in\n> > order to ensure properly PostgreSQL operation. Someone just\n> > has to determine what is the minimal set of packages\n> > necessary for PostgreSQL and I will update the setup.hint\n> > accordingly. The current setup.hint is as follows:\n> >\n> > sdesc: \"PostgreSQL Data Base Management System\"\n> > category: Database\n> > requires: ash cygwin readline zlib libreadline5\n> >\n> > Sorry, but since I install all Cygwin packages plus about 30\n> > additional ones I haven't desire to determine what are the\n> > minimal requirements.\n\nIf no one else has done this, I'll be happy to dig in and answer this.\n\n> > 2. Cygwin's setup.exe is customizable. There is a tool\n> > called \"upset\" that generates the setup.ini file that drives\n> > setup.exe. PostgreSQL could offer a customized setup. For\n> > example, this is what the XEmacs folks are doing.\n\nThis is a great start to a more Win-feeling PG.\n\n> > 3. Cygwin's setup.exe can run package specific postinstall\n> > scripts during the installation. Hence, someone could\n> > automate the steps enumerated (e.g., postmaster NT service\n> > installation, initdb, etc.) in my README:\n> >\n>\n> http://www.tishler.net/jason/software/postgresql/postgresql-7.2.1.README\n\nThis is a great document. I had missed this before.\n\n>\n> Specifically, he indicates the following:\n>\n> BTW, Up through 40 users, PostgreSQL under CYGWIN using the TPC-C\n> benchmark performed very much the same as Linux PostgreSQL on the\n> exact hardware.\n\nInteresting. Does anyone that has mentioned poor performance on cygwin have\nany numbers to dispute this?\n\n> Jason\n\nThanks for the info, and thanks for your work on the PG + cygwin stuff!\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Fri, 10 May 2002 12:43:16 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: FW: Cygwin PostgreSQL Information and Suggestions"
},
{
"msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Friday, May 10, 2002 12:31 PM\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org; Jason@tishler.net\n> Subject: Re: [HACKERS] FW: Cygwin PostgreSQL Information and Suggestions\n>\n>\n>\n> \"Dave Page\" <dpage@vale-housing.co.uk> forwards:\n> > 4. Cygwin PostgreSQL is perceived to have poor performance. I have\n> > never done any benchmarks regarding this issue, but apparently Terry\n> > Carlin (from the defunct Great Bridge) did:\n>\n> > http://archives.postgresql.org/pgsql-cygwin/2001-08/msg00029.php\n>\n> > Specifically, he indicates the following:\n>\n> > BTW, Up through 40 users, PostgreSQL under CYGWIN using the TPC-C\n> > benchmark performed very much the same as Linux PostgreSQL on the\n> > exact hardware.\n>\n> It should be noted that the benchmark Terry is describing fires up\n> N concurrent backends and then measures the runtime for a specific query\n> workload. So it's not measuring connection startup time, which is\n> alleged by some to be Cygwin's weak spot. Nonetheless, I invite the\n> Postgres-on-Cygwin-isn't-worth-our-time camp to produce some benchmarks\n> supporting their position. I'm getting tired of reading unsubstantiated\n> assertions.\n\n... and it's worth remembering, too, that for some cases, connect time is\ncompletely unimportant: most of my work against PG is using shared,\npersistent connections from a web app (Zope); it could take 20 mins to make\nthe initial connection and I'd still be happy. (Note to hackers: do not\nimplement this 20min connect, though. :) )\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Fri, 10 May 2002 12:51:41 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: FW: Cygwin PostgreSQL Information and Suggestions "
},
{
"msg_contents": "Joel,\n\nOn Fri, May 10, 2002 at 12:43:16PM -0400, Joel Burton wrote:\n> > > Sorry, but since I install all Cygwin packages plus about 30\n> > > additional ones I haven't desire to determine what are the\n> > > minimal requirements.\n> \n> If no one else has done this, I'll be happy to dig in and answer this.\n\nSaravanan Bellan has provided a file list, but not a package list.\nWould you be willing to do the conversion? See the following:\n\n http://archives.postgresql.org/pgsql-cygwin/2002-05/msg00030.php\n\n> > http://www.tishler.net/jason/software/postgresql/postgresql-7.2.1.README\n\n> This is a great document. I had missed this before.\n\nI'm glad that you found the above useful.\n\n> Thanks for the info, and thanks for your work on the PG + cygwin stuff!\n\nYou are very welcome.\n\nBTW, I'm on pgsql-hackers now -- YAML, sigh... If you want to get my\nattention, just make sure \"cygwin\" is in the subject and procmail will\ndo its magic. :,)\n\nJason\n",
"msg_date": "Sun, 12 May 2002 10:34:37 -0400",
"msg_from": "Jason Tishler <jason@tishler.net>",
"msg_from_op": false,
"msg_subject": "Re: FW: Cygwin PostgreSQL Information and Suggestions"
}
] |
[
{
"msg_contents": "Hi,\n\nI have two tables, one has 25000 rows and the other\nhas 6.5 million rows.\n\n(25000 rows)\ntable1\n(id text,\nstart int,\nstop int)\n\nwith seperate index on three individual fiels.\n\n6.5 million rows\ntable2\n(id text,\nstart int,\nstop int)\n\nwith seperate index on three individual fields.\n\nWhen I query this two table and try to find overlaped\nrecords, I have used this query:\n\n**************************************************************************************************\nselect count(distinct(table1.id))\nfrom table1, table2\nwhere table1.id=table2.id\nand ( (table2.start>=table1.start and table2.start <=\ntable1.stop)\n or\n (table2.start <= table1.start and table1.start <=\ntable2.stop) );\n***************************************************************************************************\n\nwhen I do a explain, I got this back:\n\n************************************************************************************************\nAggregate (cost=353859488.21..353859488.21 rows=1\nwidth=78)\n -> Merge Join (cost=1714676.02..351297983.38\nrows=1024601931 width=78)\n -> Index Scan using genescript_genomseqid on\ngenescript (cost=0.00..750.35 rows=25115 width=62)\n -> Sort (cost=1714676.02..1714676.02\nrows=6801733 width=16)\n -> Seq Scan on mouseblathuman \n(cost=0.00..153685.33 rows=6801733 width=16)\n\nEXPLAIN\n*************************************************************************************************\n\nMy question is: 1) Why the query start a seq scan on\na much bigger table from beginning? I think it should\nstart to scan the first table and use index for the\nbigger table.\n 2) The query itself takes\nforever, is there a way to speed up it?\n 3) Does this has anything to\ndo with query planner?\n\nThis is kind of a urgent project, so your prompt help\nis greatly appreciated. Thanks.\n\nJim\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! 
Shopping - Mother's Day is May 12th!\nhttp://shopping.yahoo.com\n",
"msg_date": "Fri, 10 May 2002 11:04:17 -0700 (PDT)",
"msg_from": "large scale <largescale_1999@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Join of small table with large table"
},
{
"msg_contents": "The big problem with the query probably isn't the scans due to your\ndataset and the way indexes work.\nI'm actually rather surprised it chose an index in the smaller table.\n\nIt's the agregate thats taking the time. Which means, faster CPU or\nsimpler aggregate will do the trick. Ie. Do you really need that\nDISTINCT part?\n\n--\nRod\n----- Original Message -----\nFrom: \"large scale\" <largescale_1999@yahoo.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Friday, May 10, 2002 2:04 PM\nSubject: [HACKERS] Join of small table with large table\n\n\n> Hi,\n>\n> I have two tables, one has 25000 rows and the other\n> has 6.5 million rows.\n>\n> (25000 rows)\n> table1\n> (id text,\n> start int,\n> stop int)\n>\n> with seperate index on three individual fiels.\n>\n> 6.5 million rows\n> table2\n> (id text,\n> start int,\n> stop int)\n>\n> with seperate index on three individual fields.\n>\n> When I query this two table and try to find overlaped\n> records, I have used this query:\n>\n>\n**********************************************************************\n****************************\n> select count(distinct(table1.id))\n> from table1, table2\n> where table1.id=table2.id\n> and ( (table2.start>=table1.start and table2.start <=\n> table1.stop)\n> or\n> (table2.start <= table1.start and table1.start <=\n> table2.stop) );\n>\n**********************************************************************\n*****************************\n>\n> when I do a explain, I got this back:\n>\n>\n**********************************************************************\n**************************\n> Aggregate (cost=353859488.21..353859488.21 rows=1\n> width=78)\n> -> Merge Join (cost=1714676.02..351297983.38\n> rows=1024601931 width=78)\n> -> Index Scan using genescript_genomseqid on\n> genescript (cost=0.00..750.35 rows=25115 width=62)\n> -> Sort (cost=1714676.02..1714676.02\n> rows=6801733 width=16)\n> -> Seq Scan on mouseblathuman\n> (cost=0.00..153685.33 rows=6801733 
width=16)\n>\n> EXPLAIN\n>\n**********************************************************************\n***************************\n>\n> My question is: 1) Why the query start a seq scan on\n> a much bigger table from beginning? I think it should\n> start to scan the first table and use index for the\n> bigger table.\n> 2) The query itself takes\n> forever, is there a way to speed up it?\n> 3) Does this has anything to\n> do with query planner?\n>\n> This is kind of a urgent project, so your prompt help\n> is greatly appreciated. Thanks.\n>\n> Jim\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Yahoo! Shopping - Mother's Day is May 12th!\n> http://shopping.yahoo.com\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Mon, 13 May 2002 10:11:21 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Join of small table with large table"
},
{
"msg_contents": "Of course, something else you may want to do is is allow postgresql to\nuse a whack load more sort space in ram -- assumming you have ram\nfree.\n\nIts probably hitting the disk alot for temporary storage space.\n\nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/\nhttp://www.argudo.org/postgresql/soft-tuning.html\n\n\n--\nRod\n----- Original Message -----\nFrom: \"large scale\" <largescale_1999@yahoo.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Friday, May 10, 2002 2:04 PM\nSubject: [HACKERS] Join of small table with large table\n\n\n> Hi,\n>\n> I have two tables, one has 25000 rows and the other\n> has 6.5 million rows.\n>\n> (25000 rows)\n> table1\n> (id text,\n> start int,\n> stop int)\n>\n> with seperate index on three individual fiels.\n>\n> 6.5 million rows\n> table2\n> (id text,\n> start int,\n> stop int)\n>\n> with seperate index on three individual fields.\n>\n> When I query this two table and try to find overlaped\n> records, I have used this query:\n>\n>\n**********************************************************************\n****************************\n> select count(distinct(table1.id))\n> from table1, table2\n> where table1.id=table2.id\n> and ( (table2.start>=table1.start and table2.start <=\n> table1.stop)\n> or\n> (table2.start <= table1.start and table1.start <=\n> table2.stop) );\n>\n**********************************************************************\n*****************************\n>\n> when I do a explain, I got this back:\n>\n>\n**********************************************************************\n**************************\n> Aggregate (cost=353859488.21..353859488.21 rows=1\n> width=78)\n> -> Merge Join (cost=1714676.02..351297983.38\n> rows=1024601931 width=78)\n> -> Index Scan using genescript_genomseqid on\n> genescript (cost=0.00..750.35 rows=25115 width=62)\n> -> Sort (cost=1714676.02..1714676.02\n> rows=6801733 width=16)\n> -> Seq Scan on mouseblathuman\n> (cost=0.00..153685.33 rows=6801733 
width=16)\n>\n> EXPLAIN\n>\n**********************************************************************\n***************************\n>\n> My question is: 1) Why the query start a seq scan on\n> a much bigger table from beginning? I think it should\n> start to scan the first table and use index for the\n> bigger table.\n> 2) The query itself takes\n> forever, is there a way to speed up it?\n> 3) Does this has anything to\n> do with query planner?\n>\n> This is kind of a urgent project, so your prompt help\n> is greatly appreciated. Thanks.\n>\n> Jim\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Yahoo! Shopping - Mother's Day is May 12th!\n> http://shopping.yahoo.com\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Mon, 13 May 2002 10:17:57 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Join of small table with large table"
},
{
"msg_contents": "large scale <largescale_1999@yahoo.com> writes:\n> Aggregate (cost=353859488.21..353859488.21 rows=1\n> width=78)\n> -> Merge Join (cost=1714676.02..351297983.38\n> rows=1024601931 width=78)\n> -> Index Scan using genescript_genomseqid on\n> genescript (cost=0.00..750.35 rows=25115 width=62)\n> -> Sort (cost=1714676.02..1714676.02\n> rows=6801733 width=16)\n> -> Seq Scan on mouseblathuman \n> (cost=0.00..153685.33 rows=6801733 width=16)\n\nThat plan seems odd to me too. Have you done VACUUM ANALYZE on these\ntables?\n\nI would think that a hash join would be preferable. You might need to\nincrease the SORT_MEM parameter to let the whole smaller table be\nstuffed into memory before the planner will think so, though.\nTry setting it to 10000 or so (ie, 10 MB).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 10:48:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Join of small table with large table "
},
{
"msg_contents": "On Fri, 10 May 2002, large scale wrote:\n\n> Hi,\n>\n> I have two tables, one has 25000 rows and the other\n> has 6.5 million rows.\n>\n> (25000 rows)\n> table1\n> (id text,\n> start int,\n> stop int)\n>\n> with seperate index on three individual fiels.\n>\n> 6.5 million rows\n> table2\n> (id text,\n> start int,\n> stop int)\n>\n> with seperate index on three individual fields.\n\nWe'll start with the standard questions: Have you\nvacuum analyzed? What version are you running? (if\nit's less than 7.2, you may want to see about\nupgrading) If you do a set enable_seqscan=false;\nwhat does the explain show then? I'd be interested\nin knowing if 1024601931 is even remotely a valid number\nof rows from that join as well (which is about\n.5% of an entire cartesian join if my math is right).\n\nPerhaps some exists style thing would be faster since\nthat would at least presumably be able to stop when\nit found a matching table2 row for a particular table1\nid.\n\n",
"msg_date": "Mon, 13 May 2002 09:09:39 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Join of small table with large table"
}
] |
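Stephan's EXISTS suggestion at the end of the thread above can be illustrated on toy data. This sketch uses SQLite via Python's `sqlite3` purely to be self-contained (it says nothing about the PostgreSQL planner), and the simplified overlap predicate is equivalent to the original OR form only under the assumption that start <= stop in every row.

```python
# Hypothetical illustration: count(DISTINCT table1.id) over the overlap join
# can be rewritten with EXISTS, which lets execution stop at the first
# matching table2 row per table1 row instead of producing every join pair.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id TEXT, start INT, stop INT);
CREATE TABLE table2 (id TEXT, start INT, stop INT);
INSERT INTO table1 VALUES ('a', 10, 20), ('b', 30, 40), ('c', 50, 60);
INSERT INTO table2 VALUES ('a', 15, 25), ('a', 18, 22), ('b', 100, 200);
CREATE INDEX t2_id ON table2 (id);
""")

join_q = """
SELECT count(DISTINCT table1.id)
FROM table1, table2
WHERE table1.id = table2.id
  AND ((table2.start >= table1.start AND table2.start <= table1.stop)
    OR (table2.start <= table1.start AND table1.start <= table2.stop))
"""
exists_q = """
SELECT count(*)
FROM table1
WHERE EXISTS (SELECT 1 FROM table2
              WHERE table2.id = table1.id
                AND table2.start <= table1.stop    -- standard interval-overlap
                AND table1.start <= table2.stop)   -- test, given start <= stop
"""
print(con.execute(join_q).fetchone()[0])    # only id 'a' overlaps
print(con.execute(exists_q).fetchone()[0])  # same answer, with early exit
```

The EXISTS form also removes the need for DISTINCT, which Rod identified above as part of the cost.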
[
{
"msg_contents": "On Sat, 2002-05-11 at 02:25, Peter Eisentraut wrote:\n> The remaining issue is the sort order. I think this can be solved for\n> practical purposes by creating two expected files for each affected test,\n> say char.out and char-locale.out. The regression test driver would try\n> the first one, if that fails try the second one.\n> \n> The assumption here is that all locales will choose the same sort order as\n> long as they're dealing only with the core 26 letters. This does not have\n> to be true in theory, but I think it works for the vast majority of\n> practical cases.\n\nThe et_EE locale has the following order for the \"core 26 letters\" (each _\nstands for other letters):\n\nABCDEFGHIJKLMNOPQRS_Z_TUVW____XY (notice the position of Z)\n\nand I'm not sure if V and W are distinguished when sorting words that\nhave anything after them.\n\nI've heard that some other locales have other weird behaviours\n(like sorting one or two of the same letter as equivalent).\n\n------------\nHannu\n\n\n",
"msg_date": "11 May 2002 00:59:31 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: Making the regression tests locale-proof"
},
{
"msg_contents": "Since locale support is now enabled by default, it is desirable that the\nregression tests can pass if the cluster's locale is not C.\n\nAs a first step I have included the following statements in pg_regress\nright after the database is created:\n\nalter database \"$dbname\" set lc_messages to 'C';\nalter database \"$dbname\" set lc_monetary to 'C';\nalter database \"$dbname\" set lc_numeric to 'C';\nalter database \"$dbname\" set lc_time to 'C';\n\nThis gets rid of a boatload of failures related to number formatting.\nFor that purpose I have changed the permissions on these options to\nUSERSET. (I'm still debating making lc_messages SUSET, because otherwise\nusers can screw with admins by changing the language of the log output all\nthe time. Comments?)\n\nThe remaining issue is the sort order. I think this can be solved for\npractical purposes by creating two expected files for each affected test,\nsay char.out and char-locale.out. The regression test driver would try\nthe first one, if that fails try the second one.\n\nThe assumption here is that all locales will choose the same sort order as\nlong as they're dealing only with the core 26 letters. This does not have\nto be true in theory, but I think it works for the vast majority of\npractical cases.\n\nWe could also cut down the number of affected tests by making the\nselect_implicit and select_having not use mixed-case strings in the test\ntables. Then we have only char, varchar, and select_views left.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 10 May 2002 23:25:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Making the regression tests locale-proof"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> The assumption here is that all locales will choose the same sort order as\n> long as they're dealing only with the core 26 letters. This does not have\n> to be true in theory, but I think it works for the vast majority of\n> practical cases.\n\n\nNot for uppercase vs. lowercase versions of them.\n\nWith no locale used (straight ASCII), you get A C b, with a locale\nyou'll get A b C.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "10 May 2002 21:44:12 +0000",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Making the regression tests locale-proof"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> For that purpose I have changed the permissions on these options to\n> USERSET. (I'm still debating making lc_messages SUSET, because otherwise\n> users can screw with admins by changing the language of the log output all\n> the time. Comments?)\n\nHm. Don't the regression tests already assume they are run by the\nsuperuser? They've got create/drop user commands in them. So I'd\nsay SUSET is fine from the point of view of the tests, and I agree\nwith your concern about making the logs unreadable.\n\n> The assumption here is that all locales will choose the same sort order as\n> long as they're dealing only with the core 26 letters.\n\nNope. For instance, on HPUX I get this sort order in English:\n\n$ LANG=en_US.iso88591 sort testll\neix\nela\nella\nellm\nelm\neln\nenx\n\nand this in Spanish:\n\n$ LANG=es_ES.iso88591 sort testll\neix\nela\nelm\neln\nella\nellm\nenx\n\nbecause the Spanish treat LL as a single collating element. (Actually,\nmy very-rusty recollection is that they sort LL the same as one L, which\nwould mean that HPUX's behavior is not quite right here: it's treating\nLL as one symbol that sorts after L. Linux seems to have no clue that\nLL is special at all though...)\n\n> We could also cut down the number of affected tests by making the\n> select_implicit and select_having not use mixed-case strings in the test\n> tables. Then we have only char, varchar, and select_views left.\n\nIn practice we could perhaps use test data that doesn't hit any of the\nspecial cases in the popular languages. But I wonder whether this would\nnot be shirking our responsibility as testers. Seems like if you avoid\nexercising these kinds of cases, you avoid finding corner-case bugs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 19:23:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making the regression tests locale-proof "
},
{
"msg_contents": "Tom Lane escribió: \n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n\n> > The assumption here is that all locales will choose the same sort order as\n> > long as they're dealing only with the core 26 letters.\n> \n> Nope. For instance, on HPUX I get this sort order in English:\n[...]\n\n> because the Spanish treat LL as a single collating element. (Actually,\n> my very-rusty recollection is that they sort LL the same as one L, which\n> would mean that HPUX's behavior is not quite right here: it's treating\n> LL as one symbol that sorts after L. Linux seems to have no clue that\n> LL is special at all though...)\n\nHPUX's behaviour is broken, because in Spanish LL (as well as CH)\nstopped being a special symbol some five years ago (it used to be\ntreated as one collating element sorted after \"L\", so HPUX behaviour was\nright then).\n\n\n> > We could also cut down the number of affected tests by making the\n> > select_implicit and select_having not use mixed-case strings in the test\n> > tables. Then we have only char, varchar, and select_views left.\n\nMaybe it would be better to prepare various results, one for each of a\nsubset of the locales supported (C, en_EN, some other \"western\" and\nmaybe a couple multibyte?). That way at least you make sure the C\nlibrary is working as expected.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)\n\n",
"msg_date": "Fri, 10 May 2002 21:06:51 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Making the regression tests locale-proof "
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> HPUX's behaviour is broken, because in spanish LL (as well as CH)\n> stopped being a special symbol some five years ago (it used to be\n> treated as one collating element sorted after \"L\", so HPUX behaviour was\n> right then).\n\nWell, this is an old release ;-) ... the localedef files are dated\naround 1996. (And you don't want to know how long it's been since\nI could speak passable Spanish.)\n\nIn any case, the fact that the official rules have changed does not\ninvalidate my point: there are systems on which the assumption Peter\nwants to make will fail.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 May 2002 21:19:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making the regression tests locale-proof "
},
{
"msg_contents": "Tom Lane writes:\n\n> In practice we could perhaps use test data that doesn't hit any of the\n> special cases in the popular languages. But I wonder whether this would\n> not be shirking our responsibility as testers. Seems like if you avoid\n> exercising these kinds of cases, you avoid finding corner-case bugs.\n\nThere is a locale test suite under src/test/locale, which isn't very well\nknown currently. There we can test the collation order in the wildest\nextremes for any particular locale. For the main test suite, I think we\ncan boldly assume that if sorting works at all then it would also work\nequally well if more complicated strings were substituted, since the\nactual collating isn't done by us anyway.\n\nWhat I'm thinking now is to simply collect a number of possible results\nand store expected files char_0.out, char_1.out, etc. and have the driver\ntry all of these, basically meaning \"any of these may be right\".\n\nThe alternative I had in the back of my head was to query the locale and\nprepare files char_en.out, char_de.out, etc. but as you showed, we can't\nrely on these locales working in a particular way.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 12 May 2002 17:46:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Making the regression tests locale-proof "
}
] |
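The two orderings Tom posted differ only in whether the C library treats "ll" as a single collating element that orders after plain "l". A minimal sketch reproduces both of his orderings from the same word list with no installed locales required; the `collation_key` helper is hypothetical illustration code, not how libc actually implements collation:

```python
def collation_key(word):
    """Key for the traditional (pre-1994) Spanish rule, under which
    "ll" is one collating element sorted after "l" -- the rule the old
    HP-UX localedef files still implement."""
    key = []
    i = 0
    while i < len(word):
        if word[i:i+2] == "ll":
            key.append("l\xff")   # one element, ordered after plain "l"
            i += 2
        else:
            key.append(word[i])
            i += 1
    return key

words = ["eix", "ela", "ella", "ellm", "elm", "eln", "enx"]

plain = sorted(words)                       # what C / en_US gives
spanish = sorted(words, key=collation_key)  # what es_ES on HP-UX gives
```

The sketch makes the testing point concrete: any expected-output file containing "ll", "ch", accents, or mixed case is hostage to which collation rules (and which vintage of them) the host's C library ships.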
[
{
"msg_contents": "Hi everybody,\n I'm a hookie in this discussion list. Well, my intent is to get some informations about\nPostgreSQL internals to work on a project. There is an excellent GPL'ed tool to work with Oracle\ncalled TOra. It is as good as TOAD and SQL Navigator from Quest Software. As a meaning of\ncollaborate with the Open Source world i was thinking in port TOra to PostgreSQL. So, we'll have a\ngreat database and a great tool to manage it. \n Problem is: reading PostgreSQL documentation i didn't find any information about system tables\nhaving runtime informations as Oracle has. And one of the great features of TOra is the\npossibility to see in charts, in real-time, all kind of I/O operations, memory usage, queries\nbeing executed, etc...\n If i didn't make myself clear, please point your browser to http://www.globecom.se/tora/\nand see what i am suggesting to adapt to PostgreSQL.\n I hope i did not disturb anybody here.\n And, keep doing your great job. We are in debt with you guys!\n\nBest regards,\n Daniel.\n\n \n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Shopping - Mother's Day is May 12th!\nhttp://shopping.yahoo.com\n",
"msg_date": "Fri, 10 May 2002 13:42:21 -0700 (PDT)",
"msg_from": "\"Daniel H. F. e Silva\" <dhfs@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Monitoring backend activities"
},
{
"msg_contents": "> Hi everybody,\n> I'm a hookie in this discussion list. Well, my intent is to get some\n> informations about PostgreSQL internals to work on a project. There\n> is an excellent GPL'ed tool to work with Oracle called TOra. It is as\n> good as TOAD and SQL Navigator from Quest Software. As a meaning of\n> collaborate with the Open Source world i was thinking in port TOra to\n> PostgreSQL. So, we'll have a great database and a great tool to manage\n> it.\n\nI think that would be \"rookie;\" the term \"hookie\" refers to what you're \n\"playing\" if you skip school.\n\n> Problem is: reading PostgreSQL documentation i didn't find any\n> information about system tables having runtime informations as Oracle\n> has. And one of the great features of TOra is the possibility to see\n> in charts, in real-time, all kind of I/O operations, memory usage,\n> queries being executed, etc...\n\nThe only problem I see is that TOra already seems quite well supported for \nPostgreSQL. I'm running it at the moment, and it works quite well...\n--\n(concatenate 'string \"cbbrowne\" \"@cbbrowne.com\")\nhttp://www.cbbrowne.com/info/lsf.html\n\"Put simply, the antitrust laws in this country are basically a joke,\nprotecting us just enough to not have to re-name our park service the\nPhillip Morris National Park Service.\" \n-- Courtney Love, Salon.com, June 14, 2000\n\n-- \n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://www.cbbrowne.com/info/rdbms.html\nRules of the Evil Overlord #220. \"Whatever my one vulnerability is, I\nwill fake a different one. For example, ordering all mirrors removed\nfrom the palace, screaming and flinching whenever someone accidentally\nholds up a mirror, etc. 
In the climax when the hero whips out a mirror\nand thrusts it at my face, my reaction will be ``Hmm...I think I need\na shave.''\" <http://www.eviloverlord.com/>\n\n\n\n-- \n(reverse (concatenate 'string \"moc.enworbbc@\" \"sirhc\"))\nhttp://www.cbbrowne.com/info/linuxxian.html\nAs of next Monday, MACLISP will no longer support list structure.\nPlease downgrade your programs.",
"msg_date": "Fri, 10 May 2002 17:40:03 -0400",
"msg_from": "cbbrowne@cbbrowne.com",
"msg_from_op": false,
"msg_subject": "Re: Monitoring backend activities "
}
] |
[
{
"msg_contents": "Hello everybody,\n\nI personally had enough discussions during the last few days. I want to\nthank everybody who expressed their opinion and to remind you all that we\nstarted as a small group of four people who actively use and patch pgaccess\nnow, asked by Teo to see what we can do about bringing our patches together.\n\nI do not feel like moderating a pgaccess war.\n\nMy opinion is that some of the last postings are by people who have not\nfollowed the whole discussion from the start, and also some go into\nproblems - too distant for the current fragile day.\n\nWith this message I would like to invite all people who have fresh patches\n(newer than one year) and all people who are interested in working on and/or\nsupporting pgaccess now (during the next few weeks) to send a signal during\nthe weekend.\n\nAnd all the rest to be quiet for a while.\n\nI want on Monday to proceed and to merge at least the code we produced in\nour company with the latest pgaccess that can be found and with the code of\nChris and Bartus.\n\nAnd to see if and how we (Chris, Bartus, Boyan, Teo and myself, plus all\nactive people who send a signal) can work together.\n\nI think it is clear that pgaccess is for PostgreSQL, that nobody wants to\ntake it away or to kill it.\n\nAfter we do this small step next week and 'show some code' to Peter\nEisentraut, we can think how best to make it available for the PostgreSQL\nusers.\n\nAnd how to continue.\n\nI think some of you want too much from a half-asleep project. Let it wake\nup, let it see on which earth it is. And let it then decide.\n\nThanks,\n\nIavor\n\n--\nwww.pgaccess.org\n\n",
"msg_date": "Sat, 11 May 2002 00:34:43 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "pgaccess.org - invitation for a working meeting"
},
{
"msg_contents": "Iavor Raytchev wrote:\n> I want on Monday to proceed and to merge at least the code we produced in\n> our company with the latest pgaccess that can be found and with the code of\n> Chris and Bartus.\n> \n> And to see if and how we (Chris, Bartus, Boyan, Teo and myself, plus all\n> active people who send a signal) can work together.\n> \n> I think it is clear that pgaccess is for PostgreSQL, that nobody wants to\n> take it away or to kill it.\n> \n> After we do this small step next week and 'show some code' to Peter\n> Eisentraut, we can think how is best to make it available for the PostgreSQL\n> users.\n\n[ Just catching up.]\n\nI think we have a few options. You can maintain both your current\npgaccess source and previous version CVS (if you wish) using the\nPostgreSQL CVS tree. Any changes you commit will be released\nautomatically as part of PostgreSQL. You will have commit privileges\nfor PostgreSQL CVS so you will have control over the changes. (The jdbc\ngroup already does this successfully.) Other PostgreSQL developers\nwill also have access to the source tree and hopefully will make changes\nto keep pgaccess in sync with backend changes. You would not be\nrequired to get approval for the patches you apply, but we do ask you to\nrestrict your changes to the pgaccess subdirectory unless you discuss\nnon-pgaccess changes on hackers.\n\nAnother option is to return to the old way we did it, where I grabbed\nthe most recent pgaccess version when it was released and updated the\nPostgreSQL CVS.\n\nThe choice is yours.\n\nAlso, we can make our ftp distribution and mailing lists available if\nthey would be of value to you.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 4 Jun 2002 14:51:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess.org - invitation for a working meeting"
},
{
"msg_contents": "Folks,\n\nIf I might just say ... I think it's phenomenal that somebody has\nstarted working on the PGAccess code again and I really look forward to\ntesting (and documenting!) your work.\n\nIf you don't read the Novice list, new Postgres users have been dying\nfor a PGAccess update for the last year. Your contribution will be\nwidely appreciated by Postgres users everywhere.\n\n-Josh Berkus\n PostgreSQL/Techdocs Writer\n",
"msg_date": "Wed, 05 Jun 2002 08:34:37 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess.org - invitation for a working meeting"
},
{
"msg_contents": "Thanks Josh,\n\nGood, encouraging words. There are two mailing lists - please feel free to\nmake them popular -\n\ndevelopers@pgaccess.org\n users@pgaccess.org\n\nBoth are run on qmail/ezmlm; for help send a blank e-mail to -\n\ndevelopers-help@pgaccess.org\n users-help@pgaccess.org\n\nThe lists are not moderated. Please, ONLY people who develop or would like\nto develop subscribe to 'developers'. On 'users' we'll be happy to read what\npeople expect from pgaccess.\n\nThere will be bugzilla soon on bugzilla.pgaccess.org\n\nThe current stage is - we are about to merge the three major (known)\npatches to pgaccess done by Bartus, Chris and Boyan with the latest known\nversion of pgaccess. Tacho (tacho@verysmall.org) will be the release\nengineer in the beginning. This should happen this week or so (Chris is\nabout to submit his work).\n\nTo repeat again - we all use, like and admire PostgreSQL. And pgaccess. The\nonly reason to set up a separate community for it is that we believe\nit needs a bit more air.\n\npgaccess has no meaning outside PostgreSQL. We do not try to steal it or\ntake it over,... One day I should write a page on how the whole thing started,\nso that I do not have to explain it every time.\n\nAll best to everyone,\n\nIavor\n\nPS Josh, testing and documenting are two good activities. What we urgently\nneed is a dedicated release engineer - Tacho will do this in the beginning\nbut he can not promise long term commitment. And Teo is busy right now.\n\n> -----Original Message-----\n> From: Josh Berkus [mailto:josh@agliodbs.com]\n> Sent: Mittwoch, 05. Juni 2002 17:35\n> To: Bruce Momjian; Iavor Raytchev\n> Cc: Tom Lane; Thomas Lockhart; Stanislav Grozev; Ross J. Reedstrom;\n> Nigel J. Andrews; Marc G. Fournier; Constantin Teodorescu; Cmaj; Boyan\n> Filipov; Boyan Dzambazov; Bartus. L; Brett Schwarz; pgsql-hackers\n> Subject: Re: [HACKERS] pgaccess.org - invitation for a working meeting\n>\n>\n> Folks,\n>\n> If I might just say ... I think it's phenomenal that somebody has\n> started working on the PGAccess code again and I really look forward to\n> testing (and documenting!) your work.\n>\n> If you don't read the Novice list, new Postgres users have been dying\n> for a PGAccess update for the last year. Your contribution will be\n> widely appreciated by Postgres users everywhere.\n>\n> -Josh Berkus\n> PostgreSQL/Techdocs Writer\n\n",
"msg_date": "Wed, 5 Jun 2002 20:26:47 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "Re: pgaccess.org - invitation for a working meeting"
},
{
"msg_contents": "Iavor Raytchev wrote:\n> pgaccess has no meaning outside PostgreSQL. We do not try to steal it or\n> take it over,... One day I should write a page how the whole things started,\n> so that I do not have to explain it every time.\n\nI have not heard --- where are you going to keep the master CVS?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Jun 2002 20:55:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess.org - invitation for a working meeting"
}
] |
[
{
"msg_contents": "In the Documentation section of www.pgaccess.org I am trying to build a\nsmall sub-section where I list with few words what people use pgaccess for.\n\nPlease, drop a line if you have been using it for something useful.\n\nPlease, drop a line if you could not use it for something due to\nbugs/missing features.\n\nThanks,\n\nIavor\n\n--\nwww.pgaccess.org\n\n",
"msg_date": "Sat, 11 May 2002 01:49:57 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": true,
"msg_subject": "what do people use pgaccess for?"
},
{
"msg_contents": "\nOkay, this, and any other thread, concerning pgaccess, should be moved to\n-interfaces .. not sure why it was ever on -hackers, actually, but that is\nneither here-nor-there ...\n\nOn Sat, 11 May 2002, Iavor Raytchev wrote:\n\n> In the Documentation section of www.pgaccess.org I am trying to build a\n> small sub-section where I list with few words what people use pgaccess for.\n>\n> Please, drop a line if you have been using it for something useful.\n>\n> Please, drop a line if you could not use it for something due to\n> bugs/missing features.\n>\n> Thanks,\n>\n> Iavor\n>\n> --\n> www.pgaccess.org\n>\n>\n\n",
"msg_date": "Fri, 10 May 2002 20:57:06 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: what do people use pgaccess for?"
}
] |
[
{
"msg_contents": "\nHi Again,\n\n10:22am up 15:06, 1 user, load average: 9.02, 9.02, 8.98\n85 processes: 73 sleeping, 1 running, 11 zombie, 0 stopped\nCPU states: 0.0% user, 0.4% system, 0.0% nice, 99.4% idle\nMem: 1028484K av, 1017488K used, 10996K free, 0K shrd, 8996K buff\nSwap: 971004K av, 240344K used, 730660K free 760208K \ncached\n\nOn my postgresql server the load average is very high but the cpu is 99.4% idle.\n\nThis is not strictly a pgsql issue, but can anyone tell me how I can\nfind what is loading my server so heavily?\n\nregds\nMallah.\n\n\n\n\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n",
"msg_date": "Sat, 11 May 2002 10:07:47 +0530",
"msg_from": "\"Rajesh Kumar Mallah.\" <mallah@trade-india.com>",
"msg_from_op": true,
"msg_subject": "Very high load average but no cpu utilization ?"
},
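A load average near 9 with a 99% idle CPU usually means processes parked in uninterruptible sleep (state D, typically stuck disk or NFS I/O), which Linux counts toward the load average just like runnable processes. A rough, Linux-only sketch (the `process_states` helper is hypothetical; it assumes the documented /proc/<pid>/stat layout) that tallies process states the way top's summary line does:

```python
import os

def process_states():
    """Count processes by state letter (R, S, D, Z, ...) via /proc.

    Linux-specific: the state is the field right after the ')' that
    closes the command name in /proc/<pid>/stat.
    """
    counts = {}
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/stat" % pid) as f:
                # comm may contain spaces, so split after the ')' instead
                state = f.read().rsplit(")", 1)[1].split()[0]
        except OSError:          # process exited while we were looking
            continue
        counts[state] = counts.get(state, 0) + 1
    return counts

# Guard so the sketch is a no-op on systems without /proc.
states = process_states() if os.path.isdir("/proc") else {}
```

Many D-state entries alongside an idle CPU points at the disk (or at backends wedged in a system call), not at a CPU-bound query; Z entries are zombies waiting to be reaped.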
{
"msg_contents": "Hi,\n\nI am sorry to bother you people again and again, but I guess this\nis a bad patch for me which will soon pass ;-)\n\nMy postmaster is running but most of the backends are defunct,\nand on connecting I get the following error message:\n\n$ psql -h 130.94.22.209 -U tradein tradein_clients\npsql: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n[rmallah@server rmallah]$\n\n\nHow do I bring down the postmaster safely?\n\nps output is as below.\n\n[root@linux10320 root2]# ps auxwww| grep post\npostgres 1131 0.0 0.0 139424 4 ? D May1004/usr/local/pgsql/bin/postmaster\npostgres 1132 0.0 0.0 140412 4 ? D May10 0:13 postgres: stats buffer process\npostgres 1133 0.0 0.0 139576 4 ? S May10 0:18 postgres: stats collector process\npostgres 8046 0.0 0.0 238712 4 ? D 00:25 0:13 postgres: tradein tradein_clients 130.94.20.27 SELECT\npostgres 8089 0.0 0.0 139812 4 ? D 00:26 0:00 postgres: checkpoint subprocess\npostgres 11442 0.0 0.0 218152 4 ? D 04:25 0:03 postgres: tradein tradein_clients 130.94.20.27 SELECT\npostgres 15453 0.1 0.0 0 0 ? Z 08:17 0:09 [postmaster <defunct>]\npostgres 15455 0.0 0.0 0 0 ? Z 08:17 0:00 [postmaster <defunct>]\npostgres 15456 0.0 0.0 0 0 ? Z 08:18 0:00 [postmaster <defunct>]\npostgres 15457 0.0 0.0 0 0 ? Z 08:19 0:00 [postmaster <defunct>]\npostgres 15462 0.0 0.0 0 0 ? Z 08:20 0:01 [postmaster <defunct>]\npostgres 15463 0.0 0.0 0 0 ? Z 08:20 0:00 [postmaster <defunct>]\npostgres 15465 0.0 0.0 0 0 ? Z 08:21 0:01 [postmaster <defunct>]\npostgres 15466 0.0 0.0 0 0 ? Z 08:22 0:00 [postmaster <defunct>]\npostgres 15491 0.0 0.0 0 0 ? Z 08:24 0:00 [postmaster <defunct>]\npostgres 15494 0.0 0.0 0 0 ? Z 08:24 0:00 [postmaster <defunct>]\npostgres 15496 0.0 0.0 0 0 ? Z 08:24 0:00 [postmaster <defunct>]\npostgres 15510 0.2 10.2 238712 105008 ? 
D 08:25 0:20 postgres: tradein tradein_clients 130.94.20.27 SELECT\nroot 19268 0.0 0.0 1364 528 pts/1 S 10:42 0:00 grep post\n[root@linux10320 root2]#\n\n\n\nOn Saturday 11 May 2002 10:07 am, Rajesh Kumar Mallah. wrote:\n> Hi Again,\n>\n> 10:22am up 15:06, 1 user, load average: 9.02, 9.02, 8.98\n> 85 processes: 73 sleeping, 1 running, 11 zombie, 0 stopped\n> CPU states: 0.0% user, 0.4% system, 0.0% nice, 99.4% idle\n> Mem: 1028484K av, 1017488K used, 10996K free, 0K shrd, 8996K\n> buff Swap: 971004K av, 240344K used, 730660K free \n> 760208K cached\n>\n> In my postgresql server load avearge is very high but cpu is 99.4 % idle\n>\n> this is not strictly a pgsql issue but , can anyone tell me how can i\n> find what is loading my server heavily\n>\n>\n> regds\n> Mallah.\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n",
"msg_date": "Sat, 11 May 2002 10:35:05 +0530",
"msg_from": "\"Rajesh Kumar Mallah.\" <mallah@trade-india.com>",
"msg_from_op": true,
"msg_subject": "Further info : Very high load average but no cpu utilization ?"
},
{
"msg_contents": "\"Rajesh Kumar Mallah.\" <mallah@trade-india.com> writes:\n> [root@linux10320 root2]# ps auxwww| grep post\n> postgres 1131 0.0 0.0 139424 4 ? D May1004/usr/local/pgsql/bin/postmaster\n> postgres 1132 0.0 0.0 140412 4 ? D May10 0:13 postgres: stats buffer process\n> postgres 1133 0.0 0.0 139576 4 ? S May10 0:18 postgres: stats collector process\n> postgres 8046 0.0 0.0 238712 4 ? D 00:25 0:13 postgres: tradein tradein_clients 130.94.20.27 SELECT\n> postgres 8089 0.0 0.0 139812 4 ? D 00:26 0:00 postgres: checkpoint subprocess\n> postgres 11442 0.0 0.0 218152 4 ? D 04:25 0:03 postgres: tradein tradein_clients 130.94.20.27 SELECT\n> postgres 15453 0.1 0.0 0 0 ? Z 08:17 0:09 [postmaster <defunct>]\n> postgres 15455 0.0 0.0 0 0 ? Z 08:17 0:00 [postmaster <defunct>]\n> postgres 15456 0.0 0.0 0 0 ? Z 08:18 0:00 [postmaster <defunct>]\n> postgres 15457 0.0 0.0 0 0 ? Z 08:19 0:00 [postmaster <defunct>]\n> postgres 15462 0.0 0.0 0 0 ? Z 08:20 0:01 [postmaster <defunct>]\n\nI think your postmaster is stuck; it should have reaped those defunct\nsubprocesses instantly. Given that you also seem to have a stuck\ncheckpoint process (8 hours to run a checkpoint?) there is probably\nsomething hosed in the interprocess communication logic, but it's hard\nto guess what from this amount of info.\n\nAt this point probably your best bet is to kill all the running postgres\nprocesses (try SIGTERM first, then SIGKILL if that doesn't work) and\nlaunch a postmaster from a fresh start. Don't forget the ulimit this\ntime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 May 2002 11:59:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ? "
},
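The `[postmaster <defunct>]` entries in the ps listing are zombies: children that have exited but whose parent has not yet called wait() on them, which is why Tom says a healthy postmaster reaps them instantly. A minimal POSIX sketch of such a reap loop (Python for illustration only; the real postmaster does this from its SIGCHLD handler in C):

```python
import os
import time

def reap_children():
    """Non-blocking waitpid() loop: collect every already-exited child.

    This is the loop a SIGCHLD handler runs so that dead children never
    linger as <defunct> (zombie) process-table entries.
    """
    reaped = []
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:      # no children left at all
            break
        if pid == 0:                   # children exist, none exited yet
            break
        reaped.append(pid)
    return reaped

# Demo: fork a child that exits immediately, then reap it.
child = os.fork()
if child == 0:
    os._exit(0)                        # child side: vanish

reaped = []
deadline = time.time() + 5
while child not in reaped and time.time() < deadline:
    reaped += reap_children()
    time.sleep(0.01)
```

If the parent itself is wedged (as the stuck postmaster here apparently was), this loop never runs, and the zombies stay until the parent dies and init adopts them.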
{
"msg_contents": "Hi there,\n\nI have observed that it is nearly impossible to\nget rid of postmaster or backends by any signal\nwhen it decides not to quit.\n\nEven the OS( Linux rh62) refuses to reboot in such a situation.\nand my system admin had to power off the system ,\nthen fsck .... and stuff.\n\nbut this only happens when postmaster is stuck for \nsome reason , i feel filling up of postmasters log\nfile was the reason of my postmaster getting stuck.\n\nregds\nmallah.\n\n\n\n\nOn Saturday 11 May 2002 09:29 pm, Tom Lane wrote:\n> \"Rajesh Kumar Mallah.\" <mallah@trade-india.com> writes:\n> > [root@linux10320 root2]# ps auxwww| grep post\n> > postgres 1131 0.0 0.0 139424 4 ? D \n> > May1004/usr/local/pgsql/bin/postmaster postgres 1132 0.0 0.0 140412 \n> > 4 ? D May10 0:13 postgres: stats buffer process postgres \n> > 1133 0.0 0.0 139576 4 ? S May10 0:18 postgres: stats\n> > collector process postgres 8046 0.0 0.0 238712 4 ? D 00:25\n> > 0:13 postgres: tradein tradein_clients 130.94.20.27 SELECT postgres \n> > 8089 0.0 0.0 139812 4 ? D 00:26 0:00 postgres: checkpoint\n> > subprocess postgres 11442 0.0 0.0 218152 4 ? D 04:25 0:03\n> > postgres: tradein tradein_clients 130.94.20.27 SELECT postgres 15453 0.1\n> > 0.0 0 0 ? Z 08:17 0:09 [postmaster <defunct>]\n> > postgres 15455 0.0 0.0 0 0 ? Z 08:17 0:00\n> > [postmaster <defunct>] postgres 15456 0.0 0.0 0 0 ? Z \n> > 08:18 0:00 [postmaster <defunct>] postgres 15457 0.0 0.0 0 0 ?\n> > Z 08:19 0:00 [postmaster <defunct>] postgres 15462 0.0 0.0 \n> > 0 0 ? Z 08:20 0:01 [postmaster <defunct>]\n>\n> I think your postmaster is stuck; it should have reaped those defunct\n> subprocesses instantly. Given that you also seem to have a stuck\n> checkpoint process (8 hours to run a checkpoint?) 
there is probably\n> something hosed in the interprocess communication logic, but it's hard\n> to guess what from this amount of info.\n>\n> At this point probably your best bet is to kill all the running postgres\n> processes (try SIGTERM first, then SIGKILL if that doesn't work) and\n> launch a postmaster from a fresh start. Don't forget the ulimit this\n> time.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n",
"msg_date": "Sun, 12 May 2002 11:16:30 +0530",
"msg_from": "\"Rajesh Kumar Mallah.\" <mallah@trade-india.com>",
"msg_from_op": true,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ?"
},
{
"msg_contents": "On May 12, 2002 01:46 am, Rajesh Kumar Mallah. wrote:\n> Hi there,\n>\n> I have observed that it is nearly impossible to\n> get rid of postmaster or backends by any signal\n> when it decides not to quit.\n>\n> Even the OS( Linux rh62) refuses to reboot in such a situation.\n> and my system admin had to power off the system ,\n> then fsck .... and stuff.\n\nNot even kill -9 worked? I had that happen too but I thought it was a \nproblem with AIX. Kill -9 is supposed to kill any process. It can't be \ncaught. Is it possible that PostgreSQL is doing something that makes it that \nunkillable?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sun, 12 May 2002 07:05:37 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": false,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ?"
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> Not even kill -9 worked? I had that happen too but I thought it was a \n> problem with AIX. Kill -9 is supposed to kill any process. It can't be \n> caught. Is it possible that PostgreSQL is doing something that makes it that\n> unkillable?\n\nCould there be a kernel bug associated with processes that are trying to\nwrite past the 2Gb limit? The postmaster is certainly not doing\nanything deliberate to make itself unkillable, but on some platforms\nkill -9 will not work on processes that are wedged in a system call...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 May 2002 11:37:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ? "
},
{
"msg_contents": "Well,\n\nIt's advocated \"dont kill -9 the postmaster\" and I rarely do that.\n\nThe postmaster tends to be immortal,\nand it is *not* so only in the case when the postmaster is trying to\nwrite past the 2GB limit;\nI have only recently started logging the postmaster to that extent.\n\nregds\nmallah.\n\nOn Sunday 12 May 2002 09:07 pm, Tom Lane wrote:\n> \"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> > Not even kill -9 worked? I had that happen too but I thought it was a\n> > problem with AIX. Kill -9 is supposed to kill any process. It can't be\n> > caught. Is it possible that PostgreSQL is doing something that makes it\n> > that unkillable?\n>\n> Could there be a kernel bug associated with processes that are trying to\n> write past the 2Gb limit? The postmaster is certainly not doing\n> anything deliberate to make itself unkillable, but on some platforms\n> kill -9 will not work on processes that are wedged in a system call...\n>\n> \t\t\tregards, tom lane\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n",
"msg_date": "Mon, 13 May 2002 10:20:07 +0530",
"msg_from": "\"Rajesh Kumar Mallah.\" <mallah@trade-india.com>",
"msg_from_op": true,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ?"
},
{
"msg_contents": "On May 13, 2002 12:50 am, Rajesh Kumar Mallah. wrote:\n> Its advocated \"dont kill -9 the postmaster\" and i rarely do that.\n\nAdvocated or not, kill -9 is supposed to be the last resort. If nothing else \nworks then kill -9 should kill any Unix process. As Tom says, if it doesn't \nthen it suggests an OS (probably driver) problem.\n\nNow if only I could get IBM to understand that. They still claim that my \nproblem is that PostgreSQL (an \"unsupported\" application) is doing something \nto catch SIGKILL.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 13 May 2002 06:21:19 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": false,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ?"
},
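The kernel enforces D'Arcy's point itself: a process is not allowed to install a handler for SIGKILL (or SIGSTOP), so anything that survives kill -9 is wedged inside the kernel, not catching the signal in userland. A small POSIX sketch demonstrating the refusal (Python; the underlying sigaction() call fails with EINVAL, and note signal handlers can only be installed from the main thread):

```python
import signal

def can_install_handler(signum):
    """Return True if a userland handler can be installed for signum."""
    try:
        signal.signal(signum, lambda sig, frame: None)
        return True
    except (OSError, ValueError, RuntimeError):
        return False               # kernel (or runtime) refuses

catchable = can_install_handler(signal.SIGTERM)    # ordinary signal
uncatchable = can_install_handler(signal.SIGKILL)  # always refused
```

This is why "PostgreSQL is catching SIGKILL" is not a possible explanation: no application can, on any conforming Unix.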
{
"msg_contents": "\nHi,\n\nI vaguely remember the postmaster not responding to\nkill -9 even on Linux;\nI can confirm next time (god forbid) my postmaster\ngoes crazy. ;-)\n\nI feel lucky now that my postmaster has been up for more than\n24 hrs.\n\n(No offence intended; there have been instances of my postmaster\nrunning for as long as 3 months.)\n\nregds\nmallah.\n\nOn Monday 13 May 2002 03:51 pm, D'Arcy J.M. Cain wrote:\n> On May 13, 2002 12:50 am, Rajesh Kumar Mallah. wrote:\n> > Its advocated \"dont kill -9 the postmaster\" and i rarely do that.\n>\n> Advocated or not, kill -9 is supposed to be the last resort. If nothing\n> else works then kill -9 should kill any Unix process. As Tom says, if it\n> doesn't then it suggests an OS (probably driver) problem.\n>\n> Now if only I could get IBM to understand that. They still claim that my\n> problem is that PostgreSQL (an \"unsupported\" application) is doing\n> something to catch SIGKILL.\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n",
"msg_date": "Mon, 13 May 2002 17:34:12 +0530",
"msg_from": "\"Rajesh Kumar Mallah.\" <mallah@trade-india.com>",
"msg_from_op": true,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ?"
},
{
"msg_contents": "On Monday 13 May 2002 02:04 pm, you wrote:\n> Hi,\n>\n> I vaguely remember postmaster not responding to\n> kill -9 even on Linux,\n>\n> i can confirm next time when (god forbids) my postmaster\n> goes crazy. ;-)\n>\n> i feel lucky now that my postmaster is up for more that\n> (24 hrs)\n>\n> (no offence intended , there have been instances of my postmaster\n> running for as long as 3 months)\n>\n> regds\n> mallah.\n\nI've been reading this thread with interest, may I ask a few additional \nquestions ? \n\n- What version of postgresql are you running, compiled from tarball source or \nRPM version ? \n- Is there any good reason to still run this server on a RedHat 6.2 (non \nsupported platform from RedHat) ? \n- How large are your databases, and how much usage do you have ? \n- What kinds of API's do you use to interface ? \n- Is the application running locally or do you use IP connections remotely ?\n\nReason I am asking is that I still have never had postgresql go bad like \nthat.. I can always stop it properly, and to this date have had very few \nproblems with postgresql itself. I am interested in your problems because I'd \nlike to be aware of issues I can eventually run into.. (Better care before \nthan later ? :-)\n\nMy own configuration is like this : \n\n- Postgresql 7.1.3 with OpenFTS, both compiled from source tarballs\n- Debian Linux 2.2.x platform\n- PHP and Perl applications accessing postgresql from remote machines via IP\n- Not very large databases (100s of MBs) but very frequent read, and quite \nfrequent insert / update activity\n\nFor the record, server uptime now is equal to the last time I rebooted for a \nkernel recompilation, and during that time I have been forced to restart \npostgresql only because of hangups on the Application servers (apache w. php \n/ perl) .. \n\nRegards\n-- \nDenis Braekhus\n",
"msg_date": "Thu, 23 May 2002 10:18:17 +0200",
"msg_from": "Denis <denis@startsiden.no>",
"msg_from_op": false,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ?"
},
{
"msg_contents": "Hi Denis,\n\nThanks for your interest, and I like your\nidea of \"Better care before later.....\"\n\nI feel the best care you can take before it's too late is\nto monitor the server: what is happening, and when.\n\nBasic parameters like load average and iostat do reveal if\nanything is going fishy.\n\nIn my case I do have a heavily loaded webserver, but I do not\nfeel it was the load that brought the server to its knees.\n\nIt's more mismanagement on my part. I have not documented which\nprograms run and when, or how much load they add; maybe\nsome wicked script running a query that would never finish, etc.\n\nIt's not that my server crashed in an unexplained manner every time.\nE.g. at one time I had redirected the postmaster log to a file which\nran out of space!\n\nI feel if you are concerned about server health you should install\nsoftware like sysstat to monitor various system parameters at\nvarious times, plot charts etc., and analyze.\n\nMy postmaster is cool now, running for quite some time without getting wild.\n\nI have got sar installed on my system and nowadays am\nwriting a GD CGI application to closely monitor\nwhat's happening to the system and when.\n\nI have replied to your other questions pointwise below:\n\nOn Thursday 23 May 2002 01:48 pm, Denis wrote:\n\n> I've been reading this thread with interest, may I ask a few additional\n> questions ?\n>\n> - What version of postgresql are you running, compiled from tarball source\n> or RPM version ?\n\n> - Is there any good reason to still run this server on a RedHat 6.2 (non\n> supported platform from RedHat) ?\n\nNot many; it costs bucks to upgrade because my server and ISP are in the US\nand I do not have physical access, and am not too interested in giving my\nISP $$$.\n\n> - How large are your databases, and how much usage do you have ?\n\nNot very large; $PGDATA is between 1.5 GB and 2.0 GB.\n\n> - What kinds of API's do you use to interface ?\n> - Is the application running locally or do you use IP connections remotely\n> ?\n\nPerl DBI, remote IP connections but in the same network.\n\n> Reason I am asking is that I still have never had postgresql go bad like\n> that.. I can always stop it properly, and to this date have had very few\n> problems with postgresql itself. I am interested in your problems because\n> I'd like to be aware of issues I can eventually run into.. (Better care\n> before than later ? :-)\n\nEven I did not have problems for months together.\nAnd I feel there is/was a hard drive problem.\n\nIn my plots even now I see very high peaks at times\nand I have yet to investigate them.\n\n> My own configuration is like this :\n>\n> - Postgresql 7.1.3 with OpenFTS, both compiled from source tarballs\n> - Debian Linux 2.2.x platform\n\nI too used OpenFTS till recently but migrated to contrib/tsearch now.\nHey, upgrade to PG 7.2.1; it's *really* worth it. Read the release notes.\n\n> - PHP and Perl applications accessing postgresql from remote machines via\n> IP - Not very large databases (100s of MBs) but very frequent read, and\n> quite frequent insert / update activity\n>\n> For the record, server uptime now is equal to the last time I rebooted for\n> a kernel recompilation, and during that time I have been forced to restart\n> postgresql only because of hangups on the Application servers (apache w.\n> php / perl) ..\n>\n> Regards\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n",
"msg_date": "Thu, 23 May 2002 18:56:14 +0530",
"msg_from": "\"Rajesh Kumar Mallah.\" <mallah@trade-india.com>",
"msg_from_op": true,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization ?"
},
{
"msg_contents": "D'Arcy J.M. Cain wrote:\n> On May 13, 2002 12:50 am, Rajesh Kumar Mallah. wrote:\n> > Its advocated \"dont kill -9 the postmaster\" and i rarely do that.\n> \n> Advocated or not, kill -9 is supposed to be the last resort. If nothing else \n> works then kill -9 should kill any Unix process. As Tom says, if it doesn't \n> then it suggests an OS (probably driver) problem.\n> \n> Now if only I could get IBM to understand that. They still claim that my \n> problem is that PostgreSQL (an \"unsupported\" application) is doing something \n> to catch SIGKILL.\n\nFirst, an application can't catch SIGKILL. It never arrives to\napplications. It is supposed to pull the process with no warning.\n\nHowever, there are things processes can do to wedge themselves in a\nsystem call so they don't see the SIGKILL. Of course, as soon as they\nreturn from the system call, they die.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Jun 2002 12:33:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Further info : Very high load average but no cpu utilization"
},
{
"msg_contents": "On June 5, 2002 12:33 pm, Bruce Momjian wrote:\n> D'Arcy J.M. Cain wrote:\n> > On May 13, 2002 12:50 am, Rajesh Kumar Mallah. wrote:\n\nCatching up on an old mailbox, Bruce? :-)\n\n> > Now if only I could get IBM to understand that. They still claim that my\n> > problem is that PostgreSQL (an \"unsupported\" application) is doing\n> > something to catch SIGKILL.\n>\n> First, an application can't catch SIGKILL. It never arrives to\n> applications. It is supposed to pull the process with no warning.\n>\n> However, there are things processes can do to wedge themselves in a\n> system call so they don't see the SIGKILL. Of course, as soon as they\n> return from the system call, they die.\n\nExactly. What IBM was saying was that we were \"catching\" SIGKILL and I \ncould not convince the (supposedly technical) IBMers that they were talking \nout their ass.\n\nAnyway, I am pretty sure that PostgreSQL is not the culprit here. As it \nhappens this project is back on the table for me so it is interesting that \nyour email popped up now. I just compiled the latest version of PostgreSQL \non my AIX system and it generated lots of errors and then completed and \ninstalled fine. Makes me sort of nervous. We'll see how it goes. Anyone \nhave any horror/success stories about PostgreSQL on AIX for me?\n\nChanged subject and mailing list.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 6 Jun 2002 06:34:35 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on AIX"
},
{
"msg_contents": "I've been using PostgreSQL 7.2 on AIX 4.3.3 with no problems at all.\n\n-----Original Message-----\nFrom: pgsql-sql-owner@postgresql.org\n[mailto:pgsql-sql-owner@postgresql.org]On Behalf Of D'Arcy J.M. Cain\nSent: Thursday, June 06, 2002 6:35 AM\nTo: Bruce Momjian; pgsql-sql@postgresql.org\nCc: pgsql-sql@postgresql.org; pgsql-hackers@postgresql.org\nSubject: Re: [SQL] PostgreSQL on AIX\n\n\nOn June 5, 2002 12:33 pm, Bruce Momjian wrote:\n> D'Arcy J.M. Cain wrote:\n> > On May 13, 2002 12:50 am, Rajesh Kumar Mallah. wrote:\n\nCatching up on an old mailbox, Bruce? :-)\n\n> > Now if only I could get IBM to understand that. They still claim that\nmy\n> > problem is that PostgreSQL (an \"unsupported\" application) is doing\n> > something to catch SIGKILL.\n>\n> First, an application can't catch SIGKILL. It never arrives to\n> applications. It is supposed to pull the process with no warning.\n>\n> However, there are things processes can do to wedge themselves in a\n> system call so they don't see the SIGKILL. Of course, as soon as they\n> return from the system call, they die.\n\nExactly. What IBM was saying was that we were \"catching\" SIGKILL and\nI\ncould not convince the (supposedly technical) IBMers that they were\ntalking\nout their ass.\n\nAnyway, I am pretty sure that PostgreSQL is not the culprit here. As it\nhappens this project is back on the table for me so it is interesting that\nyour email popped up now. I just compiled the latest version of\nPostgreSQL\non my AIX system and it generated lots of errors and then completed and\ninstalled fine. Makes me sort of nervous. We'll see how it goes. Anyone\nhave any horror/success stories about PostgreSQL on AIX for me?\n\nChanged subject and mailing list.\n\n--\nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "Thu, 6 Jun 2002 08:47:49 -0400",
"msg_from": "\"Travis Hoyt\" <thoyt@npc.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on AIX"
},
{
"msg_contents": "D'Arcy J.M. Cain wrote:\n> On June 5, 2002 12:33 pm, Bruce Momjian wrote:\n> > D'Arcy J.M. Cain wrote:\n> > > On May 13, 2002 12:50 am, Rajesh Kumar Mallah. wrote:\n> \n> Catching up on an old mailbox, Bruce? :-)\n> \n> > > Now if only I could get IBM to understand that. They still claim that my\n> > > problem is that PostgreSQL (an \"unsupported\" application) is doing\n> > > something to catch SIGKILL.\n> >\n> > First, an application can't catch SIGKILL. It never arrives to\n> > applications. It is supposed to pull the process with no warning.\n> >\n> > However, there are things processes can do to wedge themselves in a\n> > system call so they don't see the SIGKILL. Of course, as soon as they\n> > return from the system call, they die.\n> \n> Exactly. What IBM was saying was was that we were \"catching\" SIGKILL and I \n> could not convince the (supposedly technical) IBMers that they were talking \n> out their ass.\n\nYes, they didn't know \"catching\" from \"ignoring because in\nuninterruptible system call\".\n\n> Anyway, I am pretty sure that PostgreSQL is not the culprit here. As it \n> happens this project is back on the table for me so it is interesting that \n> your email popped up now. I just compiled the latest version of PostgreSQL \n> on my AIX system and it generated lots of errors and then completed and \n> installed fine. Makes me sort of nervous. We'll see how it goes. Anyone \n> have any horror/success stories about PostgreSQL on AIX for me?\n\nWould you check those error/warnings and send us patches or a list of\nthem. Sometimes different compilers like AIX can show problems gcc\ndoesn't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Jun 2002 22:51:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on AIX"
},
{
"msg_contents": "\nAlso, Tatsuo uses AIX a lot and knows all the issues.\n\n---------------------------------------------------------------------------\n\nTravis Hoyt wrote:\n> I've been using PosgreSQL 7.2 on AIX 4.3.3 with no probelms at all.\n> \n> -----Original Message-----\n> From: pgsql-sql-owner@postgresql.org\n> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of D'Arcy J.M. Cain\n> Sent: Thursday, June 06, 2002 6:35 AM\n> To: Bruce Momjian; pgsql-sql@postgresql.org\n> Cc: pgsql-sql@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: Re: [SQL] PostgreSQL on AIX\n> \n> \n> On June 5, 2002 12:33 pm, Bruce Momjian wrote:\n> > D'Arcy J.M. Cain wrote:\n> > > On May 13, 2002 12:50 am, Rajesh Kumar Mallah. wrote:\n> \n> Catching up on an old mailbox, Bruce? :-)\n> \n> > > Now if only I could get IBM to understand that. They still claim that\n> my\n> > > problem is that PostgreSQL (an \"unsupported\" application) is doing\n> > > something to catch SIGKILL.\n> >\n> > First, an application can't catch SIGKILL. It never arrives to\n> > applications. It is supposed to pull the process with no warning.\n> >\n> > However, there are things processes can do to wedge themselves in a\n> > system call so they don't see the SIGKILL. Of course, as soon as they\n> > return from the system call, they die.\n> \n> Exactly. What IBM was saying was was that we were \"catching\" SIGKILL and\n> I\n> could not convince the (supposedly technical) IBMers that they were\n> talking\n> out their ass.\n> \n> Anyway, I am pretty sure that PostgreSQL is not the culprit here. As it\n> happens this project is back on the table for me so it is interesting that\n> your email popped up now. I just compiled the latest version of\n> PostgreSQL\n> on my AIX system and it generated lots of errors and then completed and\n> installed fine. Makes me sort of nervous. We'll see how it goes. 
Anyone\n> have any horror/success stories about PostgreSQL on AIX for me?\n> \n> Changed subject and mailing list.\n> \n> --\n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Jun 2002 22:51:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on AIX"
}
]
[
{
"msg_contents": "I just did a fresh build from current cvs and found the following \nregression from 7.2:\n\ncreate table test (cola bigint);\nupdate test set cola = 10000000000;\n\nIn 7.3 the update results in the following error:\n\nERROR: column \"cola\" is of type 'bigint' but expression is of type \n'double precision'\n\tYou will need to rewrite or cast the expression\n\nIn 7.2 the update worked. (updated 0 rows in this case)\n\nIt is interesting to note that if I use 'cola = 10000000000' in a where \nclause instead of as an assignment (i.e. select * from test where cola = \n10000000000) this works in both 7.3 and 7.2.\n\nthanks,\n--Barry\n\n",
"msg_date": "Fri, 10 May 2002 23:03:05 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "bug? in current cvs with bigint datatype"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> create table test (cola bigint);\n> update test set cola = 10000000000;\n> ERROR: column \"cola\" is of type 'bigint' but expression is of type \n> 'double precision'\n> \tYou will need to rewrite or cast the expression\n\ndtoi8 is currently marked \"not proimplicit\". People seem to have lost\ninterest in the discussion thread about which coercions should be allowed\nimplicitly, but the issues still need to be resolved before 7.3.\n\nThis particular example perhaps says that when assigning to a table\ncolumn, we should allow not-proimplicit coercions to be invoked\nimplicitly anyway. Since there isn't any question about either the\nsource type or the target type, allowing this case doesn't seem to pose\nany risk of surprising choices being made.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 May 2002 12:05:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug? in current cvs with bigint datatype "
}
]
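For anyone hitting the same error while the coercion rules were being settled, the usual workarounds sketch out as below. The table and column names are taken from Barry's example; the behavior described is the pre-7.3 parser's, where a bare integer literal too wide for int4 is typed as double precision.

```sql
-- The bare literal 10000000000 does not fit in int4, so the parser
-- types it as double precision; assigning that to a bigint column
-- then needs the dtoi8 coercion, which is no longer implicit.
-- Giving the literal an explicit bigint type avoids the coercion:
UPDATE test SET cola = CAST(10000000000 AS bigint);

-- Equivalently, with PostgreSQL's cast shorthand:
UPDATE test SET cola = 10000000000::bigint;

-- A quoted literal also works, since an unknown-type string
-- is coerced directly to the target column's type:
UPDATE test SET cola = '10000000000';
```

This is only the standard casting workaround, not a resolution of the implicit-coercion question Tom raises.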
[
{
"msg_contents": "For my own protection I'm adding checks to truncate so that if there\nis an ON DELETE trigger it will not execute the truncate command.\n\nAnyway, should it really only be 'Disallow TRUNCATE on tables that are\ninvolved in referential constraints'?\n\nI'm thinking it should check for an on delete rule as well as user\ntriggers.\n\n\n--\nRod\n\n",
"msg_date": "Sun, 12 May 2002 11:53:08 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "TRUNCATE"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> I'm thinking it should check for an on delete rule as well as user\n> triggers.\n\nSeems reasonable to me.\n\nShould there be a \"FORCE\" option to override these checks and do it\nanyway? Or is that just asking for trouble?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 May 2002 12:30:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": "The only time I can think of that a FORCE-type mechanism would be\nallowed is for internal functions. Perhaps a new cluster (copy\ndata, truncate table, copy data back sorted).\n\nInternal stuff can call heap_truncate() directly rather than going\nthrough TruncateRelation.\n\nA user-style force is simply to drop all rules, foreign keys,\ntriggers, etc. -- do the action -- re-apply constraints. Anything else\ncould mean their data isn't consistent.\n\n--\nRod\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Sunday, May 12, 2002 12:30 PM\nSubject: Re: [HACKERS] TRUNCATE\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > I'm thinking it should check for an on delete rule as well as user\n> > triggers.\n>\n> Seems reasonable to me.\n>\n> Should there be a \"FORCE\" option to override these checks and do it\n> anyway? Or is that just asking for trouble?\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Sun, 12 May 2002 12:40:09 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Sunday, May 12, 2002 12:30 PM\n> To: Rod Taylor\n> Cc: Hackers List\n> Subject: Re: [HACKERS] TRUNCATE\n>\n>\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > I'm thinking it should check for an on delete rule as well as user\n> > triggers.\n>\n> Seems reasonable to me.\n>\n> Should there be a \"FORCE\" option to override these checks and do it\n> anyway? Or is that just asking for trouble?\n\nI've relied on being able to TRUNCATE w/o having RI kick in for lots of data\nclean-ups, forced sorts, etc. I'd find it annoying if I couldn't do this\nanymore (or had to do equally-annoying things, like manually drop then\nrecreate the triggers, etc.)\n\nI'm happy w/o the FORCE option (just let TRUNCATE do it), but if enough\npeople think that the FORCE keyword should be added to allow overriding of\ntriggers, that could be a good compromise.\n\nBut, please, don't take away the ability to TRUNCATE. Doing it when there\nare triggers is one of the strengths of TRUNCATE, IMNSHO.\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Sun, 12 May 2002 15:48:35 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": " From my limited understanding, truncate in Oracle requires the\nuser to first disable integrity constraints on the table before\ntruncate will run.\n\nIn SQL Server, truncate is not allowed if foreign key\nconstraints exist, but user delete triggers are not executed.\n\nCan't remember nor confirm either of these now. But, for consistency's\nsake we should enforce the foreign key case. But I really think it\nshould apply to all constraints, system or user enforced (rules, user\nwritten triggers).\n\nBesides that, there's always Codd's twelfth rule, which I've always\nliked:\nThe nonsubversion rule: If low-level access is permitted it should not\nbypass security or integrity rules.\n\n--\nRod\n----- Original Message -----\nFrom: \"Joel Burton\" <joel@joelburton.com>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>; \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Sunday, May 12, 2002 3:48 PM\nSubject: RE: [HACKERS] TRUNCATE\n\n\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> > Sent: Sunday, May 12, 2002 12:30 PM\n> > To: Rod Taylor\n> > Cc: Hackers List\n> > Subject: Re: [HACKERS] TRUNCATE\n> >\n> >\n> > \"Rod Taylor\" <rbt@zort.ca> writes:\n> > > I'm thinking it should check for an on delete rule as well as\nuser\n> > > triggers.\n> >\n> > Seems reasonable to me.\n> >\n> > Should there be a \"FORCE\" option to override these checks and do\nit\n> > anyway? Or is that just asking for trouble?\n>\n> I've relied on being able to TRUNCATE w/o having RI kick in to lots\nof data\n> clean ups, forced sorts, etc.
I'd find it annoying if I couldn't do\nthis\n> anymore (or had to do equally-annoying things, like manually drop\nthen\n> recreate the triggers, etc.)\n>\n> I'm happy w/o the FORCE option (just let TRUNCATE do it), but if\nenough\n> people think that the FORCE keyword should be added to allow\noverriding of\n> triggers, that could be a good compromise.\n>\n> But, please, don't take away the ability to TRUNCATE. Doing it when\nthere\n> are triggers is one the strengths of TRUNCATE, IMNSHO.\n>\n> - J.\n>\n> Joel BURTON | joel@joelburton.com | joelburton.com | aim:\nwjoelburton\n> Knowledge Management & Technology Consultant\n>\n\n",
"msg_date": "Sun, 12 May 2002 17:35:55 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": "> I'm happy w/o the FORCE option (just let TRUNCATE do it), but if enough\n> people think that the FORCE keyword should be added to allow overriding of\n> triggers, that could be a good compromise.\n>\n> But, please, don't take away the ability to TRUNCATE. Doing it when there\n> are triggers is one the strengths of TRUNCATE, IMNSHO.\n\nIt seems to me that there's more and more need for an 'SET CONSTRAINTS\nDISABLED' and 'SET CONSTRAINTS ENABLED' command that affects only foreign\nkeys. This would basically make it ignore foreign key checks for the\nremainder of the transaction. This could be used before a TRUNCATE command,\nand would also be essential when we switch to dumping ALTER TABLE/FOREIGN\nKEY commands in pg_dump, and we don't want them to be checked...\n\nChris\n\n",
"msg_date": "Mon, 13 May 2002 10:17:07 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": "> >From my limited understanding of truncate in Oracle is it requires the\n> user to first disable integrity constraints on the table before\n> truncate will run.\n>\n> In SQL Server that truncate will not allow truncate if foreign key\n> constraints exist, but does not execute user delete triggers.\n>\n> Can't remember nor confirm either of these now. But, for consistency\n> sake we should enforce the foreign key case. But I really think it\n> should apply to all constraints, system or user enforced (rules, user\n> written triggers).\n>\n> Besides that, theres always Codds twelfth rule which I've always\n> liked:\n> The nonsubversion rule: If low-level access is permitted it should not\n> bypass security or integrity rules.\n\nDare I go against Codd, but, really, I've found it very convenient to be\nable to export a single table, TRUNCATE it, clean up the data in another\nprogram, and pull it back in. It's much more of a pain to have to dump the\nwhole db (necessary, or at least sanity-preserving, if there are lots of\ncomplicated foreign key or trigger rules) or to drop/recreate the\ntriggers/rules.\n\nThe security issue is important, though: it's very likely that I might want\nto let a certain class of user DELETE a record (with all the usual\nrules/triggers/RI applying), but not let them bypass all that to TRUNCATE.\n\nBut I still wouldn't want to see hassle-free truncation disappear in the\nname of security or idiot-proofing, if there are reasonable compromises.\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Mon, 13 May 2002 00:12:15 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": "> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au]\n> Sent: Sunday, May 12, 2002 10:17 PM\n> To: Joel Burton; Tom Lane; Rod Taylor\n> Cc: Hackers List\n> Subject: RE: [HACKERS] TRUNCATE\n>\n>\n> > I'm happy w/o the FORCE option (just let TRUNCATE do it), but if enough\n> > people think that the FORCE keyword should be added to allow\n> overriding of\n> > triggers, that could be a good compromise.\n> >\n> > But, please, don't take away the ability to TRUNCATE. Doing it\n> when there\n> > are triggers is one the strengths of TRUNCATE, IMNSHO.\n>\n> It seems to me that there's more and more need for an 'SET CONSTRAINTS\n> DISABLED' and 'SET CONSTRAINTS ENABLED' command that affects only foreign\n> keys. This would basically make it ignore foreign key checks for the\n> remainder of the transaction. This could be used before a\n> TRUNCATE command,\n> and would also be essential when we switch to dumping ALTER TABLE/FOREIGN\n> KEY commands in pg_dump, and we don't want them to be checked...\n\nThis would be different than SET CONSTRAINTS DEFERRED, in that DISABLED\nwould never perform the checks, even at the end of the transaction?\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Mon, 13 May 2002 00:14:54 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": "On Mon, May 13, 2002 at 10:17:07AM +0800, Christopher Kings-Lynne wrote:\n> > I'm happy w/o the FORCE option (just let TRUNCATE do it), but if enough\n> > people think that the FORCE keyword should be added to allow overriding of\n> > triggers, that could be a good compromise.\n> >\n> > But, please, don't take away the ability to TRUNCATE. Doing it when there\n> > are triggers is one the strengths of TRUNCATE, IMNSHO.\n> \n> It seems to me that there's more and more need for an 'SET CONSTRAINTS\n> DISABLED' and 'SET CONSTRAINTS ENABLED' command that affects only foreign\n> keys.\n\nI really dislike the idea of referring to \"constraints\" but only affecting\nforeign key constraints.\n\nAnd what would be the security/data-integrity ramifications of allowing\nthis?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Mon, 13 May 2002 00:24:22 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE"
},
{
"msg_contents": "> > It seems to me that there's more and more need for an 'SET CONSTRAINTS\n> > DISABLED' and 'SET CONSTRAINTS ENABLED' command that affects\n> only foreign\n> > keys.\n>\n> I really dislike the idea of referring to \"constraints\" but only affecting\n> foreign key constraints.\n\nAll the other SET CONSTRAINTS statements refer only to foreign keys...\n\n> And what would be the security/data-integrity ramifications of allowing\n> this?\n\nWell, if only super users could do it...\n\nChris\n\n",
"msg_date": "Mon, 13 May 2002 13:14:46 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE"
},
{
"msg_contents": "I still highly recommend that it be a drop foreign key, grab data,\ntruncate, import data, reapply foreign key (which will double check\nyour work), as I believe data and schema integrity should be high goals\nof Postgresql (mine, anyway).\n\nHowever, I'd like to know what you're doing, i.e. why this method is\nthe fastest and easiest way.\n\nGiven a dataset, how much (percentage-wise) do you generally modify when you\nclean it up? And what is the general dataset size (half a million\nrecords?).\n\nI'm making the assumption you almost never delete data (primary-key\nwise), otherwise foreign-keyed data may no longer align. I'm also\nmaking the assumption you're either the sole user of the database, or\nhave a long period where the database is not in use (overnight?).\n\nWhat do you use to clean it up? Custom script for each job? Regular\nexpressions? Simple spreadsheet-like format, filling in numbers?\nComplete dump and replace of the data?\n\n\nLastly, would a data diff make it easier? Compare the data between\nthe table (based on the primary key) and your working copy, then update\nold records as necessary to bring them up to date and insert new\nrecords?\n\n--\nRod\n----- Original Message -----\nFrom: \"Joel Burton\" <joel@joelburton.com>\nTo: \"Rod Taylor\" <rbt@zort.ca>; \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Monday, May 13, 2002 12:12 AM\nSubject: Re: [HACKERS] TRUNCATE\n\n\n> > >From my limited understanding of truncate in Oracle is it\nrequires the\n> > user to first disable integrity constraints on the table before\n> > truncate will run.\n> >\n> > In SQL Server that truncate will not allow truncate if foreign key\n> > constraints exist, but does not execute user delete triggers.\n> >\n> > Can't remember nor confirm either of these now. But, for\nconsistency\n> > sake we should enforce the foreign key case.
But I really think\nit\n> > should apply to all constraints, system or user enforced (rules,\nuser\n> > written triggers).\n> >\n> > Besides that, theres always Codds twelfth rule which I've always\n> > liked:\n> > The nonsubversion rule: If low-level access is permitted it should\nnot\n> > bypass security or integrity rules.\n>\n> Dare I go against Codd, but, really, I've found it very convenient\nto be\n> able to export a single table, TRUNCATE it, clean up the data in\nanother\n> program, and pull it back in. It's much more of a pain to have to\ndump the\n> whole db (neccessary or at least sanity preserving if there are lots\nof\n> complicated foreign key or trigger rules) or to drop/recreate the\n> triggers/rules.\n>\n> The security issue is important, though: it's very likely that I\nmight want\n> to let an certain class of user DELETE a record (with all the usual\n> rules/triggers/RI applying), but not let them bypass all that to\nTRUNCATE.\n>\n> But I still wouldn't want to see hassle-free truncation disappear in\nthe\n> name of security or idiot-proofing, if there are reasonable\ncompromises.\n>\n> - J.\n>\n> Joel BURTON | joel@joelburton.com | joelburton.com | aim:\nwjoelburton\n> Knowledge Management & Technology Consultant\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Mon, 13 May 2002 08:18:58 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": "> I still highly recommend that it be a drop foreign key, grab data,\n> truncate, import data, reapply foreign key (which will double check\n> your work) as I believe data and schema integrity should be high goals\n> of Postgresql (myself anyway).\n\nI agree that they should be high goals.\n\n> However, I'd like to know what your doing. ie. Why is this method\n> the fastest and easiest way.\n\nIt's easier than dropping and recreating rules because that takes a bit of\ntrouble. (If there were any easy way in pg_dump or in psql directly to get\nthe text of just the rules/triggers/RI declarations for a table, that would\nmake it a bit easier than pulling that out of the other table stuff in\npg_dump output).\n\nIt's easier than a full-database dump/fix/restore because sometimes\n(hopefully now historically :) ) pg_dump wasn't a perfect tool: for a while,\nit would drop RI statements, or occasionally have a hard time recreating a\nview, etc. Plus, of course, with a large database, it can take quite a while\nto process.\n\nA limited-to-that-table dump/fix/restore can be a problem because of the\ninterrelationships of RI among tables. If there were any easier way to dump\ninformation about a table so that I could restore the RI that other tables\nhave on it, that might be a solution.\n\n> Given a dataset, how much (%age wise) do you generally modify when you\n> clean it up? And what is the general dataset size (half million\n> records?).\n\nMore often than not, I'm working with complex tables and fairly small # of\nrows. Perhaps 30 fields x 10,000 records.\n\n> I'm making the assumption you almost never delete data (primary key\n> wise), otherwise foreign keyd data may no longer align. I'm also\n> making the assumption your either the sole user of the database, or\n> have a long period where the database is not in use (overnight?).\n\nNo, I wouldn't delete things.
I don't want to bypass RI, just not have to\ndeal with removing/creating all the rules every time I need to clean up some\ndata.\n\nIn most cases, yes, I can either take db offline for an hour or ensure that\nthere will be no writes to the db.\n\n> What do you use to clean it up? Custom script for each job? Regular\n> expressions? Simple spreadsheet like format filling in numbers?\n> Complete dump and replace of the data?\n\nGenerally, I'm doing something like pulling the data into a text file and\nusing regexes or spreadsheet tools to clean it up. Some of which could be\ndone (through plperl or plpython or such), but is often easier with full\ntext manipulation/emacs/etc.\n\nSometimes, though, I'm just cleaning out test data. For example: often, I'll\ncreate a table where records can't be deleted w/out logging information\ngoing into another table (via rule or trigger, and I usually prohibit\ndeletions at all from the log table). I'll put some fake records in, delete\na few, see the logging data, and later, when I want to delete the fake data\n(& the fake logging data), I'll use TRUNCATE. I could only do this w/a\nnormal DELETE by dropping these rules/triggers, deleting, and re-creating.\nWhich is more of a pain than I'd like to do.\n\nGiven that only the owner of a table can truncate it, I'm not too worried\nabout the security of truncate: the owner is the person who would understand\nthe ramifications of truncate vs. delete. Having it either emit a warning\nthat there were triggers/rules/RI or (better) requiring a FORCE parameter to\ntruncate when there are might make others feel safe, though.\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Mon, 13 May 2002 14:43:09 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE "
},
{
"msg_contents": "> A limited-to-that-table dump/fix/restore can be a problem because of\nthe\n> interrelationships of RI among tables. If there were any easier way\nto dump\n> information about a table so that I could restore the RI that other\ntables\n> have on it, that might be a solution.\n\nAgreed about making that easier.\n\n> > What do you use to clean it up? Custom script for each job?\nRegular\n> > expressions? Simple spreadsheet like format filling in numbers?\n> > Complete dump and replace of the data?\n>\n> Generally, I'm doing something like pulling the data into a text\nfile and\n> using regexes or spreadsheet tools to clean it up. Some of which\ncould be\n> done (through plperl or plpython or such), but is often easier with\nfull\n> text manipulation/emacs/etc.\n\nInternal regex support would be useful, as would plpgsql from anywhere\n(merge most into standard frontend parser).\n\n> Sometimes, though, I'm just cleaning out test data. For example:\noften, I'll\n> create a table where records can't be deleted w/out logging\ninformation\n\nYou don't create database testdb with template = productiondb?\nEspecially since you take it offline anyway.\n\n> that there were triggers/rules/RI or (better) requiring a FORCE\nparameter to\n> truncate when there are might make others feel safe, though.\n\nFORCE doesn't really solve the issue for me. I want to remove the\nability to unexpectedly mess up the database. They're usually good\nenough to know that drop database is a bad thing. But some of the\nother commands have interesting seemingly non-related failures.\nTruncate was one, object inter-dependence (what pg_depend covers) was\nanother area.\n\nAnyway, I'm willing to wait until I (or someone else) can remove the\nadvantages of truncate over other methods :)\n\n",
"msg_date": "Mon, 13 May 2002 20:31:31 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE "
}
] |
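The cleanup workflow Joel describes — drop the foreign key, truncate, reload the cleaned data, and re-add the constraint so the reload is re-validated — can be sketched roughly as follows. All table, column, constraint, and file names here are hypothetical, and the `ALTER TABLE ... DROP CONSTRAINT` syntax assumes a release that supports it:

```sql
-- Hypothetical schema: "orders" references "customers" by customer_id.

-- 1. Drop the foreign key so the truncate/reload is unconstrained.
ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;

-- 2. Wipe the table and reload the externally cleaned data.
TRUNCATE TABLE orders;
COPY orders FROM '/tmp/orders.cleaned';

-- 3. Re-add the constraint; adding it re-checks every existing row,
--    which double-checks the cleanup as a side effect.
ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id);
```

As Rod notes, step 3 is what makes this safer than simply bypassing RI: a cleanup that broke referential integrity fails at the `ADD CONSTRAINT` rather than silently corrupting the data.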
[
{
"msg_contents": "During some testing of pgAdmin's internals whilst adding schema support\nI noticed that altering or setting a comment on an operator actually\nsets the comment on the operator function.\n\nIn other words, change the comment on testschema.+(int4, int4) and the\ncomment is actually set on the function pg_catalog.int4pl(int4, int4).\n\nIs this behaviour correct? I would have expected the pg_description\nentry for the comment to reference the oid of the operator itself, so\neach operator and int4pl(int4, int4) can all have distinct comments.\n\nRegards Dave.\n",
"msg_date": "Sun, 12 May 2002 22:03:07 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Operator Comments"
},
{
"msg_contents": "Indeed...\n\nComment on operator adds the comment to the procedures, and drop\noperator removes comments from pg_operator, leaving left over entries\nin pg_description.\n\nLooks like CommentOperator goes to quite a bit of work (5 lines) to\naccomplish fetching the procedure and states specifically it's not a\nbug. In which case RemoveOperator needs to drop comments by the\nprocID as well.\n--\nRod\n----- Original Message -----\nFrom: \"Dave Page\" <dpage@vale-housing.co.uk>\nTo: <pgsql-hackers@postgresql.org>\nSent: Sunday, May 12, 2002 5:03 PM\nSubject: [HACKERS] Operator Comments\n\n\n> During some testing of pgAdmin's internals whilst adding schema\nsupport\n> I noticed that altering or setting a comment on an operator actually\n> sets the comment on the operator function.\n>\n> In other words, change the comment on testschema.+(int4, int4) and\nthe\n> comment is actually set on the function pg_catalog.int4pl(int4,\nint4).\n>\n> Is this behaviour correct? I would have expected the pg_description\n> entry for the comment to reference the oid of the operator itself,\nso\n> each operator and int4pl(int4, int4) can all have distinct comments.\n>\n> Regards Dave.\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Sun, 12 May 2002 18:50:09 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Operator Comments"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Looks like CommentOperator goes to quite a bit of work (5 lines) to\n> accomplish fetching the procedure and states specifically it's not a\n> bug.\n\nYeah, someone once thought it was a good idea, but I was wondering about\nthe wisdom of it just the other day. Currently this \"feature\" presents\na hole in the security of comments on functions: anyone can make an\noperator referencing a function, and then they'll be allowed to set the\nfunction's comment :-(.\n\nI can see the value in having the function comment shown when there is\nno comment specifically for the operator ... but perhaps that ought to\nbe implemented in the client requesters, rather than wired into the\ncatalog representation.\n\n> In which case RemoveOperator needs to drop comments by the\n> procID as well.\n\nNo, because the comment really belongs to the function and should go\naway only when the function does. But I'd vote for giving operators\ntheir own comments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 May 2002 19:25:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Operator Comments "
},
{
"msg_contents": "> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Looks like CommentOperator goes to quite a bit of work (5 lines)\nto\n> > accomplish fetching the procedure and states specifically it's not\na\n> > bug.\n>\n> I can see the value in having the function comment shown when there\nis\n> no comment specifically for the operator ... but perhaps that ought\nto\n> be implemented in the client requesters, rather than wired into the\n> catalog representation.\n\nAgreed. If no-one objects, I'll submit a patch which makes comment on\noperator actually comment on the operator.\n\nIt'll also coalesce(operator comment, function comment) in psql.\n\n",
"msg_date": "Sun, 12 May 2002 21:53:53 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Operator Comments "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Looks like CommentOperator goes to quite a bit of work (5 lines) to\n> > accomplish fetching the procedure and states specifically it's not a\n> > bug.\n> \n> Yeah, someone once thought it was a good idea, but I was wondering about\n> the wisdom of it just the other day. Currently this \"feature\" presents\n> a hole in the security of comments on functions: anyone can make an\n> operator referencing a function, and then they'll be allowed to set the\n> function's comment :-(.\n> \n> I can see the value in having the function comment shown when there is\n> no comment specifically for the operator ... but perhaps that ought to\n> be implemented in the client requesters, rather than wired into the\n> catalog representation.\n> \n> > In which case RemoveOperator needs to drop comments by the\n> > procID as well.\n> \n> No, because the comment really belongs to the function and should go\n> away only when the function does. But I'd vote for giving operators\n> their own comments.\n\nHere's the history, FWIW:\n\nI implemented COMMENT ON for just TABLES and COLUMNS, like Oracle.\n\nBruce requested it for all objects\n\nI extended for all objects - including databases (my bad) ;-)\n\nPeter E. was rewriting psql and wanted the COMMENT on operators to\nreflect a COMMENT on the underlying function\n\nI submitted a patch to do that - I just do what I'm told ;-)\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Mon, 13 May 2002 07:42:43 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Operator Comments"
},
{
"msg_contents": "Mike Mascari wrote:\n> Here's the history, FWIW:\n> \n> I implemented COMMENT ON for just TABLES and COLUMNS, like Oracle.\n> \n> Bruce requested it for all objects\n> \n> I extended for all objects - including databases (my bad) ;-)\n> \n> Peter E. was rewriting psql and wanted the COMMENT on operators to\n> reflect a COMMENT on the underlying function\n> \n> I submitted a patch to do that - I just do what I'm told ;-)\n\nActually, the use of function comments for operators goes back to when I\nadded comments to system tables in include/catalog. I wanted to avoid\nduplication of comments so I placed them only on the functions and let\nthe operators display the function comments. Were there cases where we\ndon't want the function comments for certain operators? I never\nanticipated that.\n\nAnyway, I looked at the new psql code and it works fine, tries\npg_operator description first, then pg_proc if missing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Jun 2002 16:00:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Operator Comments"
}
] |
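Rod's proposed psql behavior — prefer a comment attached to the operator itself, falling back to the underlying function's comment — amounts to a lookup along these lines. This is only a sketch against the pg_operator/pg_description catalogs; the exact join conditions psql ends up using may differ:

```sql
-- Show each '+' operator's comment, falling back to the comment on
-- the operator's implementation function (oprcode) when the operator
-- itself has none.
SELECT o.oprname,
       COALESCE(opd.description, fnd.description) AS comment
  FROM pg_operator o
       LEFT JOIN pg_description opd ON opd.objoid = o.oid
       LEFT JOIN pg_description fnd ON fnd.objoid = o.oprcode
 WHERE o.oprname = '+';
```

This keeps Bruce's original goal (no duplicated comments for the built-in operators) while closing the security hole Tom points out, since COMMENT ON OPERATOR would now write to the operator's own pg_description entry.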
[
{
"msg_contents": "\nHello,\n\nPer the suggestion in the README.rmp-dist document with the 7.2.1 rpm I\nwould like to officially vote for an easy upgrade. I personally think it's\nridiculous that there's no way to go directly from 7.1.3 to 7.2.1.\n\nOh well :) Other than that and a few other minor issues which don't even\nwarrant mentioning at the moment, you guys do good work!\n\nDoug Hughes\n\n",
"msg_date": "Sun, 12 May 2002 18:03:49 -0400",
"msg_from": "\"Doug Hughes\" <dhughes@alagad.com>",
"msg_from_op": true,
"msg_subject": "Easy upgrade"
}
] |
[
{
"msg_contents": "Hi,\n\n\nI've got performance problem and while I'dont ready to describe it\nI'd like to ask about strange explain:\n\ntour=# explain analyze select * from tours where\n ( operator_id in (2,3,4,5,7) and type_id = 2 ) or\n ( operator_id = 8 and type_id=4 );\n\nNOTICE: QUERY PLAN:\n\nIndex Scan using type_idx, type_idx, type_idx, type_idx, type_idx, type_idx on tours (cost=0.00..12.25 rows=1 width=1091) (actual time=0.26..0.26 rows=0 loops=1)\nTotal runtime: 0.45 msec\n\nEXPLAIN\n\nWhat does many 'type_idx' means ?\n\nTheare are 2 indices - operator_idx and type_idx.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 13 May 2002 16:42:10 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "strange explain"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> tour=# explain analyze select * from tours where\n> ( operator_id in (2,3,4,5,7) and type_id = 2 ) or\n> ( operator_id = 8 and type_id=4 );\n\n> Index Scan using type_idx, type_idx, type_idx, type_idx, type_idx, type_idx on tours (cost=0.00..12.25 rows=1 width=1091) (actual time=0.26..0.26 rows=0 loops=1)\n\n> What does many 'type_idx' means ?\n\nMultiple indexscans.\n\nIt looks to me like your WHERE clause is being flattened into\n\n ( operator_id = 2 and type_id=2 ) or\n ( operator_id = 3 and type_id=2 ) or\n ( operator_id = 4 and type_id=2 ) or\n ( operator_id = 5 and type_id=2 ) or\n ( operator_id = 7 and type_id=2 ) or\n ( operator_id = 8 and type_id=4 )\n\nand then it has a choice of repeated indexscans on operator_id or\ntype_id. Depending on the selectivity stats it might pick either.\nYou might find that a 2-column index on both would be a win.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 10:04:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strange explain "
},
{
"msg_contents": "It appears it scanes the type_idx once per opereator. IN gets broken\ndown into ORs\n\nIs this what the TODO entry 'Make IN / NOT IN have similar performance\nas EXISTS' means?\n--\nRod\n----- Original Message -----\nFrom: \"Oleg Bartunov\" <oleg@sai.msu.su>\nTo: \"Pgsql Hackers\" <pgsql-hackers@postgresql.org>; \"Tom Lane\"\n<tgl@sss.pgh.pa.us>\nSent: Monday, May 13, 2002 9:42 AM\nSubject: [HACKERS] strange explain\n\n\n> Hi,\n>\n>\n> I've got performance problem and while I'dont ready to describe it\n> I'd like to ask about strange explain:\n>\n> tour=# explain analyze select * from tours where\n> ( operator_id in (2,3,4,5,7) and type_id = 2 ) or\n> ( operator_id = 8 and type_id=4 );\n>\n> NOTICE: QUERY PLAN:\n>\n> Index Scan using type_idx, type_idx, type_idx, type_idx, type_idx,\ntype_idx on tours (cost=0.00..12.25 rows=1 width=1091) (actual\ntime=0.26..0.26 rows=0 loops=1)\n> Total runtime: 0.45 msec\n>\n> EXPLAIN\n>\n> What does many 'type_idx' means ?\n>\n> Theare are 2 indices - operator_idx and type_idx.\n>\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n>\n\n",
"msg_date": "Mon, 13 May 2002 11:17:27 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: strange explain"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Is this what the TODO entry 'Make IN / NOT IN have similar performance\n> as EXISTS' means?\n\nNo. The TODO item is talking about IN with a sub-SELECT, which is not\noptimized at all at the moment. IN with a list of scalar values is\nconverted to ((x = value1) OR (x = value2) OR ...), which we can do\nsomething with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 11:24:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strange explain "
},
{
"msg_contents": "Thanks Tom,\n\n\nOn Mon, 13 May 2002, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > tour=# explain analyze select * from tours where\n> > ( operator_id in (2,3,4,5,7) and type_id = 2 ) or\n> > ( operator_id = 8 and type_id=4 );\n>\n> > Index Scan using type_idx, type_idx, type_idx, type_idx, type_idx, type_idx on tours (cost=0.00..12.25 rows=1 width=1091) (actual time=0.26..0.26 rows=0 loops=1)\n>\n> > What does many 'type_idx' means ?\n>\n> Multiple indexscans.\n>\n> It looks to me like your WHERE clause is being flattened into\n>\n> ( operator_id = 2 and type_id=2 ) or\n> ( operator_id = 3 and type_id=2 ) or\n> ( operator_id = 4 and type_id=2 ) or\n> ( operator_id = 5 and type_id=2 ) or\n> ( operator_id = 7 and type_id=2 ) or\n> ( operator_id = 8 and type_id=4 )\n>\n\nthis is what I assume.\n\n> and then it has a choice of repeated indexscans on operator_id or\n> type_id. Depending on the selectivity stats it might pick either.\n> You might find that a 2-column index on both would be a win.\n>\n\nYes, we've went exactly this way.\n\nI'm very exited how planner could be smart. When I played with the query\nand specify different values of type_id I notice it's chose plans depends\non is value exists or not.\n\n\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 13 May 2002 19:08:36 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: strange explain "
},
{
"msg_contents": "Tom,\n\none more question.\n\nWhat's the difference for planner between 2 queries ?\nFor the first query I have plain index scan, but multiple\nindex scan for second.\n\ntour=# explain analyze select * from tours where\n ( operator_id in (2,3,4,5,7) and type_id = 2 );\nNOTICE: QUERY PLAN:\n\nIndex Scan using type_idx on tours (cost=0.00..2.03 rows=1 width=1091) (actual time=0.03..0.03 rows=0 loops=1)\nTotal runtime: 0.16 msec\n\nEXPLAIN\ntour=# explain analyze select * from tours where\n ( operator_id in (2,3,4,5,7) and type_id = 4 ) or\n ( operator_id = 8 and type_id = 3);\nNOTICE: QUERY PLAN:\n\nIndex Scan using type_idx, type_idx, type_idx, type_idx, type_idx, type_idx on tours (cost=0.00..12.25 rows=1 width=1091) (actual time=0.27..0.27 rows=0 loops=1)\nTotal runtime: 0.44 msec\n\nEXPLAIN\n\n\n\n\nOn Mon, 13 May 2002, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > tour=# explain analyze select * from tours where\n> > ( operator_id in (2,3,4,5,7) and type_id = 2 ) or\n> > ( operator_id = 8 and type_id=4 );\n>\n> > Index Scan using type_idx, type_idx, type_idx, type_idx, type_idx, type_idx on tours (cost=0.00..12.25 rows=1 width=1091) (actual time=0.26..0.26 rows=0 loops=1)\n>\n> > What does many 'type_idx' means ?\n>\n> Multiple indexscans.\n>\n> It looks to me like your WHERE clause is being flattened into\n>\n> ( operator_id = 2 and type_id=2 ) or\n> ( operator_id = 3 and type_id=2 ) or\n> ( operator_id = 4 and type_id=2 ) or\n> ( operator_id = 5 and type_id=2 ) or\n> ( operator_id = 7 and type_id=2 ) or\n> ( operator_id = 8 and type_id=4 )\n>\n> and then it has a choice of repeated indexscans on operator_id or\n> type_id. 
Depending on the selectivity stats it might pick either.\n> You might find that a 2-column index on both would be a win.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 13 May 2002 19:30:17 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: strange explain "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> What's the difference for planner between 2 queries ?\n\n> tour=# explain analyze select * from tours where\n> ( operator_id in (2,3,4,5,7) and type_id = 2 );\n\n> tour=# explain analyze select * from tours where\n> ( operator_id in (2,3,4,5,7) and type_id = 4 ) or\n> ( operator_id = 8 and type_id = 3);\n\nThe first one's already in normal form and doesn't need any more\nflattening. I believe the system will consider a multiple indexscan\non operator_idx for it, but probably the cost estimator is concluding\nthat that's a loser compared to one indexscan using type_id = 2.\nWithout any info on the selectivity of these conditions it's hard to say\nwhether that's a correct choice or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 13:10:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strange explain "
}
] |
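Tom's suggestion of a two-column index — which Oleg reports they adopted — would look something like the following. Index and table names here follow the thread's examples but are otherwise hypothetical:

```sql
-- One composite index covering both conditions lets each flattened
-- OR-arm, e.g. (operator_id = 2 AND type_id = 2), be answered by a
-- single indexscan instead of scanning type_idx once per arm.
CREATE INDEX operator_type_idx ON tours (operator_id, type_id);

EXPLAIN ANALYZE
SELECT * FROM tours
 WHERE (operator_id IN (2,3,4,5,7) AND type_id = 2)
    OR (operator_id = 8 AND type_id = 4);
```

Note that the six `type_idx` entries in the original plan correspond exactly to the six arms of the flattened clause Tom lists: five from the IN-list members combined with `type_id = 2`, plus one for `operator_id = 8 AND type_id = 4`.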
[
{
"msg_contents": "I'm working on cleaning up loose ends in pg_dump, and in particular\ntrying to ensure that objects in user schemas can be named the same\nas system objects without conflicts. Most of this works now, thanks\nto Peter's idea about explicitly setting the search path to include\njust the current target schema. But there is a problem with pg_dump's\noption to issue explicit DROP commands. Right now, with that option\npg_dump will produce output like\n\n\tset search_path = my_schema;\n\n\tdrop table my_table;\n\n\tcreate table my_table (...);\n\nThis works fine unless the object name duplicates a system object;\nin that case, since the effective search path is really \"pg_catalog,\nmy_schema\", the DROP will find and try to drop the system object.\n\nI can think of two workable solutions to this:\n\n1. Explicitly qualify target-object names in the DROP commands,\nie, we'd emit\n\n\tset search_path = my_schema;\n\n\tdrop table my_schema.my_table;\n\n\tcreate table my_table (...);\n\n2. Modify the backend so that DROP has a different behavior from\nother commands: it only searches the explicitly named search path\nelements (and the TEMP table schema, if any). If pg_catalog is\nbeing searched implicitly then DROP does not look there.\n\nChoice #1 is logically cleaner but would clutter the dump script with\nmany more explicit schema references than I'd like to have. Choice #2\nis awfully ugly at first glance but might prove a good idea in the long\nrun. It'd certainly reduce the odds of mistakenly dropping a predefined\nobject.\n\nNot sure which way to go. Comments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 14:58:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pg_dump DROP commands and implicit search paths"
},
{
"msg_contents": "> set search_path = my_schema;\n\n> This works fine unless the object name duplicates a system object;\n> in that case, since the effective search path is really \"pg_catalog,\n> my_schema\", the DROP will find and try to drop the system object.\n\nI must have missed that day. Why is that exactly? Clients like psql\nshould probably explicitly specify pg_catalog path anyway.\n\nAfterall, if you create a my_schema.pg_class table (for whatever\nreason), and used my search path as my_schema, I'd expect my own to be\nhit with my queries. Likewise, postgresql internals should specifiy\nthe specific namespace -- which they generally do through knowledge of\nthe pg_class oid.\n\nIs this a temporary thing to tide clients over for a release without\nbreaking too much?\n\n> 1. Explicitly qualify target-object names in the DROP commands,\n> ie, we'd emit\n\nAnyway, question at hand. How about a modification of #1. If the\ntable begins in 'pg_' explicitly name it my_schema.pg_????. If users\nare creating stuff in pg_catalog they should be ready for weird\nthings -- especially given the 'overriding' state it takes in the\nsearch path.\n\n",
"msg_date": "Mon, 13 May 2002 19:42:27 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths"
},
{
"msg_contents": "On Mon, May 13, 2002 at 02:58:08PM -0400, Tom Lane wrote:\n> 1. Explicitly qualify target-object names in the DROP commands,\n\n> 2. Modify the backend so that DROP has a different behavior from\n> other commands\n\n> Choice #1 is logically cleaner but would clutter the dump script with\n> many more explicit schema references than I'd like to have.\n\nI'd prefer this method -- IMHO the readibility of dump scripts isn't\na top priority (or if it is, we're not doing very well in that regard\nany). I think dump scripts should be as verbose as is necessary to\nensure that they can't be misinterpreted.\n\n> Choice #2 is awfully ugly at first glance but might prove a good\n> idea in the long run.\n\nIt's certainly ugly, and I'm skeptical as to its long term benefits\n(I would think that the cleaner solution would be more maintainable\nin the long run). Am I missing something?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Mon, 13 May 2002 20:33:58 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Afterall, if you create a my_schema.pg_class table (for whatever\n> reason), and used my search path as my_schema, I'd expect my own to be\n> hit with my queries.\n\nIf you want that behavior, you can set the search path as\n\"my_schema, pg_catalog\". This does not solve pg_dump's DROP\nproblem, however, since an unqualified reference to say pg_class\nmight still be taken as an attempt to drop pg_catalog.pg_class.\nThere's no guarantee that my_schema.pg_class exists beforehand.\n\n> Is this a temporary thing to tide clients over for a release without\n> breaking too much?\n\nNo, it's a necessary thing to comply with the SQL standard.\nThe standard thinks all the predefined names are keywords and\nshould override user names. Therefore there *must* be a mode\nwherein pg_catalog is searched first (but is not the target for\ncreate operations, so path = \"pg_catalog, my_schema\" is not right\neither).\n\n> Anyway, question at hand. How about a modification of #1. If the\n> table begins in 'pg_' explicitly name it my_schema.pg_????.\n\nTables are not really the problem. Think about types, functions,\noperators. There's no handy rule to know which names conflict\n(or, even more to the point, might conflict a release or two from\nnow).\n\nI am currently thinking that explicitly setting path = my_schema,\npg_catalog might solve some of the corner cases for pg_dump ... but\nit does not fix the DROP problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 20:37:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths "
},
{
"msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> I'd prefer this method -- IMHO the readibility of dump scripts isn't\n> a top priority (or if it is, we're not doing very well in that regard\n> any). I think dump scripts should be as verbose as is necessary to\n> ensure that they can't be misinterpreted.\n\nPerhaps instead of \"readability\" I should have said \"editability\".\nThe thought that is lurking behind this is that you might want to\nretarget a dump script to be reloaded in some other schema. If the\ndump is cluttered with umpteen thousand copies of the schema name\nthat's going to be difficult.\n\nIdeally I'd like the dumped object definitions to contain *no* explicit\nreferences to their containing schema. This would allow, for example,\na pg_restore mode that loads the objects into a different schema.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 20:42:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths "
},
{
"msg_contents": "\n> No, it's a necessary thing to comply with the SQL standard.\n> The standard thinks all the predefined names are keywords and\n> should override user names. Therefore there *must* be a mode\n\nHmm.. I'm not fond of this part of the standard in this case -- though\nit's got to be there for good reason.\n\nI think I understand the problem better, which may have an easy\nsolution. Based on the assumption a DROP SCHEMA statement will also\nbe issued.\n\nIf pg_dump issues a DROP of all user objects at the top, as per user\nrequest, does it really need to issue a DROP of all the objects?\n\nIf you DROP the schema, all of the objects contained within the schema\nwill go with it. So technically you don't need to drop types, tables,\nfunctions which belong to a given schema. You just drop that schema.\n\nSo we're left with public and pg_catalog. How about using a qualified\nname in all cases of DROP, BUT only issuing drops other than drop\nschema schema for public and pg_catalog contents?\n\nPerhaps public could be treated like any other schema as well -- which\nreally only leaves pg_catalog or no problem since thats what will be\nhit by default.\n\n",
"msg_date": "Mon, 13 May 2002 21:08:53 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> ... Based on the assumption a DROP SCHEMA statement will also\n> be issued.\n\nDoesn't seem very workable for the public schema. I suspect pg_dump\nhas to special-case public anyway, to some extent, but this doesn't\nreally get us around the DROP problem for individual objects AFAICS.\n\nI agree that if we issue a drop for the schema there's no need to\ndrop the individual objects ... but we aren't going to be issuing\nany drops for public IMHO ... so we still need a solution that\nsupports dropping individual objects.\n\nIf we assume that schema retargeting is something that should be\ndone by a pg_restore option, then it'd probably be workable for\npg_restore to modify the qualified DROP commands as it issues them.\nThe main thing is to keep the explicit schema references out of the\nCREATE commands, and that part I think is doable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 21:24:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths "
},
{
"msg_contents": "> Doesn't seem very workable for the public schema. I suspect pg_dump\n> has to special-case public anyway, to some extent, but this doesn't\n> really get us around the DROP problem for individual objects AFAICS.\n\nMost people in the know will probably never use public due to\nportability issues between other databases. A dump without create\npublic won't work anywhere but Postgresql -- which is fine.\n\nSo fully qualifying public entries isn't so bad -- especially if it's\nonly public entries. After a few more releases (7.[56])public may be\nable to be treated as a standard user created schema where its\ncreation is prepended from dumps from before 7.3.\n\nHow did you intend on dealing with the case where a user removes\npublic or otherwise changes the permissions / ownership on it? Is the\nassumption going to be made that it always exists and is world\nwritable? Obviously restoring dumps from 7.2 or earlier will require\npublic in that state, but will 7.3 and later require it as well where\nthere are objects stored in it?\n\n",
"msg_date": "Mon, 13 May 2002 21:43:18 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> How did you intend on dealing with the case where a user removes\n> public or otherwise changes the permissions / ownership on it?\n\nI have that as one of my \"to think about\" items. The best idea I have\nat the moment is to assume it exists, but include GRANT/REVOKE commands\nto change its permissions if we see they're not at factory default\nsettings.\n\nIn the case where it's not there in the source database, should pg_dump\nhave special-case logic to detect that fact and issue a DROP in the\nscript? I'm leaning against, but an argument could be made that we\nshould do that.\n\n> Is the assumption going to be made that it always exists and is world\n> writable? Obviously restoring dumps from 7.2 or earlier will require\n> public in that state, but will 7.3 and later require it as well where\n> there are objects stored in it?\n\nThere are some philosophical issues here about what exactly pg_dump\nis supposed to do. Is it supposed to try to cause the target database\nto look exactly like the source, regardless of the initial state of the\ntarget? I don't think so; for example, we've never expected it to drop\nobjects that exist in the target but not in the source. I think it is\nreasonable to assume that loading a pg_dump script into a\nfactory-default empty database will reproduce the source, modulo\nnecessary version-to-version differences. If the target is not in a\nfactory-default condition then we probably ought to be thinking in terms\nof merging multiple sets of objects, and so gratuituous DROPs don't seem\nlike a good idea. But these considerations give conflicting answers as\nto whether to DROP PUBLIC if it's not there in the source.\n\nAs for whether we will deprecate PUBLIC a few releases out --- I can't\nsee that far ahead. 
I can imagine that if we do, there'll come a time\nwhen the release notes may say \"if you still have any objects in PUBLIC\nin your old database, you'll need to manually CREATE SCHEMA PUBLIC\nbefore reloading your dump script\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 22:33:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths "
},
{
"msg_contents": "On Tue, 2002-05-14 at 01:42, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > I'd prefer this method -- IMHO the readibility of dump scripts isn't\n> > a top priority (or if it is, we're not doing very well in that regard\n> > any). I think dump scripts should be as verbose as is necessary to\n> > ensure that they can't be misinterpreted.\n\nI agree with Neil on this.\n\n> Perhaps instead of \"readability\" I should have said \"editability\".\n> The thought that is lurking behind this is that you might want to\n> retarget a dump script to be reloaded in some other schema. If the\n> dump is cluttered with umpteen thousand copies of the schema name\n> that's going to be difficult.\n\nsed -e 's/ old_schema\\./ new_schema./g' \n\nI don't think you should allow the dump to be ambiguous for the sake of\nmaking rarely used actions slightly more convenient.\n\n> Ideally I'd like the dumped object definitions to contain *no* explicit\n> references to their containing schema. This would allow, for example,\n> a pg_restore mode that loads the objects into a different schema.\n\nProvide a command line option to pg_restore to do an automatic edit of\nthe schema name (-E old_schema,new_schema). People using \"psql <dump\"\nwould have to edit the dump.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Yea, though I walk through the valley of the shadow of\n death, I will fear no evil, for thou art with me; \n thy rod and thy staff they comfort me.\" Psalms 23:4",
"msg_date": "14 May 2002 06:58:41 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n>> Perhaps instead of \"readability\" I should have said \"editability\".\n>> The thought that is lurking behind this is that you might want to\n>> retarget a dump script to be reloaded in some other schema. If the\n>> dump is cluttered with umpteen thousand copies of the schema name\n>> that's going to be difficult.\n\n> sed -e 's/ old_schema\\./ new_schema./g'=20\n\n> I don't think you should allow the dump to be ambiguous for the sake of\n> making rarely used actions slightly more convenient.\n\nYou have no fear that that \"sed\" will substitute some places it\nshouldn't have? Also, what makes you think this'll be a \"rarely\nused\" feature? I'd guess that people load dumps every day into\ndatabases that have different names than the ones they dumped from.\nDon't see why the same is not likely to be true at the schema level.\n\nNo, I'm not going to \"allow the dump to be ambiguous\". But I'm hoping\nfor a solution that doesn't get in the way of retargeting, either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 May 2002 02:08:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths "
},
{
"msg_contents": "On Tue, 2002-05-14 at 07:08, Tom Lane wrote:\n> You have no fear that that \"sed\" will substitute some places it\n> shouldn't have? Also, what makes you think this'll be a \"rarely\n> used\" feature? I'd guess that people load dumps every day into\n> databases that have different names than the ones they dumped from.\n> Don't see why the same is not likely to be true at the schema level.\n\nA pg_restore option would presumably be more reliable than sed. \n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Yea, though I walk through the valley of the shadow of\n death, I will fear no evil, for thou art with me; \n thy rod and thy staff they comfort me.\" Psalms 23:4",
"msg_date": "14 May 2002 07:29:53 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths"
},
{
"msg_contents": "At 06:58 AM 5/14/02 +0100, Oliver Elphick wrote:\n> > retarget a dump script to be reloaded in some other schema. If the\n> > dump is cluttered with umpteen thousand copies of the schema name\n> > that's going to be difficult.\n>\n>sed -e 's/ old_schema\\./ new_schema./g'\n>\n>I don't think you should allow the dump to be ambiguous for the sake of\n>making rarely used actions slightly more convenient.\n\nErm, from what I see on this list, people regularly dump and reload, often \nfor performance reasons. There's also dev|backup<->production|live.\n\nSo I don't think dumping and reloading into another schema would be that \nrare nor should it be difficult.\n\nsed can screw up the data. I suppose we could do schema and data dumps \nseparately but :(. Would that actually work tho? Might come in handy one \nnot so fine day ;)...\n\nRegards,\nLink.\n\n",
"msg_date": "Tue, 14 May 2002 18:42:07 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump DROP commands and implicit search paths"
}
] |
[
{
"msg_contents": "In current sources (compiled without --enable-integer-datetimes) I get\n\nregression=# select interval(0) '1 day 23:44:55.667677' ;\n interval\n-----------------------\n 1 day 23:44:55.667677\n(1 row)\n\nI was expecting it to round off ... I think there's something wrong with\nthe arithmetic in AdjustIntervalForTypmod.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 18:11:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Interval precision busted?"
},
{
"msg_contents": "> I was expecting it to round off ... I think there's something wrong with\n> the arithmetic in AdjustIntervalForTypmod.\n\nYup. I've now updated the lookup tables used for the calculation for the\n--disable-integer-datetimes case. The --enable-integer-datetimes case\nwas already calculated correctly. Regression tests don't pass, but that\nis a locale problem.\n\nThanks for noticing the problem.\n\n - Thomas\n",
"msg_date": "Tue, 14 May 2002 06:39:02 -0700",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Interval precision busted?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI see the following for a view a_view;\n\n# select ctid, xmin, cmin, xmax, cmax, tableoid from a_view;\n ctid | xmin | cmin | xmax | cmax | tableoid\n------+------+------+------+------+----------\n | | | | |\n | | | | |\n | | | | |\n | | | | |\n | | | | |\n | | | | |\n | | | | |\n | | | | |\n(8 rows)\n\nAll system columns are null and seem to have no \nmeaning to me.\nIn addition it's also annoying to me e.g.\n\n# create view aview as select ctid, * from a_table;\nERROR: name of column \"ctid\" conflicts with an existing system column\n\nIf there's no objection I would remove system columns \nfrom views and allow the second example.\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Tue, 14 May 2002 09:34:59 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "What's the meaning of system column in views"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> # create view aview as select ctid, * from a_table;\n> ERROR: name of column \"ctid\" conflicts with an existing system column\n\n> If there's no objection I would remove system columns \n> from views and allow the second example.\n\nI seem to recall having looked at this awhile back and deciding that it\nwas harder than it was worth. But if you can do it I have no objection.\n(It might be easier now than it was then; I think there are fewer places\nnow that have hardwired knowledge about system columns than before.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 May 2002 22:39:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the meaning of system column in views "
}
] |
[
{
"msg_contents": "\n> EXPLAIN\n> tour=# explain analyze select * from tours where\n> ( operator_id in (2,3,4,5,7) and type_id = 4 ) or\n> ( operator_id = 8 and type_id = 3);\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using type_idx, type_idx, type_idx, type_idx, type_idx, type_idx on tours (cost=>\n> 0.00..12.25 rows=1 width=1091) (actual time=0.27..0.27 rows=0 loops=1)\n> Total runtime: 0.44 msec\n\nActually this plan looks very strange to me. One would expect it to only use \ntype_idx twice (in lack of a better index (type_id, operator_id)). \nFetch all rows with type_id=4 and then filter the result on operator_id in (...), \nthen do type_id=3 and filter operator_id=8.\nSeems there is room for another performance improvement here :-)\n\nAndreas\n",
"msg_date": "Tue, 14 May 2002 08:37:13 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: strange explain "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> tour=# explain analyze select * from tours where\n>> ( operator_id in (2,3,4,5,7) and type_id = 4 ) or\n>> ( operator_id = 8 and type_id = 3);\n\n> Actually this plan looks very strange to me. One would expect it to only use \n> type_idx twice (in lack of a better index (type_id, operator_id)). \n> Seems there is room for another performance improvement here :-)\n\nYeah, this demonstrates that reducing the quals to canonical form isn't\nalways the best thing to do.\n\nOr maybe we could just look for duplicate indexqual conditions at the\nend of the process?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 May 2002 09:39:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strange explain "
}
] |
[
{
"msg_contents": "Current anon cvs does not generate GNUmakefile and src/Makefile.global. \nThe same applies with both \"make distclean\" and a fresh checkout:\n\n olly@linda:.../pgsql$ ./configure --enable-locale --enable-recode\n --enable-multibyte --enable-nls --with-pgport=9631 --with-CXX\n --with-perl --with-python --with-tcl --enable-odbc--with-unixodbc\n --with-openssl --with-pam --enable-syslog --enable-debug\n --enable-cassert --enable-depend --with-tkconfig=/usr/lib/tk8.3\n --with-tclconfig=/usr/lib/tcl8.3 --with-includes=/usr/include/tcl8.3\n --no-create --no-recursion\n ...\n checking for sgmlspl... sgmlspl\n configure: creating ./config.status\n olly@linda:.../pgsql$ make\n You need to run the 'configure' program first. See the file\n 'INSTALL' for installation instructions.\n make: *** [all] Error 1\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Yea, though I walk through the valley of the shadow of\n death, I will fear no evil, for thou art with me; \n thy rod and thy staff they comfort me.\" Psalms 23:4",
"msg_date": "14 May 2002 13:06:53 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Current anon cvs does not generate GNUmakefile"
},
{
"msg_contents": "On Tue, 2002-05-14 at 13:06, Oliver Elphick wrote:\n> Current anon cvs does not generate GNUmakefile and src/Makefile.global. \n> The same applies with both \"make distclean\" and a fresh checkout:\n> \n> olly@linda:.../pgsql$ ./configure --enable-locale --enable-recode\n> --enable-multibyte --enable-nls --with-pgport=9631 --with-CXX\n> --with-perl --with-python --with-tcl --enable-odbc--with-unixodbc\n> --with-openssl --with-pam --enable-syslog --enable-debug\n> --enable-cassert --enable-depend --with-tkconfig=/usr/lib/tk8.3\n> --with-tclconfig=/usr/lib/tcl8.3 --with-includes=/usr/include/tcl8.3\n> --no-create --no-recursion\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nOh cancel that. I copied this out of config.status and didn't notice it\nhad added these options.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Yea, though I walk through the valley of the shadow of\n death, I will fear no evil, for thou art with me; \n thy rod and thy staff they comfort me.\" Psalms 23:4",
"msg_date": "14 May 2002 13:49:00 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Current anon cvs does not generate GNUmakefile"
}
] |
[
{
"msg_contents": "I've been working on merging the last few variable.c variables into GUC.\nHere are some notes on what I'm planning to modify in GUC to make that\npossible.\n\n1. The variable.c routines have quite a few specialized error reports\n(eg, rejecting intervals larger than months in SET TIME ZONE). While\nwe could drop all of 'em and just fall back to GUC's generic \"invalid\nvalue for this variable\" message, that seems fairly user-unfriendly.\nI am also dissatisfied with the separation between parse_hook and\nassign_hook for a GUC variable: in every case, that is resulting in\nhaving to do the parsing of the variable twice, leading to cluttered\nor duplicate code. What I'm thinking of doing is getting rid of the\nseparate parse_hook and instead defining assign_hooks to have the\nsignature\n\n\tbool assign_hook (const char *newvalue, bool interactive)\n\nIf interactive is TRUE it's okay to throw an elog rather than returning;\nif interactive is FALSE (ie, we're reading postgresql.conf) then do not\nthrow an elog, just return false on bad input. (Possibly the routines\ncould emit helpful messages via elog(LOG) instead of elog(ERROR) when\ninteractive is false.) Returning TRUE indicates successful assignment,\nreturning FALSE indicates unsuccessful; in the interactive case GUC will\nthrow a generic elog message on FALSE return.\n\nAFAICT the main reason for separating parse_hook and assign_hook in\nthe current GUC code is to allow failure to be detected before storing\nthe new value into the state array, rather than after --- but given\nthe changes we intend to make to allow rollback of SET after error,\nthis is no longer essential.\n\n2. As previously suggested, I will add a show_hook (signature\nchar *show_hook(void) seems sufficient) to allow variable-specific\ncode to override computation of what is to be displayed for SHOW.\n\n3. 
We also appear to need a reset_hook to override the default\naction of just calling the assign_hook with the stored default value.\nHere I'm envisioning void reset_hook(const char *newvalue, bool isAll)\nwhere the stored default value is passed (in case needed) and isAll\ndistinguishes resetting the individual variable vs RESET ALL. Per\nprevious discussion, some variables may want to ignore RESET ALL\nand this gives them an opportunity to do it. An alternative possibility\nthat may prove simpler is to still use assign_hook for resetting, but to\npass it an additional \"context\" parameter to let it know whether it's\nbeing called for SET, RESET, or RESET ALL.\n\n4. Per previous discussion, I have written code that \"flattens\" the\nnode-list output of gram.y into a string. Experimenting with this,\nI find that those variables that actually want to read their input\nvalue as a list (eg, search_path, datestyle) would like the list\nelements to be quoted when not simple identifiers. But if the\nflattening code always does this, then lots of other variable-parsing\ncode has to be prepared to cope with quotes around its input. I am\nthinking of adding a flag value to the GUC arrays that tells the\nflattening code whether to quote list elements or not --- we'd need\nto look up the target variable before flattening the input, but that\nseems no big problem. When the flag is not set, we could probably\nreject multiple elements in the input list out-of-hand. (The current\nCVS tip has this behavior hard-wired for all GUC variables, but that\nwill not do for search_path.)\n\n\nThomas had suggested that we think about further extensibility of GUC,\nsuch as letting loadable modules add variables to the set of known\nvariables. That seems like a good idea to me but I've not got time to\npursue it right now. 
We'd have to settle some interesting questions\nfirst, anyway (like do we want loadable modules to get loaded into the\npostmaster, and if not how do we deal with postgresql.conf entries\nintended for loadable modules).\n\nComments?\n\n\t\t\tregards, tom lane\n\nPS: once I get done with this, I will get started on the SET SESSION/\nSET LOCAL rollback behavior that I think we agreed to in the last thread.\n",
"msg_date": "Tue, 14 May 2002 13:04:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Final(?) proposal on GUC hook extensions"
}
] |
[
{
"msg_contents": "Mac OSX, postgresql 7.2.1\n\nwhat's the reasoning behind not being able to cast a varchar as \ninteger? this seems very weird to me:\n\nLEDEV=# create table test (foo as varchar(5), bar as text);\nERROR: parser: parse error at or near \"as\"\nLEDEV=# create table test (foo varchar(5), bar text);\nCREATE\nLEDEV=# insert into test (foo, bar) values ('123', '123');\nINSERT 409490 1\nLEDEV=# select * from test;\n foo | bar\n-----+-----\n 123 | 123\n(1 row)\n\nLEDEV=# select cast(foo as integer) from test;\nERROR: Cannot cast type 'character varying' to 'integer'\nLEDEV=# select cast(bar as integer) from test;\n bar\n-----\n 123\n(1 row)\n\n",
"msg_date": "Tue, 14 May 2002 13:56:33 -0500",
"msg_from": "Scott Royston <scroyston@mac.com>",
"msg_from_op": true,
"msg_subject": "can't cast varchar as integer?"
},
{
"msg_contents": "> what's the reasoning behind not being able to cast a varchar as \n> integer? this seems very weird to me:\n> \n> LEDEV=# select cast(foo as integer) from test;\n> ERROR: Cannot cast type 'character varying' to 'integer'\n> LEDEV=# select cast(bar as integer) from test;\n> bar\n> -----\n> 123\n> (1 row)\n\nInteresting. You can have an intermediate to-text cast:\n\nselect cast ( cast ( cast ('123' as varchar) as text) as integer);\n",
"msg_date": "Tue, 14 May 2002 17:20:20 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: can't cast varchar as integer?"
},
{
"msg_contents": "On Tue, 2002-05-14 at 13:56, Scott Royston wrote:\n> Mac OSX, postgresql 7.2.1\n> \n> what's the reasoning behind not being able to cast a varchar as \n> integer? this seems very weird to me:\n> \n> LEDEV=# create table test (foo varchar(5), bar text);\n> LEDEV=# insert into test (foo, bar) values ('123', '123');\n> LEDEV=# select cast(foo as integer) from test;\n> ERROR: Cannot cast type 'character varying' to 'integer'\n> LEDEV=# select cast(bar as integer) from test;\n> bar\n> -----\n> 123\n> (1 row)\n\n\nTry this: \n\nscratch=# select foo::text::integer from test;\n foo \n-----\n 123\n(1 row)\n\n\nOr: \n\nscratch=# select int4(foo) from test;\n int4 \n------\n 123\n(1 row)\n\n\n\n--\nDavid Stanaway",
"msg_date": "14 May 2002 16:47:39 -0500",
"msg_from": "David Stanaway <david@stanaway.net>",
"msg_from_op": false,
"msg_subject": "Re: can't cast varchar as integer?"
},
{
"msg_contents": "Thanks for the replies so far. I had been using cast(foo::text as \ninteger).\n\nTo clarify my question, does anyone know *why* I can't cast from varchar \nto integer? Why should I have to cast to text first?\n\nthanks\n\n\nOn Tuesday, May 14, 2002, at 04:47 PM, David Stanaway wrote:\n\n> On Tue, 2002-05-14 at 13:56, Scott Royston wrote:\n>> Mac OSX, postgresql 7.2.1\n>>\n>> what's the reasoning behind not being able to cast a varchar as\n>> integer? this seems very weird to me:\n>>\n>> LEDEV=# create table test (foo varchar(5), bar text);\n>> LEDEV=# insert into test (foo, bar) values ('123', '123');\n>> LEDEV=# select cast(foo as integer) from test;\n>> ERROR: Cannot cast type 'character varying' to 'integer'\n>> LEDEV=# select cast(bar as integer) from test;\n>> bar\n>> -----\n>> 123\n>> (1 row)\n>\n>\n> Try this:\n>\n> scratch=# select foo::text::integer from test;\n> foo\n> -----\n> 123\n> (1 row)\n>\n>\n> Or:\n>\n> scratch=# select int4(foo) from test;\n> int4\n> ------\n> 123\n> (1 row)\n>\n>\n>\n> --\n> David Stanaway\n>\n\n",
"msg_date": "Tue, 14 May 2002 18:02:41 -0500",
"msg_from": "Scott Royston <scroyston@mac.com>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] can't cast varchar as integer?"
},
{
"msg_contents": "David Stanaway <david@stanaway.net> writes:\n>> LEDEV=3D# select cast(foo as integer) from test;\n>> ERROR: Cannot cast type 'character varying' to 'integer'\n\n> scratch=3D# select foo::text::integer from test;\n> [works]\n\n> scratch=3D# select int4(foo) from test;\n> [works]\n\nFor reasons that I don't entirely recall at the moment (but they seemed\ngood to the pghackers list at the time), cast notations only work if\nthere is a cast function *exactly* matching the requested cast.\nOn the other hand, the functional form is laxer because there's an\nallowed step of implicit coercion before the function call.\n\nIn the case at hand, there's a text->int4 cast function (look in\npg_proc, you'll see int4(text)) but there's no int4(varchar) function.\nAlso, varchar can be cast to text implicitly --- this is actually\na \"binary equivalent\" cast requiring no run-time effort. So\n\tselect foo::text::integer from test;\nworks: it's a binary-equivalent cast from varchar to text, followed\nby application of int4(text). And\n\t select int4(foo) from test;\nworks because the same function is found and implicit coercion of\nits argument to text succeeds. But\n\tselect cast(foo as integer) from test;\ndoesn't work because there's no declared function int4(varchar).\n\nThere's probably not any good reason why there's not int4(varchar),\njust that no one got around to making one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 May 2002 19:29:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: can't cast varchar as integer? "
}
] |
[
{
"msg_contents": "We have one patch for contrib/rtree_gist ( thanks Chris Hodgson for\nspotting bug and test suite ). Should we submit patch for 7.2.2 and\n7.3 ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Tue, 14 May 2002 22:27:51 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "7.2.2 ?"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> We have one patch for contrib/rtree_gist ( thanks Chris Hodgson for\n> spotting bug and test suite ). Should we submit patch for 7.2.2 and\n> 7.3 ?\n\nI don't know whether we will bother with a 7.2.2 release --- but if it's\na high-confidence bug fix, sure, might as well patch it in the REL7_2\nbranch as well as current.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 May 2002 17:50:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2.2 ? "
},
{
"msg_contents": "On Tue, 14 May 2002, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > We have one patch for contrib/rtree_gist ( thanks Chris Hodgson for\n> > spotting bug and test suite ). Should we submit patch for 7.2.2 and\n> > 7.3 ?\n>\n> I don't know whether we will bother with a 7.2.2 release --- but if it's\n> a high-confidence bug fix, sure, might as well patch it in the REL7_2\n> branch as well as current.\n\nWe could do up a 7.2.2 ... Tatsuo just made some changes to it also, and\nits not like it takes alot to make a new minor point release, not like\ndoing a major one :)\n\n\n",
"msg_date": "Wed, 15 May 2002 00:46:55 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2.2 ? "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> On Tue, 14 May 2002, Tom Lane wrote:\n>> I don't know whether we will bother with a 7.2.2 release ---\n\n> We could do up a 7.2.2 ...\n\nIf ya wanna do one, no objection here. But let's see if we can't get\nsome resolution of that command-tags-and-rules business first.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 May 2002 00:11:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2.2 ? "
},
{
"msg_contents": "On Wed, 15 May 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > On Tue, 14 May 2002, Tom Lane wrote:\n> >> I don't know whether we will bother with a 7.2.2 release ---\n>\n> > We could do up a 7.2.2 ...\n>\n> If ya wanna do one, no objection here. But let's see if we can't get\n> some resolution of that command-tags-and-rules business first.\n\nWell, how about we scheduale a quick v7.2.2 for June 1st, and give the\nPgAccess folk a chance to submit some updated patches if they want? From\nthat whole thread, it sounded like they were sitting on some stuff that\nmight be appropriate for v7.2 ... ?\n\nI say June 1st namely because any day now, the baby is going to be born,\nwhich means doing a release won't be high on my list of priorities while\nwe get it settled ...\n\n",
"msg_date": "Wed, 15 May 2002 01:16:27 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2.2 ? "
}
] |
[
{
"msg_contents": "I really like array iterators, and am interested in simplifying them\nsomewhat. Currently they required too many operator entries and\nfunctions to be truely useful in the base.\n\nPlease fill in holes ;)\n\nOk, first step. The ability to create an operator that will work on a\nvast number of data types without having to specify each one. This\nrequires creation of functions which can accept type unknown, and more\nimportantly an array of unknown.\n\nI don't think much (any?) modifcation will be required to arrayin() /\narrayout() for this as unknown is basically a varchar.\n\nStep 2 is to teach oper_select_candidate() about functions which\naccept unknown and _unknown that can be used as a last resort. We\nwill allow coercions to type unknown and _unknown from any type to\naccomplish this but only in this specific case. Why unknown and not\ntext? We simply don't want to allow this special method for\neverything. Just a few special cases where comparing Datums is close\nenough. Array iterators and perhaps a generic '=' operator where one\ndoesn't exist with a specific type tied to it that we can use instead.\n\nStep 3 is to have the iter operators use datumIsEqual(). The special\ncases that datumIsEqual doesn't work will get it's own set of array\niterator functions. A list of these needs to be compiled, although it\nshould be a short one.\n\n\nDoesn't seem like a great solution because I don't like coercing\nstuff back to unknown. Making a special generic type or using text in\nthis case isn't much better as it's still arbirarily selecting when to\ncoerce to it or not.\n\nPerhaps all thats needed is an interator function set per type group\n(ints, chars, bool, ...)\n\nIt'll be a while before I do anything -- but perhaps for 7.4.\n\n--\nRod\n\n",
"msg_date": "Tue, 14 May 2002 21:15:31 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Array iterators"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Step 2 is to teach oper_select_candidate() about functions which\n> accept unknown and _unknown that can be used as a last resort. We\n> will allow coercions to type unknown and _unknown from any type to\n> accomplish this but only in this specific case. Why unknown and not\n> text?\n\nRather than unknown, how about some new special pseudo-type?\n\nWe've already shot ourselves in the feet enough times by overloading\ntype OID 0 to mean so many slightly-different things. I have ranted\non this topic before, see the archives; but even past midnight I can\nname several distinct meanings of \"type 0\" without stopping for breath:\nplain old invalid placeholder, as in pg_proc.proargtypes entries beyond\nthe pronargs'th one; C string for I/O functions; takes/returns an\ninternal type, as in the various cost estimation functions for the\noptimizer and the various index access method functions (note there are\nseveral *different* internal types involved there), not to mention\ntrigger functions which really really ought to have a distinguishable\nsignature; \"returns void\"; \"takes any type at all\" (COUNT(*)); \"takes\nany array type\" (array_dims); ... okay, I'm out of breath.\n\nMeanwhile in the type resolver's domain we have \"unknown\" literals,\nand we've speculated many times about inventing \"unknown numeric\"\nliterals to clean up the problems with assignment of types to numeric\nconstants. Let's *not* make the mistake of overloading \"unknown\" to\nhave some array-specific meaning.\n\nI believe that the best ultimate resolution of these problems will\ninvolve creating a spectrum of \"pseudo-types\" with different, sharply\ndefined meanings. Breaking up \"opaque/type 0\" is going to cause a\nlot of backward-compatibility pain, so I have not been in a big hurry\nto do it --- but let's get it right the first time when dealing with\nshades of \"unknown\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 May 2002 00:48:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Array iterators "
},
{
"msg_contents": "Just a small remark, type name \"any\" would be more meaningfull than\n\"unknown\". Also it's widely used (CORBA for example) and may enter in the\nSTL someday.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, May 15, 2002 2:48 PM\nSubject: Re: [HACKERS] Array iterators\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Step 2 is to teach oper_select_candidate() about functions which\n> > accept unknown and _unknown that can be used as a last resort. We\n> > will allow coercions to type unknown and _unknown from any type to\n> > accomplish this but only in this specific case. Why unknown and not\n> > text?\n>\n> Rather than unknown, how about some new special pseudo-type?\n>\n> We've already shot ourselves in the feet enough times by overloading\n> type OID 0 to mean so many slightly-different things. I have ranted\n> on this topic before, see the archives; but even past midnight I can\n> name several distinct meanings of \"type 0\" without stopping for breath:\n> plain old invalid placeholder, as in pg_proc.proargtypes entries beyond\n> the pronargs'th one; C string for I/O functions; takes/returns an\n> internal type, as in the various cost estimation functions for the\n> optimizer and the various index access method functions (note there are\n> several *different* internal types involved there), not to mention\n> trigger functions which really really ought to have a distinguishable\n> signature; \"returns void\"; \"takes any type at all\" (COUNT(*)); \"takes\n> any array type\" (array_dims); ... okay, I'm out of breath.\n>\n> Meanwhile in the type resolver's domain we have \"unknown\" literals,\n> and we've speculated many times about inventing \"unknown numeric\"\n> literals to clean up the problems with assignment of types to numeric\n> constants. 
Let's *not* make the mistake of overloading \"unknown\" to\n> have some array-specific meaning.\n>\n> I believe that the best ultimate resolution of these problems will\n> involve creating a spectrum of \"pseudo-types\" with different, sharply\n> defined meanings. Breaking up \"opaque/type 0\" is going to cause a\n> lot of backward-compatibility pain, so I have not been in a big hurry\n> to do it --- but let's get it right the first time when dealing with\n> shades of \"unknown\".\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n\n\n",
"msg_date": "Wed, 15 May 2002 17:10:39 +1000",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Array iterators"
},
{
"msg_contents": "Sounds great to me. Mostly why I made the remark that I wasn't happy\nwith the solution due to using type unknown.\n\nany and _any it is.\n--\nRod\n----- Original Message -----\nFrom: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\nTo: \"Rod Taylor\" <rbt@zort.ca>; \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, May 15, 2002 3:10 AM\nSubject: Re: [HACKERS] Array iterators\n\n\n> Just a small remark, type name \"any\" would be more meaningfull than\n> \"unknown\". Also it's widely used (CORBA for example) and may enter\nin the\n> STL someday.\n>\n> ----- Original Message -----\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> To: \"Rod Taylor\" <rbt@zort.ca>\n> Cc: \"Hackers List\" <pgsql-hackers@postgresql.org>\n> Sent: Wednesday, May 15, 2002 2:48 PM\n> Subject: Re: [HACKERS] Array iterators\n\n> > I believe that the best ultimate resolution of these problems will\n> > involve creating a spectrum of \"pseudo-types\" with different,\nsharply\n> > defined meanings. Breaking up \"opaque/type 0\" is going to cause a\n> > lot of backward-compatibility pain, so I have not been in a big\nhurry\n> > to do it --- but let's get it right the first time when dealing\nwith\n> > shades of \"unknown\".\n\n\n",
"msg_date": "Wed, 15 May 2002 07:55:03 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Array iterators"
}
] |
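The last-resort resolution step Rod describes (try exact matches first, fall back to a catch-all pseudo-type only if nothing else fits) can be sketched roughly as below. This is a toy illustration only — the names and the real `oper_select_candidate()` logic in PostgreSQL are considerably more involved:

```python
# Toy sketch of last-resort candidate selection: prefer candidates whose
# argument types match exactly; fall back to an "any" pseudo-type only if
# no exact match exists. Names here are illustrative, not PostgreSQL's.
ANY = "any"  # catch-all pseudo-type, kept distinct from "unknown" literals

def select_candidate(arg_types, candidates):
    """candidates: list of (name, [param_types]); return best match or None."""
    exact = [c for c in candidates if list(c[1]) == list(arg_types)]
    if exact:
        return exact[0]
    # Last resort: a candidate may declare ANY, which accepts every type.
    loose = [c for c in candidates
             if len(c[1]) == len(arg_types)
             and all(p == ANY or p == a for p, a in zip(c[1], arg_types))]
    return loose[0] if loose else None

candidates = [("array_dims_int4", ["_int4"]),
              ("array_dims_any", [ANY])]
print(select_candidate(["_int4"], candidates)[0])  # exact match wins
print(select_candidate(["_text"], candidates)[0])  # falls back to ANY
```

The point of the thread is precisely that this fallback should be keyed to a dedicated pseudo-type like `any`, not overloaded onto `unknown` or type OID 0.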
[
{
"msg_contents": "Hi All,\n\nI can't see that there's any way to reset the stats collector without HUPing\nthe postmaster? Is there? Should there be?\n\nChris\n\n",
"msg_date": "Wed, 15 May 2002 10:25:24 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "resetting stats on the fly"
}
] |
[
{
"msg_contents": "The current implementation of the Kerberos 5\nauthentication in backend/libpq/auth.c truncates the\nprincipal after the first '/' or, failing that, after\nthe first '@', assuming the result to be the database\nusername. This implicitly allows crossrealm\nauthentication, which is not good in many instances.\nEven more seriously, it discards parts following any\n'/', which is definitely very bad in many instances.\n\nThis is not satisfactory for some (I would think most)\napplications. A solution to this would be mapping\nKerberos principals to usernames in the database. (As\ne.g. ~username/.k5login determines which principals are\nauthorized to log in as username.) Ideally this mapping\ntable should be a system table in the database (and\nnot a specialized file like the current implementation\nof pg_ident.conf). Is this a stupid idea? Any\ncomments?\n\nI do have a few questions regarding an\nimplementation of this.\n\nIs there any existing way of making queries from\npostmaster (other than setting up a client connection\nfrom it)?\n\nIs there a reason pg_ident.conf and pg_hba.conf are\nfiles rather than tables?\n\nIs there any reason not to do authentication of both\nthe client and the server?\n\nGrateful for answers and comments\nDaniel\n\n\n__________________________________________________\nDo You Yahoo!?\nEverything you'll ever need on one web page\nfrom News and Sport to Email and Music Charts\nhttp://uk.my.yahoo.com\n",
"msg_date": "Wed, 15 May 2002 13:43:32 +0100 (BST)",
"msg_from": "=?iso-8859-1?q?Daniel?= <dah00002000@yahoo.co.uk>",
"msg_from_op": true,
"msg_subject": "Kerberos principal to dbuser mapping"
},
{
"msg_contents": "=?iso-8859-1?q?Daniel?= <dah00002000@yahoo.co.uk> writes:\n> The current implementation of the kerberos 5\n> authentification in backend/libpq/auth.c truncates the\n> principal after the first '/' or failing that, after\n> the first '@', assuming the result to be the database\n> username. This implicitly allows crossrealm\n> autentification which is not good in many instances.\n\nI agree, that's probably not a good idea.\n\n> This is not satisfactory for some (I would think most)\n> applications. A solution to this would be mapping\n> kerberos principals to usernames in the database. (As\n> e.g ~username/.k5login determines which principals are\n> authorized to login as username.) Idealy this mapping\n> table should be a system table in the database (and\n> not a specialized file like the current implementation\n> of pg_ident.conf). Is this a stupid idea?\n\nAfraid so. The postmaster cannot use system tables because it's\nnot really connected to the database.\n\nYou could possibly add a column to pg_shadow that gets dumped into\nthe \"flat password file\" for use by the postmaster.\n\nOffhand, though, that seems like overkill. Why not just add a\npostgresql.conf parameter for realm name, and if it's set, only accept\nKerberos principal names from that realm? Or even simpler, a boolean\nthat says to accept only names from the same realm as our own ticket?\nThese would be much simpler to implement and probably solve 99.44% of\nthe problem. 
In the boolean form, I'd even favor setting it to \"on\"\nby default, so that the default configuration becomes more secure.\nWith anything else, security can only be improved if the admin takes\nspecial action to insert the correct information.\n\n> Is there any existing way of making queries from\n> postmaster (other than setting up a client connection\n> from it)?\n\nThere is no existing way, and none will be added in the future either.\nThere are good system-reliability reasons for keeping the postmaster\naway from the database.\n\n> Is there any reason not doing authentification of both\n> the client and the server?\n\nSay again?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 May 2002 10:13:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos principal to dbuser mapping "
},
{
"msg_contents": " --- Tom Lane <tgl@sss.pgh.pa.us> wrote: > > \n> You could possibly add a column to pg_shadow that\n> gets dumped into\n> the \"flat password file\" for use by the postmaster.\n\nYes, I had the thought of creating a trigger on insert\ninto pg_shadow, which would issue a COPY TO statement\nand a custom C function to send the signals to\npostmaster. But that is inelegant to say the least.\n \n> Offhand, though, that seems like overkill. Why not\n> just add a\n> postgresql.conf parameter for realm name, and if\n> it's set, only accept\n> Kerberos principal names from that realm? Or even\n> simpler, a boolean\n> that says to accept only names from the same realm\n> as our own ticket?\n> These would be much simpler to implement and\n> probably solve 99.44% of\n> the problem.\n\nIt would only solve the crossrealm authentication\nproblem. I would call that the minor problem of the\ntwo. The more important one is that the current code\nthrows away any components following a '/'. That means\nthat two distinct principals, say webserver/www1@b.com\nand webserver/webmail@b.com, are regarded as equivalent\nfor database authentication purposes. That is not\ncorrect. Since the usernames in Postgres have\nrestrictions regarding valid characters, it is not\nadvisable (though probably possible) to have\nusernames matching the entire principal.\n\nA table, matching principal against username, would\nsolve both problems. (While still allowing crossrealm\nauthentication on a per-user basis.)\n\nThis is actually what pg_an_to_ln should do, but the\nkerberos implementation is not suitable in this\ncontext regardless of whether it \"punts\" (what is that?) or\nnot. This is due to postgresql maintaining its own\nuser database, separate from the local machine's. One\ncan't expect that the kerberos implementation should be\nable to perform this translation for postgres. 
The\ndatabase must do that itself and, preferably, in a\ncorrect manner.\n\n> With anything else, security can only be improved if\n> the admin takes\n> special action to insert the correct information.\n\nI do not understand your last statement in this\ncontext.\n\n> > Is there any existing way of making queries from\n> > postmaster (other than setting up a client\n> connection\n> > from it)?\n> \n> There is no existing way, and none will be added in\n> the future either.\n> There are good system-reliability reasons for\n> keeping the postmaster\n> away from the database.\n\nOk, but it seems wasteful to build primitive database\nfunctionality in parallel to the real database.\n\nThe way I see it there is one main problem. We have a\nkrb principal with a structure we need not assume we\nknow anything about. We should certainly not then\ndiscard bits and pieces of it. In order not to lose\nfunctionality we would like several principals to be\nauthorized to use a given username and several\nusernames to be accessible by a given principal. The\nway to solve this is to use a translation method from\nprincipal to database users, i.e. a table.\nAs the number of users of the database grows, using a\npreprocessed flat file to manage this becomes more and\nmore of a problem. At that point one usually begins to\nlook for the functionality of a database, and one is\ncertainly close at hand :).\n \n> > Is there any reason not doing authentification of\n> both\n> > the client and the server?\n> \n> Say again?\n\nSorry, I was jumping between subjects.\nWhy is AP_OPTS_MUTUAL_REQUIRED not simply specified in\nthe krb5_recvauth call in auth.c and in the\nkrb5_sendauth in fe-auth.c? 
This would not only assure\nthe server that it is talking to the right client but\nalso assure the client that it is talking to the right\nserver.\n\nRegards\nDaniel\n",
"msg_date": "Wed, 15 May 2002 19:38:20 +0100 (BST)",
"msg_from": "=?iso-8859-1?q?Daniel?= <dah00002000@yahoo.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Kerberos principal to dbuser mapping "
},
{
"msg_contents": "> > > Is there any existing way of making queries from\n> > > postmaster (other than setting up a client\n> > > connection from it)?\n> > \n> > There is no existing way, and none will be added in\n> > the future either.\n> > There are good system-reliability reasons for\n> > keeping the postmaster\n> > away from the database.\n> \n> Ok, but it seems wasteful to build primitive database\n> functionality in parallell to the real database.\n\nThis issue affects mutual SSL authentication and PKIX in \naddition to Kerberos. See a followup post for details....\nBottom line: we should identify and document a canonical\nsolution.\n\nP.S., in the case of PKIX, there's a well-defined interface\nand there's no conceptual problem with maintaining the database\nvia the regular client interface. Bootstrapping the system may\nbe another matter.\n\nBear\n",
"msg_date": "Thu, 16 May 2002 09:09:11 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos principal to dbuser mapping"
},
{
"msg_contents": "Daniel writes:\n\n> way to solve this is to use a translation method from\n> principal to database users, i. e. a table.\n> As the number of users of the database grows, using a\n> preprocessed flat file to manage this becomes more and\n> more of a problem. At that point one usually begins to\n> look for the functionality of a database, and one is\n> certainly close at hand :).\n\nThe server cannot access the database before you're authenticated to do\nso, plus if the authentication setup is contained in the database and you\nmess it up, how do you get back in? These are the two reasons why the\ninformation is kept in flat files. One might come up with ways to edit\nthese files from within the SQL environment, which indeed is a frequently\nrequested feature, but for solving the problem at hand, namely the\nKerberos principal to PostgreSQL user mapping, use a flat file. You can\nprobably use most of the ident.conf code.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 16 May 2002 19:21:56 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos principal to dbuser mapping "
}
] |
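The truncation problem Daniel objects to, and the mapping-table alternative, are easy to illustrate. The sketch below is hypothetical Python, not the actual auth.c logic, and the map entries are made-up examples:

```python
def truncate_principal(principal):
    """Mimic the behavior described for auth.c: keep everything before the
    first '/', or failing that, before the first '@'."""
    for sep in ("/", "@"):
        if sep in principal:
            return principal.split(sep, 1)[0]
    return principal

# Two distinct service principals collapse to the same database user:
print(truncate_principal("webserver/www1@b.com"))     # webserver
print(truncate_principal("webserver/webmail@b.com"))  # webserver

# An explicit principal-to-user map keeps them distinct, and confines
# cross-realm access to principals that were deliberately listed:
PRINCIPAL_MAP = {
    "webserver/www1@b.com": "www_user",
    "webserver/webmail@b.com": "mail_user",
}

def map_principal(principal):
    # None means: no mapping, reject the connection.
    return PRINCIPAL_MAP.get(principal)
```

Whether the map lives in a flat file (as Peter suggests, following pg_ident.conf) or a system table is exactly the open question in the thread; the lookup itself is the same either way.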
[
{
"msg_contents": "Considering this schema:\n\n-- Table: cnx_ds_sis_bill_detl_tb\nCREATE TABLE \"cnx_ds_sis_bill_detl_tb\" (\n \"extr_stu_id\" char(10), \n \"term_cyt\" char(5), \n \"subcode\" char(5), \n \"tran_seq\" int2, \n \"crc\" int8, \n CONSTRAINT \"pk_cnx_ds_sis_bill_detl_tb\" UNIQUE (\"extr_stu_id\",\n\"term_cyt\", \"subcode\", \"tran_seq\")\n);\n\n-- Index: pk_cnx_ds_sis_bill_detl_tb\nCREATE UNIQUE INDEX pk_cnx_ds_sis_bill_detl_tb ON\ncnx_ds_sis_bill_detl_tb USING btree (extr_stu_id bpchar_ops, term_cyt\nbpchar_ops, subcode bpchar_ops, tran_seq int2_ops);\n\nHere is a PSQL session, where I did some simple queries:\n\nconnxdatasync=# select count(*) from cnx_ds_sis_bill_detl_tb;\n count\n---------\n 1607823\n(1 row)\n\nconnxdatasync=# select min(extr_stu_id) from cnx_ds_sis_bill_detl_tb;\n min\n------------\n 000251681\n(1 row)\n\nconnxdatasync=# select max(extr_stu_id) from cnx_ds_sis_bill_detl_tb;\n max\n------------\n 999999999\n(1 row)\n\n\nThe select(min) and select(max) took as long as the table scan to find\nthe count. It seems logical if a btree type index is available (such\nas pk_cnx_ds_sis_bill_detl_tb) where the most significant bit of the\nindex is the column requested, it should be little more than a seek\nfirst or seek last in the btree. Obviously, it won't work with a hashed\nindex (which is neither here nor there).\n",
"msg_date": "Wed, 15 May 2002 11:23:26 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "A fairly obvious optimization?"
},
{
"msg_contents": "On Wed, 2002-05-15 at 23:23, Dann Corbit wrote:\n> The select(min) and select(max) took as long as the table scan to find\n> the count. It seems logical if a btree type index is available (such\n> as pk_cnx_ds_sis_bill_detl_tb) where the most significant bit of the\n> index is the column requested, it should be little more than a seek\n> first or seek last in the btree. Obviously, it won't work with a hashed\n> index (which is neither here nor there).\n\nThe problem is postgres' extensibility - there is no hard-wired\nconnection between max() and b-tree indexes - you can define an\naggregate max() that returns something completely different, say the\nlongest string length or the \"best\" optimisation technique, which may or\nmay not be able to use an index.\n\n------------\nHannu\n\n\n",
"msg_date": "16 May 2002 11:21:40 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: A fairly obvious optimization?"
}
] |
[
{
"msg_contents": "\n> The select(min) and select(max) took as long as the table scan to find\n> the count. It seems logical if a btree type index is available (such\n> as pk_cnx_ds_sis_bill_detl_tb) where the most significant bit of the\n> index is the column requested, it should be little more than a seek\n> first or seek last in the btree. Obviously, it won't work with a hashed\n> index (which is neither here nor there).\n\nIn the meantime you can use:\nselect extr_stu_id from cnx_ds_sis_bill_detl_tb order by 1 desc limit 1; -- max\nselect extr_stu_id from cnx_ds_sis_bill_detl_tb order by 1 asc limit 1; -- min\n\nI guess that is the reason why nobody felt really motivated to implement\nthis optimization. Besides these statements are more powerful, since they can fetch \nother columns from this min/max row. The down side is, that this syntax varies across\ndb vendors, but most (all?) have a corresponding feature nowadays.\n\nselect first 1\nselect top 1 ...\n\nThis is actually becoming a FAQ :-)\n\nAndreas\n",
"msg_date": "Thu, 16 May 2002 10:07:01 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: A fairly obvious optimization?"
},
{
"msg_contents": "\nFAQ updated in section 4.8: My queries are slow or don't make use of the\nindexes. Why?\n\n is returned. In fact, though MAX() and MIN() don't use indexes, \n it is possible to retrieve such values using an index with ORDER BY\n and LIMIT:\n<PRE>\n SELECT col\n FROM tab\n ORDER BY col\n LIMIT 1\n</PRE>\n\n---------------------------------------------------------------------------\n\nZeugswetter Andreas SB SD wrote:\n> \n> > The select(min) and select(max) took as long as the table scan to find\n> > the count. It seems logical if a btree type index is available (such\n> > as pk_cnx_ds_sis_bill_detl_tb) where the most significant bit of the\n> > index is the column requested, it should be little more than a seek\n> > first or seek last in the btree. Obviously, it won't work with a hashed\n> > index (which is neither here nor there).\n> \n> In the meantime you can use:\n> select extr_stu_id from cnx_ds_sis_bill_detl_tb order by 1 desc limit 1; -- max\n> select extr_stu_id from cnx_ds_sis_bill_detl_tb order by 1 asc limit 1; -- min\n> \n> I guess that is the reason why nobody felt really motivated to implement\n> this optimization. Besides these statements are more powerful, since they can fetch \n> other columns from this min/max row. The down side is, that this syntax varies across\n> db vendors, but most (all?) have a corresponding feature nowadays.\n> \n> select first 1\n> select top 1 ...\n> \n> This is actually becoming a FAQ :-)\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Sun, 23 Jun 2002 17:16:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A fairly obvious optimization?"
},
{
"msg_contents": "On Sun, 23 Jun 2002 17:16:09 EDT, the world broke into rejoicing as\nBruce Momjian <pgman@candle.pha.pa.us> said:\n> FAQ updated in section 4.8: My queries are slow or don't make use of the\n> indexes. Why?\n> \n> is returned. In fact, though MAX() and MIN() don't use indexes, \n> it is possible to retrieve such values using an index with ORDER BY\n> and LIMIT:\n> <PRE>\n> SELECT col\n> FROM tab\n> ORDER BY col\n> LIMIT 1\n> </PRE>\n\nThis sounds like the sort of thing that would be really nice to be able\nto automate into the query optimizer...\n--\n(reverse (concatenate 'string \"moc.enworbbc@\" \"sirhc\"))\nhttp://www3.sympatico.ca/cbbrowne/spreadsheets.html\n\"I decry the current tendency to seek patents on algorithms. There\nare better ways to earn a living than to prevent other people from\nmaking use of one's contributions to computer science.\"\n-- D. E. Knuth\n\n\n",
"msg_date": "Mon, 24 Jun 2002 14:03:48 -0400",
"msg_from": "cbbrowne@cbbrowne.com",
"msg_from_op": false,
"msg_subject": "Re: A fairly obvious optimization? "
}
] |
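The FAQ workaround is easy to verify: `ORDER BY ... LIMIT 1` returns the same value as `MIN()`/`MAX()`, and a btree-style index lets it be answered with a single seek. A quick check using SQLite as a stand-in engine (the SQL shown also runs in PostgreSQL; table and index names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col INTEGER)")
conn.execute("CREATE INDEX tab_col_idx ON tab (col)")
conn.executemany("INSERT INTO tab VALUES (?)", [(v,) for v in (5, 1, 9, 3)])

# Aggregate form vs. the ORDER BY / LIMIT rewrite from the FAQ entry:
mx_agg, = conn.execute("SELECT max(col) FROM tab").fetchone()
mx_lim, = conn.execute("SELECT col FROM tab ORDER BY col DESC LIMIT 1").fetchone()
mn_lim, = conn.execute("SELECT col FROM tab ORDER BY col ASC LIMIT 1").fetchone()

print(mx_agg, mx_lim, mn_lim)  # 9 9 1
```

As Andreas notes, the LIMIT form is also more flexible: replace `col` in the select list with `*` and you get the other columns of the min/max row for free.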
[
{
"msg_contents": "I know that the money type is supposed to be deprecated but I think that \nthere is still some benefit to it. It is small and fast. There are some \nproblems and I would like to address them.\n\nThe output has a dollar sign attached. This is NA centric and we said years \nago that we were going to drop it. I think that that is enough warning. \nUnless someone has a problem with that I will just go in and get rid of it.\n\nAlso somewhat NA centric is the two decimal places. This was originally \nmeant to be locale driven but that is a problem for other reasons. What \nabout defaulting it to two decimal places but allowing it to be redefined at \ntable creation time? How hard would it be to make it accept an optional \nprecision?\n\nIt doesn't cast to other types. If it simply cast to float that would allow \nit to be more flexible. Do I need to add a float return function for that to \nwork?\n\nLimited precision. This can be fixed by going to a 64 bit integer for the \nunderlying type. Are we at a point where we can do that yet? I am afraid \nthat there are still systems that don't have a native 64 bit type. This is \nnot as critical as the other items I think.\n\nAs the original author of the type I naturally have some bias but I still \nthink that it is a good type for all the reasons we thought it was a good \nidea before. There is a definite advantage to being able to do integer \narithmetic right on the CPU in large financial applications.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 16 May 2002 06:11:43 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": true,
"msg_subject": "Money type"
},
{
"msg_contents": "On Thu, May 16, 2002 at 06:11:43AM -0400, D'Arcy J.M. Cain wrote:\n> I know that the money type is supposed to be deprecated but I think that \n\n Right.\n\n> there is still some benefit to it. It is small and fast. There are some \n> problems and I would like to address them.\n> \n> The output has a dollar sign attached. This is NA centric and we said years \n> ago that we were going to drop it. I think that that is enough warning. \n> Unless someone has a problem with that I will just go in and get rid of it.\n> \n> Also somewhat NA centric is the two decimal places. This was originally \n> meant to be locale driven but that is a problem for other reasons. What \n> about defaulting it to two decimal places but allowing it to be redefined at \n> table creation time? How hard would it be to make it accept an optional \n> precision?\n>\n> It doesn't cast to other types. If it simply cast to float that would allow \n> it to be more flexible. Do I need to add a float return function for that to \n> work?\n> \n> Limited precision. This can be fixed by going to a 64 bit integer for the \n> underlying type. Are we at a point where we can do that yet? I am afraid \n> that there are still systems that don't have a native 64 bit type. This is \n> not as critical as the other items I think.\n\n I think the right approach is to use numeric, with to_char() for the\n currency symbol and common, locale-correct number formatting. IMHO that's\n better than using a dangerous float and a hard-coded currency symbol.\n\n For example, in my country (and a lot of others) the current money\n datatype is totally useless. We put the currency symbol after the number, etc.\n\n Sorry, but _IMHO_ a few well-supported types are better than more\n bad datatypes.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 16 May 2002 12:51:54 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Money type"
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> Also somewhat NA centric is the two decimal places. This was originally \n> meant to be locale driven but that is a problem for other reasons. What \n> about defaulting it to two decimal places but allowing it to be redefined at \n> table creation time? How hard would it be to make it accept an optional \n> precision?\n\nPossible, but in 32 bits you don't really have room to offer more\nprecision. Another objection is that (AFAIK) there's no way to handle\nprecision specs without wiring them into quite a number of places in the\nparser, format_type, etc. I'd object to doing that for a nonstandard\ntype like money.\n\n> Limited precision. This can be fixed by going to a 64 bit integer for the \n> underlying type. Are we at a point where we can do that yet? I am afraid \n> that there are still systems that don't have a native 64 bit type.\n\nYou could possibly use the same sort of hacks as are in the int8 support\n--- type int8 is still functional on int64-less platforms, it just has\nthe same range as int4. I guess this would be no loss of functionality\ncompared to where money is now.\n\n> As the original author of the type I naturally have some bias but I still \n> think that it is a good type for all the reasons we thought it was a good \n> idea before. There is a definite advantage to being able to do integer \n> arithmetic right on the CPU in large financial applications.\n\nI'd rather see the effort invested in making type 'numeric' faster.\nEven with a 64-bit width, money would still be subject to silent\noverflow, which I find uncool for financial applications...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 May 2002 10:07:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Money type "
}
] |
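The trade-off debated here — machine-integer speed and exactness versus the silent-overflow hazard Tom points out — can be demonstrated with a small sketch of the integer-cents representation (illustrative only, not PostgreSQL's actual money code):

```python
# Money as an integer count of cents: additions are exact, unlike
# binary floating point.
from decimal import Decimal

def cents(s):
    """Parse a decimal amount string to an integer number of cents."""
    return int(Decimal(s) * 100)  # "19.99" -> 1999

total = sum(cents("0.10") for _ in range(3))
print(total)                        # 30 -- exact
print(0.10 + 0.10 + 0.10 == 0.30)   # False -- float money drifts

# The overflow hazard: a signed 32-bit cell tops out around
# $21,474,836.47. C arithmetic would wrap silently past this;
# Python integers simply grow, so we just check the bound.
INT32_MAX = 2**31 - 1
print(cents("21474836.47") <= INT32_MAX)  # still fits in int32
print(cents("21474836.48") <= INT32_MAX)  # would overflow int32
```

This is why the thread lands where it does: widening to 64 bits raises the ceiling but does not remove it, whereas numeric trades raw speed for arbitrary precision.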
[
{
"msg_contents": "As per earlier vague hint, I'm bringing the CREATE FUNCTION syntax in line\nwith SQL99. Everything is fully backward compatible. Here is the new\nsynopsis:\n\nCREATE [OR REPLACE] FUNCTION name (args) RETURNS type\n option [ option... ] [WITH (...)];\n\nwhere option is any of these in any order:\n\nAS string [,string]\nLANGUAGE name\nIMMUTABLE\nSTABLE\nVOLATILE\nCALLED ON NULL INPUT\t\t-- SQL spelling of not \"strict\"\nRETURNS NULL ON NULL INPUT\t-- SQL spelling of \"strict\"\nSTRICT\n[EXTERNAL] SECURITY DEFINER\t-- SQL spelling of \"setuid\"\n[EXTERNAL] SECURITY INVOKER\t-- SQL spelling of not \"setuid\"\nIMPLICIT CAST\n\n(The SECURITY options are noops right now, but I'm planning to implement\nthem next.)\n\nThe WITH (...) options are still there, but sort of less encouraged, I\nguess.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 16 May 2002 19:21:32 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Updated CREATE FUNCTION syntax"
},
{
"msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Peter Eisentraut\n> Sent: Thursday, May 16, 2002 1:22 PM\n> To: PostgreSQL Development\n> Subject: [HACKERS] Updated CREATE FUNCTION syntax\n>\n>\n> As per earlier vague hint, I'm bringing the CREATE FUNCTION syntax in line\n> with SQL99. Everything is fully backward compatible. Here is the new\n> synopsis:\n>\n> CREATE [OR REPLACE] FUNCTION name (args) RETURNS type\n> option [ option... ] [WITH (...)];\n>\n> where option is any of these in any order:\n>\n> AS string [,string]\n> LANGUAGE name\n> IMMUTABLE\n> STABLE\n> VOLATILE\n> CALLED ON NULL INPUT\t\t-- SQL spelling of not \"strict\"\n> RETURNS NULL ON NULL INPUT\t-- SQL spelling of \"strict\"\n> STRICT\n> [EXTERNAL] SECURITY DEFINER\t-- SQL spelling of \"setuid\"\n> [EXTERNAL] SECURITY INVOKER\t-- SQL spelling of not \"setuid\"\n> IMPLICIT CAST\n>\n> (The SECURITY options are noops right now, but I'm planning to implement\n> them next.)\n>\n> The WITH (...) options are still there, but sort of less encouraged, I\n> guess.\n\nIs there any standardized way of handling the single-quotes within function\ndefinition? Rather than doubling them up (which can make for very messy code\nwhen your scripting language uses single quotes!), allowing another symbol\nto be used, with that symbol be declared in the CREATE FUNCTION line?\nInterbase uses a system like this: you can set the delimiter to anything you\nwant and use that instead of '.\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Thu, 16 May 2002 14:00:05 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax"
},
{
"msg_contents": "Joel Burton wrote:\n> \n> > As per earlier vague hint, I'm bringing the CREATE FUNCTION syntax in line\n> > with SQL99. Everything is fully backward compatible. Here is the new\n> > synopsis:\n> >\n> > CREATE [OR REPLACE] FUNCTION name (args) RETURNS type\n> > option [ option... ] [WITH (...)];\n> >\n> > where option is any of these in any order:\n> >\n> > AS string [,string]\n> > LANGUAGE name\n> > IMMUTABLE\n> > STABLE\n> > VOLATILE\n> > CALLED ON NULL INPUT -- SQL spelling of not \"strict\"\n> > RETURNS NULL ON NULL INPUT -- SQL spelling of \"strict\"\n> > STRICT\n> > [EXTERNAL] SECURITY DEFINER -- SQL spelling of \"setuid\"\n> > [EXTERNAL] SECURITY INVOKER -- SQL spelling of not \"setuid\"\n> > IMPLICIT CAST\n> >\n> > (The SECURITY options are noops right now, but I'm planning to implement\n> > them next.)\n> >\n> > The WITH (...) options are still there, but sort of less encouraged, I\n> > guess.\n> \n> Is there any standardized way of handling the single-quotes within function\n> definition? Rather than doubling them up (which can make for very messy code\n> when your scripting language uses single quotes!), allowing another symbol\n> to be used, with that symbol be declared in the CREATE FUNCTION line?\n> Interbase uses a system like this: you can set the delimiter to anything you\n> want and use that instead of '.\n\nThat would be great! The quoting makes pl/pgsql a major pain. It, and\ndependency tracking. Of course, with PL/SQL, Oracle doesn't even require\na delimiter:\n\nCREATE PROCEDURE foo(x INTEGER) AS\n...\nEND;\n\nSomehow they manage to get that past their parser, even if the procedure\nhas \"Compilation Errors\". It would be sweet...\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Thu, 16 May 2002 15:56:28 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax"
},
{
"msg_contents": "Joel Burton writes:\n\n> Is there any standardized way of handling the single-quotes within function\n> definition? Rather than doubling them up (which can make for very messy code\n> when your scripting language uses single quotes!), allowing another symbol\n> to be used, with that symbol be declared in the CREATE FUNCTION line?\n> Interbase uses a system like this: you can set the delimiter to anything you\n> want and use that instead of '.\n\nI think we need something like that. How exactly does Interbase \"set\" the\ndelimiter? Keep in mind that our lexer and parser are static.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 17 May 2002 15:36:43 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Updated CREATE FUNCTION syntax"
},
{
"msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net]\n> Sent: Friday, May 17, 2002 9:37 AM\n> To: Joel Burton\n> Cc: PostgreSQL Development\n> Subject: RE: [HACKERS] Updated CREATE FUNCTION syntax\n>\n>\n> Joel Burton writes:\n>\n> > Is there any standardized way of handling the single-quotes\n> within function\n> > definition? Rather than doubling them up (which can make for\n> very messy code\n> > when your scripting language uses single quotes!), allowing\n> another symbol\n> > to be used, with that symbol be declared in the CREATE FUNCTION line?\n> > Interbase uses a system like this: you can set the delimiter to\n> anything you\n> > want and use that instead of '.\n>\n> I think we need something like that. How exactly does Interbase \"set\" the\n> delimiter? Keep in mind that our lexer and parser are static.\n\nActually, now that I've thought about it for a moment, Interbase doesn't use\na different delimiter; it allows a different end-of-line character.\n\nI've forgotten the exact syntax, but it's something like (Interbase doesn't\nallow functions like this, it uses these for stored procedures, but the\nbasic idea is here):\n\nSELECT * FROM SOMETHING;\n\nSET EOL TO &;\n\nCREATE FUNCTION() RETURNS ... AS\n BEGIN;\n END;\n LANGUAGE plpgsql &\n\nSET EOL TO ;&\n\nSELECT * FROM SOMETHING;\n\nSo that it's legal to use ; in the function, since the parser is looking for\na different character to end the complete statement.\n\nI think it would be more straightforward to see something like:\n\nCREATE FUNCTION XXX() RETURNS ... AS #\n BEGIN;\n END; #\nLANGUAGE plpgsql DELIMITER #;\n\nBut, with a static lexer/parser, that would be tricky, wouldn't it?\n\nWould it work to allow, rather than a free choice of delimiters,\nsomething other than the single quote? 
Probably 95% of functions contain single\nquotes (and many scripting languages/development environments treat them\nspecially), guaranteeing that you'll almost always have to double (or quad-\nor oct- or whatever!) your single quotes.\n\nIf it's not too offensive, would something like\n\nCREATE FUNCTION XXX() RETURNS AS [[\n BEGIN;\n END; ]]\nLANGUAGE plpgsql DELIMITED BY BRACES;\n\nwork? Without the \"delimited by braces\", the functions would be parsed the\nsame (single quotes), with this, it would allow [[ and ]]. Someone who used\n[[ or ]] in their functions (perhaps as a custom operator or in a text\nstring) would have to quote these (\\[\\[ and \\]\\]), but this would be\n__much__ less frequent than having to deal with single quotes. Nothing\nshould break, since they have to choose to use the 'delimited by braces'\noption.\n\nIt's not as nice as getting to choose your own delimiter, but it would solve\nthe problem for most of us just fine and wouldn't seem too hard to\nimplement.\n\nFunctions are in SQL99, aren't they? Does the standard suggest anything\nhere?\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Fri, 17 May 2002 09:57:39 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax"
},
{
"msg_contents": "On Fri, 17 May 2002 09:57:39 -0400, \"Joel Burton\"\n<joel@joelburton.com> wrote:\n>> -----Original Message-----\n>> From: Peter Eisentraut [mailto:peter_e@gmx.net]\n>> Sent: Friday, May 17, 2002 9:37 AM\n>> To: Joel Burton\n>> Cc: PostgreSQL Development\n>> Subject: RE: [HACKERS] Updated CREATE FUNCTION syntax\n>>\n>> I think we need something like that. How exactly does Interbase \"set\" the\n>> delimiter? Keep in mind that our lexer and parser are static.\n>\n>Actually, now that I've thought about it for a moment, Interbase doesn't use\n>a different delimiter, it allows a different end-of-line character.\n\nActually it's the end-of-command delimiter, called terminator in\nInterbase speech. And it doesn`t have to be a single character, e.g.\n\nSET TERM !! ;\n\n>SELECT * FROM SOMETHING;\n>\n>SET EOL TO &;\n>\n>CREATE FUNCTION() RETURNS ... AS\n> BEGIN;\n> END;\n> LANGUAGE plpgsql &\n\nYou could even enter any number of commands here, each terminated by\nthe current terminator:\nSELECT * FROM MYTABLE &\nDROP TABLE MYTABLE &\nSET TERM ! &\nSELECT * FROM ANOTHERTABLE !\n\n... before you eventually return to the standard terminator:\nSET TERM ; !\nSELECT * FROM WHATEVER ;\n\nServus\n Manfred\n",
"msg_date": "Fri, 17 May 2002 21:21:00 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Joel Burton writes:\n>> Is there any standardized way of handling the single-quotes within function\n>> definition? Rather than doubling them up (which can make for very messy code\n>> when your scripting language uses single quotes!), allowing another symbol\n>> to be used, with that symbol be declared in the CREATE FUNCTION line?\n>> Interbase uses a system like this: you can set the delimiter to anything you\n>> want and use that instead of '.\n\n> I think we need something like that. How exactly does Interbase \"set\" the\n> delimiter? Keep in mind that our lexer and parser are static.\n\nSeems like the only way to do that in the backend would be to find a way\nof slipping the function text past the lexer/parser entirely. While I\ncan imagine ways of doing that, I think it'd be a *whole* lot cleaner\nto fix things on the client side.\n\nHow do you feel about a psql hack that provides a \"function definition\"\nmode? More generally it could be a mode to enter random text and have\nit be converted to an SQL literal string. Perhaps\n\n\tpsql=> create function foo (int) returns int as\n\tpsql-> \\beginliteral\n\tpsql-LIT> begin\n\tpsql-LIT> x := $1;\n\tpsql-LIT> ...\n\tpsql-LIT> end;\n\tpsql-LIT> \\endliteral\n\tpsql-> language plpgsql;\n\nEssentially, \\beginliteral and \\endliteral each convert to a quote\nmark, and everywhere in between quotes and backslashes get doubled.\nWe might want to specify that the leading and trailing newlines get\ndropped, too, though for function-definition applications that would\nnot matter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 May 2002 19:22:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax "
},
{
"msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n> Given that 98% of my function defining is done is psql, this would be\n> fine for me and solve my frustrations. It wouldn't help people that\n> build functions in scripting languages or non-psql environments,\n> however, but I don't know how common this is.\n\nTrue, but I'm thinking that other development environments could provide\nequivalent features. (I seem to recall that pgAdmin already does, for\nexample.)\n\nISTM the reason we've not addressed this for so long is that no one\ncould think of a reasonable way to solve it on the backend side.\nMaybe we just have to shift our focus.\n\nAnother point worth considering is that because psql has its own\nsmarts about locating query boundaries, it'd be very difficult to\nbuild a function-definition mode without making psql changes, anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 11:22:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> said:\n\n> Seems like the only way to do that in the backend would be to find a way\n> of slipping the function text past the lexer/parser entirely. While I\n> can imagine ways of doing that, I think it'd be a *whole* lot cleaner\n> to fix things on the client side.\n> \n> How do you feel about a psql hack that provides a \"function definition\"\n> mode? More generally it could be a mode to enter random text and have\n> it be converted to an SQL literal string. Perhaps\n> \n> \tpsql=> create function foo (int) returns int as\n> \tpsql-> \\beginliteral\n> \tpsql-LIT> begin\n> \tpsql-LIT> x := $1;\n> \tpsql-LIT> ...\n> \tpsql-LIT> end;\n> \tpsql-LIT> \\endliteral\n> \tpsql-> language plpgsql;\n> \n> Essentially, \\beginliteral and \\endliteral each convert to a quote\n> mark, and everywhere in between quotes and backslashes get doubled.\n> We might want to specify that the leading and trailing newlines get\n> dropped, too, though for function-definition applications that would\n> not matter.\n\nTom --\n\nGiven that 98% of my function defining is done is psql, this would be fine for me and solve my frustrations. It wouldn't help people that build functions in scripting languages or non-psql environments, however, but I don't know how common this is.\n\nWhat do others think?\n\nThanks!\n-- \n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant \n\n\n",
"msg_date": "Sat, 18 May 2002 16:34:06 -0000",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax "
},
{
"msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> said:\n>> ISTM the reason we've not addressed this for so long is that no one\n>> could think of a reasonable way to solve it on the backend side.\n>> Maybe we just have to shift our focus.\n\n> Out of curiosity, Tom, why the preference for a solution like this\n> rather than allowing for a much-less-common-than-' delimiter for the\n> create function syntax? (Such as the \"[[\" and \"]]\" I suggested a few\n> posts ago?)\n\nThat's not a solution for psql, unless you also teach psql about these\ndelimiters --- else it'll still terminate the query shipped to the\nbackend too soon. That being the case, you might as well just implement\nthe delimiters in psql. Seems like [[ and ]] are isomorphic to what I\nsuggested. I'd have a preference for \\[ and \\] though.\n\nNote that I did not mean to suggest that \"\\beginliteral\" and\n\"\\endliteral\" were actually the names I'd want to use; that was just\nfor clarity of exposition. Something shorter would be more practical.\nIt might be reasonable to use \\' for example, or if that seems a little\ntoo brief, \\lit and \\eol (end literal), or \\lit ... \\til if you remember\nAlgol68.\n\n> That would have the advantage of being consistent as users switched\n> from writing functions in psql to writing function-writing functions,\n> to writing functions in other environments, etc.\n\nI would expect script-ish environments to follow psql's lead. For\nGUI-ish environments this is probably a complete nonissue; I'd pretty\nmuch expect the function body to pop up in a separate editing window\nto start with, so that the user really has no need to think about\nseparating the function body from the rest of the CREATE FUNCTION\ncommand.\n\nIn any case I do not think it's likely that client-side programming\nenvironments would be able to take advantage of such a feature without\nrework, just as psql couldn't. 
Any backend-side solution we might put\nin would really amount to a protocol change, whether you wanted to call\nit one or not. So the notion of \"fix it once in the backend, not once\nper client\" seems illusory to me for this particular problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 15:26:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> said:\n\n> \"Joel Burton\" <joel@joelburton.com> writes:\n> > Given that 98% of my function defining is done is psql, this would be\n> > fine for me and solve my frustrations. It wouldn't help people that\n> > build functions in scripting languages or non-psql environments,\n> > however, but I don't know how common this is.\n> \n> True, but I'm thinking that other development environments could provide\n> equivalent features. (I seem to recall that pgAdmin already does, for\n> example.)\n> \n> ISTM the reason we've not addressed this for so long is that no one\n> could think of a reasonable way to solve it on the backend side.\n> Maybe we just have to shift our focus.\n\nOut of curiosity, Tom, why the preference for a solution like this rather than allowing for a much-less-common-than-' delimiter for the create function syntax? (Such as the \"[[\" and \"]]\" I suggested a few posts ago?) This would seem like something that wouldn't seem too difficult to do, and would work in all environments.\n\nThat would have the advantage of being consistent as users switched from writing functions in psql to writing function-writing functions, to writing functions in other environments, etc.\n\nThanks,\n\n- J.\n\n-- \n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant \n\n\n",
"msg_date": "Sat, 18 May 2002 20:48:07 -0000",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Updated CREATE FUNCTION syntax "
}
] |
[
{
"msg_contents": "I've attached a patch for libpgtcl which adds access to backend version\nnumbers.\n\nThis is via a new command:\n\npg_version <db channel> <major varname> ?<minor varname>? ?<patch varname>?\n\nUsing readonly variables rather than a command was my first choice but I\ndecided that it was inappropriate for the library to start assigning global\nvariable(s) when that's really the applications job and the command interface\nis consistent with the rest of the interface.\n\nObviously, backend version numbers are specific to a particular connection. So\nI've created a new data structure, to keep the information as a distinct unit,\nand added an instance of the new structure to the Pg_ConnectionId type. The\nversion information is retrieved from the given connection on first use of\npg_version and cached in the new data structure for subsequent accesses.\n\nIn addition to filling the named variables in the callers scope with version\nnumbers/strings the command returns the complete string as returned by\nversion(). It's not possible to turn this return off at the moment but I don't\nsee it as a problem since normal methods of stopping unwanted values returned\nfrom procedures can be applied in the application if required.\n\nPerhaps the most significant change is that I've increased the package's\nversion number from 1.3 to 1.4. This will adversely affect anyone using an\napplication that requires a specific version of the package where their\npostgres installation is updated but their application has not been. I can't\nimagine there are many applications out there using the package management\nfeatures of TCL though.\n\nI envisage this patch applied to 7.3 tip and to 7.2 for the 7.2.2 release\nmentioned a couple of days ago. The only problem with doing this for 7.2 that I\ncan see is where people doing the 'package -exact require Pgtcl 1.x' thing, and\nhow many of those are there? 
Even PgAccess doesn't use that.\n\n\nNote for committer et al., this patch also includes one change made in 7.3devel\nand not 7.2.1. That is where a test of the return value from a Tcl_SetVar call\nhas been corrected from a test against TCL_OK to NULL. This is correct and\nshould be applied to the 7.2 branch in my view, however, I do not know if this\nhas already been applied there so something to watch out for.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants",
"msg_date": "Thu, 16 May 2002 23:49:18 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "libpgtcl - backend version information patch"
},
{
"msg_contents": "Nigel J. Andrews writes:\n\n> I've attached a patch for libpgtcl which adds access to backend version\n> numbers.\n>\n> This is via a new command:\n>\n> pg_version <db channel> <major varname> ?<minor varname>? ?<patch varname>?\n\nThis doesn't truly reflect the way PostgreSQL version numbers are handled.\nSay for 7.2.1, the \"major\" is really \"7.2\" and the minor is \"1\". With the\ninterface you proposed, the information major == 7 doesn't really convey\nany useful information.\n\n> I envisage this patch applied to 7.3 tip and to 7.2 for the 7.2.2\n> release mentioned a couple of days ago. The only problem with doing this\n> for 7.2 that I can see is where people doing the 'package -exact require\n> Pgtcl 1.x' thing, and how many of those are there? Even PgAccess doesn't\n> use that.\n\nNormally we only put bug fixes in minor releases. PgAccess may get an\nexception, but bumping the version number of a library is stretching it a\nlittle. If you're intending to use the function for PgAccess, why not\nmake it internal to PgAccess? That way you can tune the major/minor thing\nexactly how you need it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 17 May 2002 15:37:05 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] libpgtcl - backend version information patch"
}
] |
[
{
"msg_contents": "Just noticed this a few minutes ago on build from cvs tip:\n\nmake -C preproc all\nmake[4]: Entering directory `/opt/src/pgsql/src/interfaces/ecpg/preproc'\nbison -y -d preproc.y\nconflicts: 2 reduce/reduce\n\nJoe\n\n\n",
"msg_date": "Thu, 16 May 2002 16:51:28 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "interfaces/ecpg/preproc reduce/reduce conflicts"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Just noticed this a few minutes ago on build from cvs tip:\n> make -C preproc all\n> make[4]: Entering directory `/opt/src/pgsql/src/interfaces/ecpg/preproc'\n> bison -y -d preproc.y\n> conflicts: 2 reduce/reduce\n\nYeah, the ECPG grammar has been broken for awhile. I'm expecting\nMichael to do something about it sooner or later ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 May 2002 22:10:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: interfaces/ecpg/preproc reduce/reduce conflicts "
},
{
"msg_contents": "On Thu, 16 May 2002, Tom Lane wrote:\n\n> Joe Conway <mail@joeconway.com> writes:\n> > Just noticed this a few minutes ago on build from cvs tip:\n> > make -C preproc all\n> > make[4]: Entering directory `/opt/src/pgsql/src/interfaces/ecpg/preproc'\n> > bison -y -d preproc.y\n> > conflicts: 2 reduce/reduce\n> \n> Yeah, the ECPG grammar has been broken for awhile. I'm expecting\n> Michael to do something about it sooner or later ...\n\nIt's not just the grammar. Last time I tried to compile OSDB to get some\nbenchmarking done, ecpg segfaulted on it (before having any reduce\nconflict). I tried to do some investigation, but my knowledge was too\nlimited and couldn't even generate a decent bug report.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Nunca se desea ardientemente lo que solo se desea por razon\" (F. Alexandre)\n\n",
"msg_date": "Fri, 17 May 2002 10:47:54 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: interfaces/ecpg/preproc reduce/reduce conflicts "
},
{
"msg_contents": "On Thu, May 16, 2002 at 04:51:28PM -0700, Joe Conway wrote:\n> make -C preproc all\n> make[4]: Entering directory `/opt/src/pgsql/src/interfaces/ecpg/preproc'\n> bison -y -d preproc.y\n> conflicts: 2 reduce/reduce\n\nDidn't notice this before. Fix was quite easy, but I also did sync\necpg's parser with the backend one and now I cannot compile it anymore.\nHaven't found time to dig into this. So be careful, current CVS WILL NOT\nCOMPILE!\n\nThis is what happens:\n\npreproc.y:5330: fatal error: maximum table size (32767) exceeded\n\nI never before saw that and didn't find it in the docs with a quick\ngrep. Maybe someone knows this. Else I will check in more detail.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 19 May 2002 22:04:48 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: interfaces/ecpg/preproc reduce/reduce conflicts"
},
{
"msg_contents": "On Fri, May 17, 2002 at 10:47:54AM -0400, Alvaro Herrera wrote:\n> It's not just the grammar. Last time I tried to compile OSDB to get some\n> benchmarking done, ecpg segfaulted on it (before having any reduce\n> conflict). I tried to do some investigation, but my knowledge was too\n> limited and couldn't even generate a decent bug report.\n\nBut at least you could send me a mail telling me something fails. :-)\n\nI won't debug this if I don't know it fails.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 19 May 2002 22:06:46 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: interfaces/ecpg/preproc reduce/reduce conflicts"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> Haven't found time to dig into this. So be careful, current CVS WILL NOT\n> COMPILE!\n\nIf you're not going to fix it right away, would you mind reverting the\ncommit?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 May 2002 16:19:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: interfaces/ecpg/preproc reduce/reduce conflicts "
},
{
"msg_contents": "On Sun, May 19, 2002 at 04:19:58PM -0400, Tom Lane wrote:\n> Michael Meskes <meskes@postgresql.org> writes:\n> > Haven't found time to dig into this. So be careful, current CVS WILL NOT\n> > COMPILE!\n> \n> If you're not going to fix it right away, would you mind reverting the\n> commit?\n\nSorry, I just found this mail, no idea where it stuck since sunday.\nDon't worry, it's already fixed.\n\nAlso I'm in contact with some bison developers.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 22 May 2002 11:54:33 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: interfaces/ecpg/preproc reduce/reduce conflicts"
}
] |
[
{
"msg_contents": "I'm seeing this with the current CVS code:\n\n[nconway:/home/nconway/pgsql]% initdb -D /data/pgsql/pgdata\nThe files belonging to this database system will be owned by user \"nconway\".\nThis user must also own the server process.\n\n/data/pgsql/bin/initdb: test: =: unary operator expected\nThe database cluster will be initialized with locales:\n COLLATE: C\tCTYPE: \tMESSAGES: C\n MONETARY: C\tNUMERIC: C\tTIME: C\n\n<snip>\n\nNamely, the \"unary operator expected\" warning.\n\nBTW, does that \"CTYPE:\" element look correct? Just from a visual\npoint of view, I'd expect it to have a value (e.g. C).\n\nCheers,\n\nNeil\n",
"msg_date": "Thu, 16 May 2002 22:00:10 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "minor CVS regression"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> I'm seeing this with the current CVS code:\n> [nconway:/home/nconway/pgsql]% initdb -D /data/pgsql/pgdata\n> The files belonging to this database system will be owned by user \"nconway\".\n> This user must also own the server process.\n\n> /data/pgsql/bin/initdb: test: =: unary operator expected\n> The database cluster will be initialized with locales:\n> COLLATE: C\tCTYPE: \tMESSAGES: C\n> MONETARY: C\tNUMERIC: C\tTIME: C\n\n> <snip>\n\n> Namely, the \"unary operator expected\" warning.\n\nFixed.\n\n> BTW, does that \"CTYPE:\" element look correct? Just from a visual\n> point of view, I'd expect it to have a value (e.g. C).\n\nApparently you are running with LC_CTYPE explicitly set to \"\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 May 2002 22:21:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: minor CVS regression "
}
] |
[
{
"msg_contents": "I just submitted a patch to support SSL client certificates.\nWith this patch the Port structure is extended to include a\nnew field, 'peer', that contains the client certificate if \noffered.\n\nThis patch also cleans up the SSL code. Most of this should\nbe invisible to users, with the exception of a new requirement\nthat private keys be regular files without world- or group-access,\na standard requirement for private keys. The patch should also\nbe much more secure with the addition of support for ephemeral DH\nkeys.\n\nTo use it, you must create a new client cert, e.g., with\n\n openssl req -new -x509 -newkey rsa:1024 -keyout key.pem \\\n -nodes -out cert.pem -days 365 \n\n chmod go-rwx key.pem\n\nthen specify the location of these files with two environment\nvariables:\n\n set PGCLIENTCERT=cert.pem; export PGCLIENTCERT\n set PGCLIENTKEY=key.pem; export PGCLIENTKEY\n\n(or maybe libpq should just look in $HOME/.postgresql/..., similar\nto how ssh(1) works.) The postmaster log should show something like\n\n DEBUG: SSL connection from /DC=com/DC=example/CN=BearGiles/Email=bgiles@example.com with cipher EDH-RSA-DES-CBC3-SHA\n\n(after restarting postmaster, obviously).\n\nThe patch description contains a brief discussion of other\nissues (TLSv1, renegotiation, mapping client certs to users).\n\nBear\n",
"msg_date": "Fri, 17 May 2002 00:00:53 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "SSL client cert patch submitted"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been developing a program with the postgres jdbc 2 driver, jdk-1.3.0 and\npostgres 6.5.\n\nWhen I start my program up it bombs like so:\n\nSomething unusual has occured to cause the driver to fail. Please report\nthis exception: Exception: java.sql.SQLException: ERROR: No such\nfunction 'pg_encoding_to_char' with the specified attributes\n\nStack Trace:\n\njava.sql.SQLException: ERROR: No such function 'pg_encoding_to_char'\nwith the specified attributes\n\n at\norg.postgresql.core.QueryExecutor.execute(QueryExecutor.java:94)\n at org.postgresql.Connection.ExecSQL(Connection.java:398)\n at org.postgresql.Connection.ExecSQL(Connection.java:381)\n at org.postgresql.Connection.openConnection(Connection.java:314)\n at org.postgresql.Driver.connect(Driver.java:149)\n\nDoes anyone know what any of this means...?\n\nRegards,\nYouenn\n\n--------------------------------------------------------------------------------\nUniversité de Bretagne sud http://www.univ-ubs.fr/\n\n",
"msg_date": "Fri, 17 May 2002 10:02:18 +0200",
"msg_from": "youenn.ballouard2@etud.univ-ubs.fr",
"msg_from_op": true,
"msg_subject": "Trouble with pg_encoding_to_char"
},
{
"msg_contents": "It means you are running a jdbc driver from 7.2 (perhaps 7.1, but I \nthink 7.2) against a 6.5 database. While we try to make the jdbc driver \nbackwardly compatible, we don't go back that far. You really should \nconsider upgrading your database to something remotely current.\n\nthanks,\n--Barry\n\nyouenn.ballouard2@etud.univ-ubs.fr wrote:\n> Hi,\n> \n> I've been developing a program with the postgres jdbc 2 driver, jdk-1.3.0 and\n> postgres 6.5.\n> \n> When I start my program up it bombs like so:\n> \n> Something unusual has occured to cause the driver to fail. Please report\n> this exception: Exception: java.sql.SQLException: ERROR: No such\n> function 'pg_encoding_to_char' with the specified attributes\n> \n> Stack Trace:\n> \n> java.sql.SQLException: ERROR: No such function 'pg_encoding_to_char'\n> with the specified attributes\n> \n> at\n> org.postgresql.core.QueryExecutor.execute(QueryExecutor.java:94)\n> at org.postgresql.Connection.ExecSQL(Connection.java:398)\n> at org.postgresql.Connection.ExecSQL(Connection.java:381)\n> at org.postgresql.Connection.openConnection(Connection.java:314)\n> at org.postgresql.Driver.connect(Driver.java:149)\n> \n> Does anyone know what any of this means...?\n> \n> Regards,\n> Youenn\n> \n> --------------------------------------------------------------------------------\n> Université de Bretagne sud http://www.univ-ubs.fr/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n",
"msg_date": "Fri, 17 May 2002 18:54:32 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_encoding_to_char"
}
] |
[
{
"msg_contents": "Hi,\n\nwhat do we have planned for the next release? In the TODO file there are\nquite a lot of points but I don't like to talk about things we will\neventually do, but would like to present what we will implement for 7.3\nresp. 8.0. \n\nIMO the most important stuff seems to be:\n\n- replication (which is listed as urgent anyway)\n- alter table drop column (for which I get asked about once a week)\n- recursive views (you know, I wanted to implement this when I started\n my work on PostgreSQL, but never found the time)\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 17 May 2002 13:58:41 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Future plans"
},
{
"msg_contents": "On Fri, 2002-05-17 at 16:58, Michael Meskes wrote:\n> Hi,\n> \n> IMO the most important stuff seems to be:\n> \n...\n> - recursive views (you know, I wanted to implement this when I started\n> my work on PostgreSQL, but never found the time)\n\nA good start would be to make the parser recognize the full sql99 syntax\nfor it. Its quite big - see attached gif I generated from grammar\nextracted from the standard:",
"msg_date": "21 May 2002 00:35:20 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Future plans"
},
{
"msg_contents": "On Tue, May 21, 2002 at 12:35:20AM +0500, Hannu Krosing wrote:\n> > - recursive views (you know, I wanted to implement this when I started\n> > my work on PostgreSQL, but never found the time)\n> \n> A good start would be to make the parser recognize the full sql99 syntax\n> for it. Its quite big - see attached gif I generated from grammar\n> extracted from the standard:\n\nWell, the parser seems to be the easier part. :-)\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 21 May 2002 10:18:33 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: Future plans"
},
{
"msg_contents": "On Tue, 2002-05-21 at 10:18, Michael Meskes wrote:\n> On Tue, May 21, 2002 at 12:35:20AM +0500, Hannu Krosing wrote:\n> > > - recursive views (you know, I wanted to implement this when I started\n> > > my work on PostgreSQL, but never found the time)\n> > \n> > A good start would be to make the parser recognize the full sql99 syntax\n> > for it. Its quite big - see attached gif I generated from grammar\n> > extracted from the standard:\n> \n> Well, the parser seems to be the easier part. :-)\n\nSure.\n\nMy point was that we should put in the _full_ syntax in one shot and\nthen we could implement in smaller pieces.\n\n------------------\nHannu\n\n\n",
"msg_date": "21 May 2002 14:00:22 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Future plans"
}
] |
[
{
"msg_contents": "Hi,\n\nsince we will show PostgreSQL related stuff on Linuxtag in Germany next\nmonth, I'd like to get some PostgreSQL posters for the booth. But I have\nno idea where to find some. \n\nDo we have that kind of stuff? Or where could I get it? Preferable of course as file so I can print it myself.\n\nThanks in advance\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 17 May 2002 14:00:35 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Poster(s) needed"
},
{
"msg_contents": "\nNot that I'm aware of anyone making ...\n\nOn Fri, 17 May 2002, Michael Meskes wrote:\n\n> Hi,\n>\n> since we will show PostgreSQL related stuff on Linuxtag in Germany next\n> month, I'd like to get some PostgreSQL posters for the booth. But I have\n> no idea where to find some.\n>\n> Do we have that kind of stuff? Or where could I get it? Preferable of course as file so I can print it myself.\n>\n> Thanks in advance\n>\n> Michael\n> --\n> Michael Meskes\n> Michael@Fam-Meskes.De\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Sat, 18 May 2002 02:56:23 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Poster(s) needed"
},
{
"msg_contents": "How about the postgresql logo - is there a source vector/postscript of it \nso that he can blow it up without res loss and print it? The logo designer \nmay still have the source files.\n\nCheerio,\nLink.\n\nAt 02:56 AM 5/18/02 -0300, Marc G. Fournier wrote:\n\n>Not that I'm aware of anyone making ...\n>\n>On Fri, 17 May 2002, Michael Meskes wrote:\n> > month, I'd like to get some PostgreSQL posters for the booth. But I have\n> > no idea where to find some.\n> >\n> > Do we have that kind of stuff? Or where could I get it? Preferable of \n> course as file so I can print it myself.\n\n\n",
"msg_date": "Sat, 18 May 2002 18:51:58 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Poster(s) needed"
},
{
"msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> How about the postgresql logo - is there a source vector/postscript of it \n> so that he can blow it up without res loss and print it?\n\nI have EPS versions of both the elephant-in-crystal and cartoon-elephant\nlogos. I'm pretty sure both are up on the website someplace, 'cause I\ndidn't make either one.\n\nBTW, Marc will correct me if I'm wrong, but I think officially the\ncrystal one is the PG project logo while the cartoon is more associated\nwith PostgreSQL Inc. I tend to ignore this distinction though, since\nthe crystal logo renders beautifully on screen but is nearly unusable\nfor black-and-white printouts. So I like to use whichever fits the\nneed at hand.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 11:46:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Poster(s) needed "
}
] |
[
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> Michael seems to feel that the tuple count should be nonzero if any\n>> of the replacement operations did anything at all.\n\n> Here we usually add triggers, for replication, accounting, setting of \n> calculated rows ... In all of our cases we want the addition of a trigger\n> (or rule on a table) to be transparent to the client.\n\nYeah. Triggers wouldn't affect this anyway, unless they tell the system\nto suppress insertion/update/deletion of some tuples, in which case I\nthink it is correct not to count those tuples (certainly that's how the\ncode has always acted). As far as rules go, the last proposal that I\nmade would return the tuple count of the original query as long as there\nwere no INSTEAD rules --- if you have only actions *added* by rules then\nthey are transparent.\n\nThe hard case is where the original query is not executed because of an\nINSTEAD rule. As the code presently stands, you get \"UPDATE 0\" (or\nINSERT or DELETE 0) in that case, regardless of what else was done\ninstead by the rule. I thought that was OK when we put the change in,\nbut it seems clear that people do not like that behavior. The notion\nof \"keep it transparent\" doesn't seem to help here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 May 2002 09:31:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Queries using rules show no rows modified? "
},
{
"msg_contents": "\nHas this been resolved and patched?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> >> Michael seems to feel that the tuple count should be nonzero if any\n> >> of the replacement operations did anything at all.\n> \n> > Here we usually add triggers, for replication, accounting, setting of \n> > calculated rows ... In all of our cases we want the addition of a trigger\n> > (or rule on a table) to be transparent to the client.\n> \n> Yeah. Triggers wouldn't affect this anyway, unless they tell the system\n> to suppress insertion/update/deletion of some tuples, in which case I\n> think it is correct not to count those tuples (certainly that's how the\n> code has always acted). As far as rules go, the last proposal that I\n> made would return the tuple count of the original query as long as there\n> were no INSTEAD rules --- if you have only actions *added* by rules then\n> they are transparent.\n> \n> The hard case is where the original query is not executed because of an\n> INSTEAD rule. As the code presently stands, you get \"UPDATE 0\" (or\n> INSERT or DELETE 0) in that case, regardless of what else was done\n> instead by the rule. I thought that was OK when we put the change in,\n> but it seems clear that people do not like that behavior. The notion\n> of \"keep it transparent\" doesn't seem to help here.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jun 2002 01:32:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Queries using rules show no rows modified?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Has this been resolved and patched?\n\nNo, I think we were still debating how it should work when the\ndiscussion died off...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jun 2002 12:08:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Queries using rules show no rows modified? "
}
] |
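The case Tom describes — the original query never executed because of an INSTEAD rule, and the command tag reporting zero rows — can be sketched with a small view/rule pair. All object names below are invented for illustration; they do not come from the thread:

```sql
CREATE TABLE base_tbl (id int, val text);
CREATE VIEW v AS SELECT * FROM base_tbl;

-- An INSTEAD rule: the UPDATE on v itself is never executed;
-- the rule's action on base_tbl runs in its place.
CREATE RULE v_upd AS ON UPDATE TO v
    DO INSTEAD
    UPDATE base_tbl SET val = new.val WHERE id = old.id;

-- As the code stood at the time of this thread, this reports
-- "UPDATE 0" even when the rule's action modified rows in base_tbl.
UPDATE v SET val = 'x' WHERE id = 1;
```

By contrast, a rule that only *adds* actions (no INSTEAD) would, under Tom's proposal, leave the original query's tuple count visible to the client.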
[
{
"msg_contents": "Hi,\n\nI have some schema queries/thoughts that I would appreciate some\nhelp/insights/fixes with/for please!\n\n(Apologies if these have been asked before or have been addressed in a\nrecent snapshot - my ISP's been having routing problems recently & I\ncan't reach postgresql.org via http right now).\n\n1) All the system views are currently part of the public namespace. Not\na problem for me, but shouldn't they be in pg_catalog?\n\n2) pgAdmin needs to be able to find out the namespace search path for\nthe current connection through an SQL query - is this possible yet or\ncan/will a suitable function be written?\n\nThere were more than that when I started typing this but I had a flash\nof inspiration and they went away :-)\n\nTIA,\n\nRegards, Dave.\n",
"msg_date": "Fri, 17 May 2002 16:32:21 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "More schema queries"
},
{
"msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> 1) All the system views are currently part of the public namespace. Not\n> a problem for me, but shouldn't they be in pg_catalog?\n\nSay what? They *are* in pg_catalog. initdb creates nothing in public.\n\n> 2) pgAdmin needs to be able to find out the namespace search path for\n> the current connection through an SQL query - is this possible yet or\n> can/will a suitable function be written?\n\nEither 'show search_path' or 'select current_schemas()' might do what\nyou want; or perhaps not. Why do you want to know the search path?\nWhat's the scenario in which pgAdmin wouldn't set the search path\nfor itself?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 May 2002 16:25:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More schema queries "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 17 May 2002 21:26\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] More schema queries \n> \n> \n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> > 1) All the system views are currently part of the public namespace. \n> > Not a problem for me, but shouldn't they be in pg_catalog?\n> \n> Say what? They *are* in pg_catalog. initdb creates nothing \n> in public.\n\nYou'll have to take my word for it that I haven't played with pg_class -\nis it possible I got a snapshot that was built at precisely the wrong\nmoment?\n\nhelpdesk=# select * from pg_namespace;\n\n oid | nspname | nspowner | nspacl\n-------+-------------+----------+-------------------\n 11 | pg_catalog | 1 | {=U}\n 99 | pg_toast | 1 | {=}\n 2200 | public | 1 | {=UC}\n 16563 | pg_temp_1 | 1 |\n 40071 | Test Schema | 1 |\n 48273 | flurb | 1 |\n 40072 | test | 1 | {=UC,postgres=UC}\n 48276 | dave2 | 1 |\n 48277 | Gulp | 1 | {=UC,postgres=UC}\n(9 rows)\n\nhelpdesk=# select relnamespace, relname from pg_class where relname like\n'pg_%';\n\n relnamespace | relname\n--------------+---------------------------------\n 11 | pg_largeobject\n 11 | pg_aggregate\n 11 | pg_trigger\n 11 | pg_listener\n 11 | pg_namespace\n 11 | pg_attrdef\n 11 | pg_database\n 11 | pg_xactlock\n 11 | pg_description\n 11 | pg_group\n 11 | pg_proc\n 11 | pg_relcheck\n 11 | pg_rewrite\n 2200 | pg_user\n 2200 | pg_rules\n 2200 | pg_views\n 2200 | pg_tables\n 2200 | pg_indexes\n 2200 | pg_stats\n 2200 | pg_stat_all_tables\n 2200 | pg_stat_sys_tables\n 11 | pg_aggregate_fnoid_index\n 11 | pg_am_name_index\n 11 | pg_am_oid_index\n 11 | pg_amop_opc_opr_index\n 11 | pg_amop_opc_strategy_index\n 11 | pg_amproc_opc_procnum_index\n 11 | pg_attrdef_adrelid_adnum_index\n 11 | pg_attribute_relid_attnam_index\n 11 | pg_attribute_relid_attnum_index\n 11 | pg_class_oid_index\n 11 | pg_class_relname_nsp_index\n 11 | 
pg_database_datname_index\n 11 | pg_database_oid_index\n 11 | pg_description_o_c_o_index\n 11 | pg_group_name_index\n 11 | pg_group_sysid_index\n 11 | pg_index_indrelid_index\n 11 | pg_index_indexrelid_index\n 11 | pg_inherits_relid_seqno_index\n 11 | pg_language_name_index\n 11 | pg_language_oid_index\n 11 | pg_largeobject_loid_pn_index\n 11 | pg_namespace_nspname_index\n 11 | pg_namespace_oid_index\n 11 | pg_opclass_am_name_nsp_index\n 11 | pg_opclass_oid_index\n 11 | pg_operator_oid_index\n 11 | pg_operator_oprname_l_r_n_index\n 11 | pg_proc_oid_index\n 11 | pg_proc_proname_args_nsp_index\n 11 | pg_relcheck_rcrelid_index\n 11 | pg_rewrite_oid_index\n 11 | pg_rewrite_rel_rulename_index\n 11 | pg_shadow_usename_index\n 11 | pg_shadow_usesysid_index\n 11 | pg_statistic_relid_att_index\n 11 | pg_trigger_tgconstrname_index\n 11 | pg_trigger_tgconstrrelid_index\n 11 | pg_trigger_tgrelid_tgname_index\n 11 | pg_trigger_oid_index\n 11 | pg_type_oid_index\n 11 | pg_type_typname_nsp_index\n 2200 | pg_stat_user_tables\n 2200 | pg_statio_all_tables\n 2200 | pg_statio_sys_tables\n 2200 | pg_statio_user_tables\n 2200 | pg_stat_all_indexes\n 2200 | pg_stat_sys_indexes\n 99 | pg_toast_16384_index\n 99 | pg_toast_16384\n 2200 | pg_stat_user_indexes\n 2200 | pg_statio_all_indexes\n 2200 | pg_statio_sys_indexes\n 2200 | pg_statio_user_indexes\n 99 | pg_toast_1262_index\n 99 | pg_toast_1262\n 2200 | pg_statio_all_sequences\n 2200 | pg_statio_sys_sequences\n 2200 | pg_statio_user_sequences\n 2200 | pg_stat_activity\n 99 | pg_toast_16416_index\n 99 | pg_toast_16416\n 2200 | pg_stat_database\n 11 | pg_statistic\n 11 | pg_type\n 11 | pg_attribute\n 99 | pg_toast_1261_index\n 99 | pg_toast_1261\n 11 | pg_class\n 11 | pg_inherits\n 11 | pg_index\n 11 | pg_operator\n 99 | pg_toast_1255_index\n...\n\n> > 2) pgAdmin needs to be able to find out the namespace \n> search path for \n> > the current connection through an SQL query - is this \n> possible yet or \n> > can/will a suitable function 
be written?\n> \n> Either 'show search_path' or 'select current_schemas()' might \n> do what you want; or perhaps not. Why do you want to know \n> the search path? What's the scenario in which pgAdmin \n> wouldn't set the search path for itself?\n\npgAdmin works 99% of the time in pg_catalog. When it creates objects, it\nalways specifies an absolute name (CREATE TABLE public.tablename...).\n\nHowever, one of the features is the ability to use the wizard, or just\ntype in an SQL query and output the results to either a plugin exporter\n(such as MS Excel, ACSII file etc) or to a screen grid. If the user\nselects the screen grid, then some parsing of the query is done to\nfigure out if we can generate queries to add/delete/update rows and\ntherefore enable or disable the relevant buttons. One of the tests is to\nfigure out if one of the base datasources in the query is a view -\ncurrently this is easy, but in 7.3 we could have a table & a view with\nthe same name in different schemas, hence by using the path we can\nfigure out what object we're actually using.\n\nIncidently if you're interested at the moment, you may remember that in\n7.2 beta there was a problem with slow startup under Cygwin which was\ndown to a few seconds by release... The last 2 snapshots I've run take\nwell over a minute for postmaster startup on a P3M 1.13GHz/512Mb under\nlittle load. There is virtually no disk activity during this time.\n\nRegards, Dave.\n\n",
"msg_date": "Fri, 17 May 2002 23:23:32 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
    "msg_subject": "Re: [HACKERS] More schema queries "
}
] |
[
{
"msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> helpdesk=# select relnamespace, relname from pg_class where relname like\n> 'pg_%';\n\n> relnamespace | relname\n> --------------+---------------------------------\n> ...\n> 2200 | pg_user\n> 2200 | pg_rules\n> 2200 | pg_views\n> 2200 | pg_tables\n> 2200 | pg_indexes\n> 2200 | pg_stats\n> 2200 | pg_stat_all_tables\n> 2200 | pg_stat_sys_tables\n\nBizarre. It's not that way here. Would you mind updating to CVS tip,\nrebuilding, and seeing if you can duplicate that? Also, make sure\nyou're using the right initdb script ...\n\n\n> ... One of the tests is to\n> figure out if one of the base datasources in the query is a view -\n> currently this is easy, but in 7.3 we could have a table & a view with\n> the same name in different schemas, hence by using the path we can\n> figure out what object we're actually using.\n\nActually, I'd venture that you do *not* want to do namespace search\nresolution for yourself; have you thought about how messy the SQL query\nwould be? The new datatypes regclass, etc are intended to handle it\nfor you. For example\n\nselect 'foo'::regclass::oid;\t-- get OID of table foo in search path\n\nselect 'foo.bar'::regclass::oid; -- get OID of table foo.bar\n\nselect relkind from pg_class where oid = 'foo'::regclass; -- is foo a view?\n\n> Incidently if you're interested at the moment, you may remember that in\n> 7.2 beta there was a problem with slow startup under Cygwin which was\n> down to a few seconds by release... The last 2 snapshots I've run take\n> well over a minute for postmaster startup on a P3M 1.13GHz/512Mb under\n> little load. There is virtually no disk activity during this time.\n\nCurious. I have not noticed much of any change in postmaster startup\ntime on Unix. Can you run a profile or something to see where the\ntime is going?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 May 2002 18:23:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More schema queries "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 17 May 2002 23:24\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] More schema queries \n> \n> \n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> > helpdesk=# select relnamespace, relname from pg_class where relname \n> > like 'pg_%';\n> \n> > relnamespace | relname\n> > --------------+---------------------------------\n> > ...\n> > 2200 | pg_user\n> > 2200 | pg_rules\n> > 2200 | pg_views\n> > 2200 | pg_tables\n> > 2200 | pg_indexes\n> > 2200 | pg_stats\n> > 2200 | pg_stat_all_tables\n> > 2200 | pg_stat_sys_tables\n> \n> Bizarre. It's not that way here. Would you mind updating to \n> CVS tip, rebuilding, and seeing if you can duplicate that? \n> Also, make sure you're using the right initdb script ...\n\nNo problem, but it won't be until Monday now. I'll let you know what I\nfind.\n\n> > ... One of the tests is to\n> > figure out if one of the base datasources in the query is a view - \n> > currently this is easy, but in 7.3 we could have a table & \n> a view with \n> > the same name in different schemas, hence by using the path we can \n> > figure out what object we're actually using.\n> \n> Actually, I'd venture that you do *not* want to do namespace \n> search resolution for yourself; have you thought about how \n> messy the SQL query would be? The new datatypes regclass, \n> etc are intended to handle it for you. For example\n> \n> select 'foo'::regclass::oid;\t-- get OID of table foo in search path\n> \n> select 'foo.bar'::regclass::oid; -- get OID of table foo.bar\n> \n> select relkind from pg_class where oid = 'foo'::regclass; -- \n> is foo a view?\n\nIt doesn't work quite like that anyway. pgAdmin has a base library\n(pgSchema) which is a hierarchy of collections of objects which\nrepresent an entire server. 
It populates itself on demand, so the first\ntime you access a collection of views (for example), pgSchema queries\nthe database to build the collection of views in that database (now\nschema of course as there's an extra level in the hierarchy). Future\naccesses to that part of the hierarchy are *very* quick (not that\ninitial ones are particularly slow). The only downside is that you may\nnot notice new objects from other developers immediately (though the\nuser can manually refresh any part of the hierarchy).\n\nAnyway, long story short, once I know the search path is\ntestschema,public I'll just do:\n\nIf\nsvr.Databases(\"dbname\").Namespaces(\"testschema\").Views.Exists(\"viewname\"\n) Then ...\nIf svr.Databases(\"dbname\").Namespaces(\"public\").Views.Exists(\"viewname\")\nThen ...\n\nAnyway, current_schemas() seems ideal, thanks.\n\n> > Incidently if you're interested at the moment, you may \n> remember that \n> > in 7.2 beta there was a problem with slow startup under \n> Cygwin which \n> > was down to a few seconds by release... The last 2 \n> snapshots I've run \n> > take well over a minute for postmaster startup on a P3M \n> 1.13GHz/512Mb \n> > under little load. There is virtually no disk activity during this \n> > time.\n> \n> Curious. I have not noticed much of any change in postmaster \n> startup time on Unix. Can you run a profile or something to \n> see where the time is going?\n\nProbably, but I'd need hand-holding as I don't have a clue how to do\nthat. If you can send some instructions I'll give it a go though it'll\nprobably be tomorrow now as I'm starting to fall asleep.\n\nRegards, Dave.\n",
"msg_date": "Fri, 17 May 2002 23:38:04 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: More schema queries "
},
{
"msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> It doesn't work quite like that anyway.\n\nOh, so essentially you want to simulate the namespace search on the\napplication side. I see.\n\n> Anyway, current_schemas() seems ideal, thanks.\n\nIt may not be exactly what you need, because it doesn't tell you about\nimplicitly searched schemas --- which always includes pg_catalog and\nwill include a temp namespace if you've activated one. For instance,\nif current_schemas claims the search path is\n\nregression=> select current_schemas();\n current_schemas\n-----------------\n {tgl,public}\n(1 row)\n\nthen the real path is effectively {pg_catalog,tgl,public}, or possibly\n{pg_temp_NNN,pg_catalog,tgl,public}.\n\nThere was already some discussion about making a variant version of\ncurrent_schemas() that would tell you the Whole Truth, including the\nimplicitly searched schemas. Seems like we'd better do that; otherwise\nwe'll find people hardwiring knowledge of these implicit search rules\ninto their apps, which is probably a bad idea.\n\nAnyone have a preference about what to call it? I could see making a\nversion of current_schemas() that takes a boolean parameter, or we\ncould choose another function name for the implicit-schemas-too version.\n\n\n>> Curious. I have not noticed much of any change in postmaster \n>> startup time on Unix. Can you run a profile or something to \n>> see where the time is going?\n\n> Probably, but I'd need hand-holding as I don't have a clue how to do\n> that.\n\nI'm not sure how to do it on Cygwin, either. On Unix you'd build a\nprofilable backend executable using\n\tcd pgsql/src/backend\n\tgmake clean\n\tgmake PROFILE=\"-pg\" all\ninstall same, run it, and then use gprof on the gmon.out file dumped\nat postmaster termination. Dunno if it has to be done differently\non Cygwin.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 May 2002 19:01:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More schema queries "
},
{
"msg_contents": "On Sat, 2002-05-18 at 01:01, Tom Lane wrote:\n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> > It doesn't work quite like that anyway.\n> \n> Oh, so essentially you want to simulate the namespace search on the\n> application side. I see.\n> \n> > Anyway, current_schemas() seems ideal, thanks.\n> \n> It may not be exactly what you need, because it doesn't tell you about\n> implicitly searched schemas --- which always includes pg_catalog and\n> will include a temp namespace if you've activated one. For instance,\n> if current_schemas claims the search path is\n> \n> regression=> select current_schemas();\n> current_schemas\n> -----------------\n> {tgl,public}\n> (1 row)\n> \n> then the real path is effectively {pg_catalog,tgl,public}, or possibly\n> {pg_temp_NNN,pg_catalog,tgl,public}.\n> \n> There was already some discussion about making a variant version of\n> current_schemas() that would tell you the Whole Truth, including the\n> implicitly searched schemas. Seems like we'd better do that; otherwise\n> we'll find people hardwiring knowledge of these implicit search rules\n> into their apps, which is probably a bad idea.\n> \n> Anyone have a preference about what to call it? I could see making a\n> version of current_schemas() that takes a boolean parameter, or we\n> could choose another function name for the implicit-schemas-too version.\n\nor we could make another function with the same name :)\n\ncurrent_schemas('full')\n\n--------------\nHannu\n\n\n",
"msg_date": "20 May 2002 17:17:52 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: More schema queries"
}
] |
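One possible shape for the variant floated above — current_schemas() taking a boolean that says whether to include the implicitly searched schemas — would look like the following. Since the thread ends without a decision, treat the exact signature and the example output as illustrative only:

```sql
-- Explicit search path only:
SELECT current_schemas(false);
-- e.g. {tgl,public}

-- Including the implicitly searched schemas
-- (pg_catalog, and pg_temp_NNN if a temp namespace is active):
SELECT current_schemas(true);
-- e.g. {pg_catalog,tgl,public}
```

An application like pgAdmin would then use the true form for name resolution rather than hardwiring knowledge of the implicit search rules.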
[
{
"msg_contents": "The contents of the error message are:\n\nconn->errorMessage.data\t0x00312440 \"pqFlush() -- couldn't send data:\nerrno=0\nNo error A non-blocking socket operation could not be completed\nimmediately.\n\nfor this:\n\n if (PQputline(conn, pszBCPdata[i++]) == EOF)\n\t\t\t printf(\"Error inserting data on row %d\\n\",\ni-1);\n\nWhat is the correct recovery action? Do I send the same buffer again?\n",
"msg_date": "Fri, 17 May 2002 16:00:18 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Error on PQputline()"
},
{
"msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> The contents of the error message are:\n> conn->errorMessage.data\t0x00312440 \"pqFlush() -- couldn't send data:\n> errno=0\n> No error A non-blocking socket operation could not be completed\n> immediately.\n\nYou're running libpq with the nonblocking mode selected?\n\n> What is the correct recovery action?\n\nRedesign libpq's nonblock mode :-(. It's a mess; a quick hack that\ndoesn't even try to cover all cases, and is unreliable in the ones it\ndoes cover. You can find my previous rants on the subject in the\narchives from a couple years back (around Jan '00 I believe). IMHO\nwe should never have accepted that patch at all.\n\nShort of that, don't use the COPY code with nonblock.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 May 2002 19:10:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error on PQputline() "
}
] |
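The recovery question Dann asks ("Do I send the same buffer again?") comes down to ordinary non-blocking write() semantics: EAGAIN/EWOULDBLOCK means the kernel buffer is full and the *same remaining bytes* must be retried later, not that data was lost. A minimal POSIX sketch of that discipline — this is not libpq's code, and the function names are invented — demonstrated on a non-blocking pipe rather than a real backend socket:

```c
/* A sketch of the retry discipline for non-blocking writes -- not
 * libpq's actual code; the function names here are invented for the
 * example.  EAGAIN / EWOULDBLOCK from write() means "kernel buffer
 * full, resend the same remaining bytes later", not a fatal error. */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write all of buf to wfd, retrying on EINTR and on EAGAIN /
 * EWOULDBLOCK.  rfd is the read side of the pipe, which we drain
 * when the buffer fills; a real client would poll()/select() for
 * writability instead.  Returns 0 on success, -1 on a hard error. */
static int
send_all(int wfd, int rfd, const char *buf, size_t len)
{
    size_t      done = 0;
    char        sink[4096];

    while (done < len)
    {
        ssize_t     n = write(wfd, buf + done, len - done);

        if (n >= 0)
        {
            done += (size_t) n; /* partial writes are normal */
            continue;
        }
        if (errno == EINTR)
            continue;           /* interrupted: just retry */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
        {
            /* Buffer full: make room, then resend the remainder. */
            while (read(rfd, sink, sizeof sink) > 0)
                ;
            continue;
        }
        return -1;              /* genuine failure */
    }
    return 0;
}

/* Demo: push 64 kB through a non-blocking pipe; 0 on success. */
int
demo_nonblocking_send(void)
{
    int         fds[2];
    static char big[65536];
    int         rc;

    if (pipe(fds) != 0)
        return -1;
    fcntl(fds[0], F_SETFL, fcntl(fds[0], F_GETFL) | O_NONBLOCK);
    fcntl(fds[1], F_SETFL, fcntl(fds[1], F_GETFL) | O_NONBLOCK);

    memset(big, 'x', sizeof big);
    rc = send_all(fds[1], fds[0], big, sizeof big);

    close(fds[0]);
    close(fds[1]);
    return rc;
}
```

The key point for Dann's case: after an EOF return from PQputline in this situation, the unsent tail of the buffer is still pending, so resending from the start would duplicate data — which is exactly why Tom calls the nonblock COPY path unreliable.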
[
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, May 17, 2002 4:10 PM\n> To: Dann Corbit\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Error on PQputline() \n> \n> \n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n> > The contents of the error message are:\n> > conn->errorMessage.data\t0x00312440 \"pqFlush() -- \n> couldn't send data:\n> > errno=0\n> > No error A non-blocking socket operation could not be completed\n> > immediately.\n> \n> You're running libpq with the nonblocking mode selected?\n\nActually no. It should be the default mode for a connection made by\nPQconnectdb(). That's what made the error so puzzling.\n\n> > What is the correct recovery action?\n> \n> Redesign libpq's nonblock mode :-(. It's a mess; a quick hack that\n> doesn't even try to cover all cases, and is unreliable in the ones it\n> does cover. You can find my previous rants on the subject in the\n> archives from a couple years back (around Jan '00 I believe). IMHO\n> we should never have accepted that patch at all.\n> \n> Short of that, don't use the COPY code with nonblock.\n\nI am trying to figure out if it is faster to bulk copy from a file on\nthe server or using an API from the client. 
It boils down to this:\n\n\"Would it be faster to write a file to disk and read it again on the\nlocal host for the server or to send the calls via libpq client\nmessages?\"\n\nIt could be that the TCP/IP overhead exceeds the overhead of writing the\nfile to disk and reading it again.\n\nI have a data statement (in test.h) that consists of 1.6 million rows of\ndata to spin into the database.\n\nHere is the complete program:\n\n#include <windows.h>\n#include <stdlib.h>\n#include <time.h>\n#include \"libpq-fe.h\"\n#include \"glob.h\" /* member variables in the objects */\n\n#include \"test.h\"\n\nint init_comm(void)\n{\n WORD wVersionRequested;\n WSADATA wsaData;\n int err;\n\n wVersionRequested = MAKEWORD(2, 2);\n\n err = WSAStartup(wVersionRequested, &wsaData);\n if (err != 0) {\n /* Tell the user that we could not find a usable */\n /* WinSock DLL. */\n return 0;\n }\n return 1;\n}\n\nvoid ProcessTuples(void);\n\nint ExecuteImmediate(char *command, Qtype q_t)\n{\n int problem = 0;\n#ifdef _DEBUG\n printf(\"%s\\n\", command);\n#endif\n result = PQexec(conn, command);\n switch (rc = PQresultStatus(result)) {\n\n /* We should never actually call this. Left in for debugging...\n*/\n /* All tuple processing is handled low-level to pass data back\nto\n * CONNX */\n\n case PGRES_TUPLES_OK: /* Data set successfully created */\n#ifdef _DEBUG\n printf(\"#rows affected %s\\n\", PQcmdTuples(result));\n#endif\n ProcessTuples();\n break;\n case PGRES_EMPTY_QUERY: /* Empty query supplied -- do nothing...\n*/\n case PGRES_COMMAND_OK: /* Query succeeds, but returns no\nresults */\n /* If we did a select, we should (at least) have a result set of\n * empty tuples. 
*/\n if (q_t == QUERY_TYPE_SELECT)\n problem = 1;\n break;\n case PGRES_BAD_RESPONSE:\n case PGRES_NONFATAL_ERROR:\n case PGRES_FATAL_ERROR:\n {\n problem = 1;\n }\n }\n if (q_t == QUERY_TYPE_INSERT) {\n InsertedOID = PQoidValue(result);\n#ifdef _DEBUG\n printf(\"OID of inserted row is %lu\\n\", (unsigned long)\nInsertedOID);\n#endif\n }\n PQclear(result);\n return problem;\n}\n\nvoid HandleProblem(void)\n{\n const char *m1 = PQresStatus(rc);\n const char *m2 = PQresultErrorMessage(result);\n#ifdef __cplusplus\n String err = m1;\n err = err + m2;\n throw Mcnew CPOSTGRESQLException(conn, rc, (LPCSTR) err,\nszSQLState);\n#endif\n#ifdef _DEBUG\n printf(\"status is %s\\n\", m1);\n printf(\"result message: %s\\n\", m2);\n#endif\n}\n\nvoid BeginTrans(void)\n{\n int problem;\n problem = ExecuteImmediate(\"BEGIN work\", QUERY_TYPE_TRANSACT);\n if (problem)\n HandleProblem();\n}\n\nvoid CommitTrans(void)\n{\n int problem;\n\n problem = ExecuteImmediate(\"COMMIT work\", QUERY_TYPE_TRANSACT);\n if (problem)\n HandleProblem();\n}\n\nvoid RollbackTrans(void)\n{\n int problem;\n\n problem = ExecuteImmediate(\"ROLLBACK work\", QUERY_TYPE_TRANSACT);\n if (problem)\n HandleProblem();\n}\n\nvoid ProcessTuples()\n{\n nrows = PQntuples(result);\n nfields = PQnfields(result);\n#ifdef _DEBUG\n printf(\"number of rows returned = %d\\n\", nrows);\n printf(\"number of fields returned = %d\\n\", nfields);\n#endif\n for (r = 0; r < nrows; r++) {\n for (n = 0; n < nfields; n++)\n printf(\" %s = %s(%d),\",\n PQfname(result, n),\n PQgetvalue(result, r, n),\n PQgetlength(result, r, n));\n printf(\"\\n\");\n }\n}\n\nstatic long cursor_number = 0;\n\nint main(void)\n{\n int problem;\n int i = 0;\n\n struct tm *newtime;\n time_t aclock;\n\n if (init_comm()) {\n conn = PQconnectdb(\"dbname=connxdatasync host=dannfast\");\n if (PQstatus(conn) == CONNECTION_OK) {\n char insert_sql[256];\n printf(\"connection made\\n\");\n } else {\n printf(\"connection failed\\n\");\n return EXIT_FAILURE;\n }\n\n 
puts(\"DROP TABLE cnx_ds_sis_bill_detl_tb started\");\n problem = ExecuteImmediate(\"DROP TABLE cnx_ds_sis_bill_detl_tb\",\nQUERY_TYPE_OTHER);\n if (problem)\n HandleProblem();\n\n puts(\"DROP TABLE cnx_ds_sis_bill_detl_tb finished\");\n puts(\"CREATE TABLE cnx_ds_sis_bill_detl_tb started\");\n problem = ExecuteImmediate(\"CREATE TABLE cnx_ds_sis_bill_detl_tb\n( extr_stu_id char(10), term_cyt char(5), subcode char(5), tran_seq\nint2, crc int8)\", QUERY_TYPE_OTHER);\n if (problem)\n HandleProblem();\n\n puts(\"CREATE TABLE cnx_ds_sis_bill_detl_tb finished\");\n puts(\"going to start bulk copy...\");\n\n\n time(&aclock);\n newtime = localtime(&aclock);\n puts(asctime(newtime));\n result = PQexec(conn, \"COPY cnx_ds_sis_bill_detl_tb FROM STDIN\nDELIMITERS '|'\");\n problem = 0;\n switch (rc = PQresultStatus(result)) {\n case PGRES_BAD_RESPONSE:\n case PGRES_NONFATAL_ERROR:\n case PGRES_FATAL_ERROR:\n {\n problem = 1;\n }\n }\n\n if (problem)\n HandleProblem();\n\n puts(\"done with initialization...\");\n\n while (pszBCPdata[i])\n\t\t{\n if (PQputline(conn, pszBCPdata[i++]) == EOF)\n\t\t\t printf(\"Error inserting data on row %d\\n\",\ni-1);\n\t\t}\n\n PQputline(conn, \"\\\\.\\n\");\n\n PQendcopy(conn);\n\n puts(\"finished with bulk copy...\");\n\n time(&aclock);\n newtime = localtime(&aclock);\n puts(asctime(newtime));\n\n return EXIT_SUCCESS;\n }\n puts(\"initialization of winsock failed.\");\n return EXIT_FAILURE;\n}\n",
"msg_date": "Fri, 17 May 2002 16:18:16 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Error on PQputline() "
},
{
"msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n>> You're running libpq with the nonblocking mode selected?\n\n> Actually no. It should be the default mode for a connection made by\n> PQconnectdb(). That's what made the error so puzzling.\n\nI'm confused too. For starters, I cannot find that error message\nstring about 'A non-blocking socket operation could not be completed\nimmediately' anywhere. Got any idea what's producing that? Exactly\nwhich version of libpq are you using, anyway?\n\n> \"Would it be faster to write a file to disk and read it again on the\n> local host for the server or to send the calls via libpq client\n> messages?\"\n\nGood question. I'd recommend the messaging approach since it eliminates\nlots of headaches about file access privileges and so forth. But on\nsome platforms the overhead could be high.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 May 2002 19:37:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error on PQputline() "
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, May 17, 2002 4:38 PM\n> To: Dann Corbit\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Error on PQputline() \n> \n> \n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n> >> You're running libpq with the nonblocking mode selected?\n> \n> > Actually no. It should be the default mode for a connection made by\n> > PQconnectdb(). That's what made the error so puzzling.\n> \n> I'm confused too. For starters, I cannot find that error message\n> string about 'A non-blocking socket operation could not be completed\n> immediately' anywhere. Got any idea what's producing that? Exactly\n> which version of libpq are you using, anyway?\n\n7.1.3. Sorry for running on fossil PostgreSQL.\n\n/* ---------------------------------------------------------------------\n*/\n/* pqFlush: send any data waiting in the output buffer\n */\nint\npqFlush(PGconn *conn)\n{\n\tchar\t *ptr = conn->outBuffer;\n\tint\t\t\tlen = conn->outCount;\n\n\tif (conn->sock < 0)\n\t{\n\t\tprintfPQExpBuffer(&conn->errorMessage,\n\t\t\t\t\t\t \"pqFlush() --\nconnection not open\\n\");\n\t\treturn EOF;\n\t}\n\n\t/*\n\t * don't try to send zero data, allows us to use this function\nwithout\n\t * too much worry about overhead\n\t */\n\tif (len == 0)\n\t\treturn (0);\n\n\t/* while there's still data to send */\n\twhile (len > 0)\n\t{\n\t\t/* Prevent being SIGPIPEd if backend has closed the\nconnection. */\n#ifndef WIN32\n\t\tpqsigfunc\toldsighandler = pqsignal(SIGPIPE,\nSIG_IGN);\n\n#endif\n\n\t\tint\t\t\tsent;\n\n#ifdef USE_SSL\n\t\tif (conn->ssl)\n\t\t\tsent = SSL_write(conn->ssl, ptr, len);\n\t\telse\n#endif\n\t\t\tsent = send(conn->sock, ptr, len, 0);\n\n#ifndef WIN32\n\t\tpqsignal(SIGPIPE, oldsighandler);\n#endif\n\n\t\tif (sent < 0)\n\t\t{\n\n\t\t\t/*\n\t\t\t * Anything except EAGAIN or EWOULDBLOCK is\ntrouble. 
If it's\n\t\t\t * EPIPE or ECONNRESET, assume we've lost the\nbackend\n\t\t\t * connection permanently.\n\t\t\t */\n\t\t\tswitch (errno)\n\t\t\t{\n#ifdef EAGAIN\n\t\t\t\tcase EAGAIN:\n\t\t\t\t\tbreak;\n#endif\n#if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK !=\nEAGAIN))\n\t\t\t\tcase EWOULDBLOCK:\n\t\t\t\t\tbreak;\n#endif\n\t\t\t\tcase EINTR:\n\t\t\t\t\tcontinue;\n\n\t\t\t\tcase EPIPE:\n#ifdef ECONNRESET\n\t\t\t\tcase ECONNRESET:\n#endif\n\t\nprintfPQExpBuffer(&conn->errorMessage,\n\t\n\"pqFlush() -- backend closed the channel unexpectedly.\\n\"\n\t\n\"\\tThis probably means the backend terminated abnormally\"\n\t\t\t\t\t\t \" before or while\nprocessing the request.\\n\");\n\n\t\t\t\t\t/*\n\t\t\t\t\t * We used to close the socket\nhere, but that's a bad\n\t\t\t\t\t * idea since there might be\nunread data waiting\n\t\t\t\t\t * (typically, a NOTICE message\nfrom the backend\n\t\t\t\t\t * telling us it's committing\nhara-kiri...). Leave\n\t\t\t\t\t * the socket open until\npqReadData finds no more data\n\t\t\t\t\t * can be read.\n\t\t\t\t\t */\n\t\t\t\t\treturn EOF;\n/*\nvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv\nvvvvvvv\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n!!!!!!!\n*/\n\t\t\t\tdefault:\n\t\nprintfPQExpBuffer(&conn->errorMessage,\n\t\t\t\t\t \"pqFlush() -- couldn't send\ndata: errno=%d\\n%s\\n\",\n\t\nerrno, strerror(errno));\n\t\t\t\t\t/* We don't assume it's a fatal\nerror... 
*/\n\t\t\t\t\treturn EOF;\n/*\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n^^^^^^^\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n!!!!!!!\n*/\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tptr += sent;\n\t\t\tlen -= sent;\n\t\t}\n\n\t\tif (len > 0)\n\t\t{\n\t\t\t/* We didn't send it all, wait till we can send\nmore */\n\n\t\t\t/*\n\t\t\t * if the socket is in non-blocking mode we may\nneed to abort\n\t\t\t * here\n\t\t\t */\n#ifdef USE_SSL\n\t\t\t/* can't do anything for our SSL users yet */\n\t\t\tif (conn->ssl == NULL)\n\t\t\t{\n#endif\n\t\t\t\tif (pqIsnonblocking(conn))\n\t\t\t\t{\n\t\t\t\t\t/* shift the contents of the\nbuffer */\n\t\t\t\t\tmemmove(conn->outBuffer, ptr,\nlen);\n\t\t\t\t\tconn->outCount = len;\n\t\t\t\t\treturn EOF;\n\t\t\t\t}\n#ifdef USE_SSL\n\t\t\t}\n#endif\n\n\t\t\tif (pqWait(FALSE, TRUE, conn))\n\t\t\t\treturn EOF;\n\t\t}\n\t}\n\n\tconn->outCount = 0;\n\n\tif (conn->Pfdebug)\n\t\tfflush(conn->Pfdebug);\n\n\treturn 0;\n}\n \n> > \"Would it be faster to write a file to disk and read it again on the\n> > local host for the server or to send the calls via libpq client\n> > messages?\"\n> \n> Good question. I'd recommend the messaging approach since it \n> eliminates\n> lots of headaches about file access privileges and so forth. But on\n> some platforms the overhead could be high.\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Fri, 17 May 2002 16:41:34 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Error on PQputline() "
},
{
"msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n>> I'm confused too. For starters, I cannot find that error message\n>> string about 'A non-blocking socket operation could not be completed\n>> immediately' anywhere. Got any idea what's producing that? Exactly\n>> which version of libpq are you using, anyway?\n\n> 7.1.3. Sorry for running on fossil PostgreSQL.\n\nNo such string in 7.1.3 either.\n\n> printfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t \"pqFlush() -- couldn't send\n> data: errno=%d\\n%s\\n\",\n\t\n> errno, strerror(errno));\n> \t\t\t\t\t/* We don't assume it's a fatal\n> error... */\n> \t\t\t\t\treturn EOF;\n> /*\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> ^^^^^^^\n> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n> !!!!!!!\n> */\n\nUnless your strerror is really weird, that message is only going to have\nproduced \"pqFlush() -- couldn't send data: errno=0\\nNo error\\n\".\nThe bit about a non-blocking socket could not have come from strerror\nAFAICS; it hasn't got enough context to know that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 May 2002 19:46:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error on PQputline() "
}
] |
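The pqFlush() loop quoted above — retry on EINTR, stop on EAGAIN/EWOULDBLOCK, and shift the unsent bytes back into the output buffer — can be sketched in a few lines of Python. This is an illustrative model only (the function name `flush_all` is made up), not libpq code:

```python
import socket

def flush_all(sock: socket.socket, buf: bytes) -> bytes:
    """Send as much of buf as possible; return the unsent remainder.
    Mirrors pqFlush(): retry on EINTR, give up (keeping the leftover
    data for a later retry) on EAGAIN/EWOULDBLOCK, and advance past
    partial sends."""
    while buf:
        try:
            sent = sock.send(buf)
        except BlockingIOError:    # EAGAIN / EWOULDBLOCK
            return buf             # caller retries when writable
        except InterruptedError:   # EINTR (Python >= 3.5 usually retries itself)
            continue
        buf = buf[sent:]           # partial send: skip what went out
    return b""

# Blocking pair: everything goes out, nothing is left over.
a, b = socket.socketpair()
assert flush_all(a, b"hello") == b""
```

On a non-blocking socket the same call returns the remainder instead of blocking, which is exactly the `pqIsnonblocking(conn)` branch in the 7.1.3 source above.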
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 18 May 2002 00:01\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: More schema queries \n> \n> There was already some discussion about making a variant version of\n> current_schemas() that would tell you the Whole Truth, \n> including the implicitly searched schemas. Seems like we'd \n> better do that; otherwise we'll find people hardwiring \n> knowledge of these implicit search rules into their apps, \n> which is probably a bad idea.\n> \n> Anyone have a preference about what to call it? I could see \n> making a version of current_schemas() that takes a boolean \n> parameter, or we could choose another function name for the \n> implicit-schemas-too version.\n\nUse of a parameter seems fine to me. Save having Yet Another Function\n:-) and trying to figure out a sensible name for it!\n\n> >> Curious. I have not noticed much of any change in postmaster\n> >> startup time on Unix. Can you run a profile or something to \n> >> see where the time is going?\n> \n> > Probably, but I'd need hand-holding as I don't have a clue \n> how to do \n> > that.\n> \n> I'm not sure how to do it on Cygwin, either. On Unix you'd \n> build a profilable backend executable using\n> \tcd pgsql/src/backend\n> \tgmake clean\n> \tgmake PROFILE=\"-pg\" all\n> install same, run it, and then use gprof on the gmon.out file \n> dumped at postmaster termination. Dunno if it has to be done \n> differently on Cygwin.\n\nWell, I have gcc & gprof so I assume it'll be pretty much the same. I'll\nhave a play tonight.\n\nThanks Tom,\n\nDave.\n",
"msg_date": "Sat, 18 May 2002 10:06:25 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: More schema queries "
}
] |
[
{
"msg_contents": "The documentation of the sequence privileges on the GRANT reference page\ndoesn't match the code.\n\nDocumented:\n\ncurrval:\tUPDATE\nnextval:\tUPDATE\nsetval:\t\tUPDATE\n\nActual:\n\ncurrval:\tSELECT\nnextval:\tUPDATE\nsetval:\t\tUPDATE\n\nBut shouldn't it more ideally be\n\ncurrval:\tSELECT\nnextval:\tSELECT + UPDATE\nsetval:\t\tUPDATE\n\nbecause nextval allows you to infer the content of the sequence? (Cf.\nUPDATE tab1 SET a = b requires SELECT + UPDATE on tab1.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 18 May 2002 17:45:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Sequence privileges"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> But shouldn't it more ideally be\n\n> currval:\tSELECT\n> nextval:\tSELECT + UPDATE\n> setval:\tUPDATE\n\n> because nextval allows you to infer the content of the sequence? (Cf.\n> UPDATE tab1 SET a = b requires SELECT + UPDATE on tab1.)\n\nOne objection is that testing for both privs will require two aclcheck\ncalls (since aclcheck(SELECT|UPDATE) will check for the OR not the AND\nof the privileges). Not sure it's worth the overhead.\n\nGiven that nextval() is really the only interesting operation on\nsequences (you cannot do a real UPDATE), I don't see a problem with\ninterpreting \"UPDATE\" as \"the right to do nextval()\" for sequences.\n\nSince currval only returns to you the result of your own prior nextval,\nthere is no real point in giving it a different privilege bit.\nAccordingly I think it *should* be testing UPDATE --- the docs are right\nand the code is wrong. (If it weren't for your recent addition of\nsetuid functions, I'd question why currval bothers to make a privilege\ntest at all.)\n\n\"SELECT\" still means what it says: the ability to do a select from\nthe sequence, which lets you see the sequence parameters. So what\nwe really have is:\n\n\tSELECT: read sequence as a table\n\tUPDATE: all sequence-specific operations.\n\nYou could maybe make an argument that setval() should have a different\nprivilege than nextval(), but otherwise this seems sufficient to me.\n\nThere is now room in ACL to invent a couple of sequence-specific\nprivilege bits if it bothers you to use \"UPDATE\" for the can-invoke-\nsequence-functions privilege, but I'm not sure it's worth creating\na compatibility issue just to do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 12:24:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sequence privileges "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"SELECT\" still means what it says: the ability to do a select from\n> the sequence, which lets you see the sequence parameters. So what\n> we really have is:\n> \n> \tSELECT: read sequence as a table\n> \tUPDATE: all sequence-specific operations.\n> \n\nSince the sequence-specific operations are really just function calls, \nmaybe it should be:\n\tSELECT: read sequence as a table\n\tEXECUTE: all sequence-specific operations.\n\nJoe\n\n",
"msg_date": "Sat, 18 May 2002 16:23:22 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Sequence privileges"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> what we really have is:\n>> \n>> SELECT: read sequence as a table\n>> UPDATE: all sequence-specific operations.\n\n> Since the sequence-specific operations are really just function calls, \n> maybe it should be:\n> \tSELECT: read sequence as a table\n> \tEXECUTE: all sequence-specific operations.\n\nBut is it worth creating a compatibility problem for? Existing pg_dump\nscripts are likely to GRANT UPDATE. They certainly won't say GRANT\nEXECUTE since that doesn't even exist in current releases.\n\nI agree that EXECUTE (or some sequence-specific permission name we might\nthink of instead) would be logically cleaner, but I don't think it's\nworth the trouble of coming up with a compatibility workaround. UPDATE\ndoesn't seem unreasonably far off the mark.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 19:45:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sequence privileges "
},
{
"msg_contents": "On Sat, 18 May 2002 19:45:30 -0400\n\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > Since the sequence-specific operations are really just function calls, \n> > maybe it should be:\n> > \tSELECT: read sequence as a table\n> > \tEXECUTE: all sequence-specific operations.\n> \n> But is it worth creating a compatibility problem for? Existing pg_dump\n> scripts are likely to GRANT UPDATE. They certainly won't say GRANT\n> EXECUTE since that doesn't even exist in current releases.\n> \n> I agree that EXECUTE (or some sequence-specific permission name we might\n> think of instead) would be logically cleaner, but I don't think it's\n> worth the trouble of coming up with a compatibility workaround.\n\nWell, one possible compatibility workaround would be trivial -- we could\nhack GRANT so that doing GRANT UPDATE on sequence relations is\ntranslated into GRANT EXECUTE.\n\nAs for whether it's worth the bother, I'm not sure -- neither\nsolution strikes me as particularly clean.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Sat, 18 May 2002 20:00:00 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Sequence privileges"
}
] |
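Tom's objection that `aclcheck(SELECT|UPDATE)` tests the OR of the requested bits — so requiring both privileges would take two aclcheck calls — can be modeled with a toy bitmask check. The constant and function names here are illustrative, not the backend's:

```python
# Hypothetical privilege bits; names are illustrative, not backend constants.
ACL_SELECT = 1 << 0
ACL_UPDATE = 1 << 1

def aclcheck(granted: int, requested: int) -> bool:
    # Models the OR semantics: the check passes if ANY requested bit is held.
    return (granted & requested) != 0

granted = ACL_SELECT  # role holds SELECT only

# aclcheck(SELECT | UPDATE) passes with just SELECT held (OR, not AND) ...
assert aclcheck(granted, ACL_SELECT | ACL_UPDATE)

# ... so requiring SELECT *and* UPDATE really does need two separate calls:
assert not (aclcheck(granted, ACL_SELECT) and aclcheck(granted, ACL_UPDATE))
```

That extra call per check is the overhead Tom is weighing against the cleaner "nextval needs SELECT + UPDATE" rule.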
[
{
"msg_contents": "Dear all,\n\nI would like to transform UTF-8 strings into Java-Unicode. Example :\n- Latin1 : 'é'\n- UTF-8 : 'é' \n- Java Unicode = '\\u00233'\n\nBasically, a Unicode compatible ascii() function would be fine.\nascii('é') should return 233.\n\n1) Has anyone written an ascii UTF-8 safe wrapper to ascii() function? If yes, \nwould you be so kind to publish this function on the list.\n\n2) Are there plans to add an ascii() UTF-8 safe function to PostgreSQL?\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Sat, 18 May 2002 18:53:19 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "UTF-8 safe ascii() function"
},
{
"msg_contents": "Hi Jean-Michel,\n\nJean-Michel POURE <jm.poure@freesurf.fr> a écrit :\n> Dear all,\n> \n> I would like to transform UTF-8 strings into Java-Unicode. Example :\n> - Latin1 : 'é'\n> - UTF-8 : 'é' \n> - Java Unicode = '\\u00233'\n> \n> Basically, a Unicode compatible ascii() function would be fine.\n> ascii('é') should return 233.\n> \n> 1) Has anyone written an ascii UTF-8 safe wrapper to ascii() function?\n> If yes, would you be so kind to publish this function on the list.\n\nOK, I just gave it a try, see the attachment.\n\nThe function is taking the first character of a TEXT element, and\nreturns its UCS2 value. I just did some basic test (i.e. I have not\ntried with 3 or 4 bytes UTF-8 chars). The function is following the\nUnicode 3.2 spec.\n\nSELECT utf8toucs2('a'), utf8toucs2('é');\n utf8toucs2 | utf8toucs2 \n------------+------------\n 97 | 233\n(1 row)\n\nThe function returns -1 on error.\n\n> 2) Are there plans to add an ascii() UTF-8 safe function to\n> PostgreSQL?\n\nI don't think the function I did is useful as such. It would be better\nto make a function that converts the whole string or something.\n\nBy the way, what is the encoding for Java Unicode ? is it always \"\\u\"\nfollowed by 5 hex digits (in which case your example is wrong) ? Then,\nit shouldn't be too difficult to make the relevant function, though I'm\nwondering if the Java programme would convert an incoming '\\' 'u' '0'\n'0' '2' '3' '3' to the corresponding UCS2/UTF16 character ?\n\nMaybe we should have some similar input (and output ?) functionality in\npsql, but then I would much prefer the Perl way, which is\n\\x{hex_digits}, which is unambiguous.\n\nRegards,\n\nPatrice\n\n-- \nPatrice Hédé\nemail: patrice hede(à)islande org\nwww : http://www.islande.org/",
"msg_date": "Sun, 19 May 2002 11:44:13 +0200",
"msg_from": "Patrice Hédé <phede-ml@islande.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] UTF-8 safe ascii() function"
},
{
"msg_contents": "Dear Patrice,\n\nThank you very much. This will save the lives of Java users.\n\n> I don't think the function I did is useful as such. It would be better\n> to make a function that converts the whole string or something.\n\nYes, this would save the lives of some Javascript users. Java Unicode notation \nis the only Unicode understood by Javascript.\n\n> By the way, what is the encoding for Java Unicode ? is it always \"\\u\"\n> followed by 5 hex digits (in which case your example is wrong) ? Then,\n> it shouldn't be too difficult to make the relevant function, though I'm\n> wondering if the Java programme would convert an incoming '\\' 'u' '0'\n> '0' '2' '3' '3' to the corresponding UCS2/UTF16 character ?\n\nJava Unicode notation is not case sensitive ('\\u' = '\\U') and is followed by \nan hexadecimal value.\n\n> Maybe we should have some similar input (and output ?) functionality in\n> psql, but then I would much prefer the Perl way, which is\n> \\x{hex_digits}, which is unambiguous.\n\nThis would be perfect. We should also handle the HTML unicode notation :\n&#{dec_digits} and &#x{hex_digits} as it is unambiguous.\n\nCheers,\nJean-Michel\n\n\n",
"msg_date": "Sun, 19 May 2002 12:44:56 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] UTF-8 safe ascii() function"
},
{
"msg_contents": "Le Dimanche 19 Mai 2002 11:44, Patrice Hédé a écrit :\n> The function is taking the first character of a TEXT element, and\n> returns its UCS2 value. I just did some basic test (i.e. I have not\n> tried with 3 or 4 bytes UTF-8 chars). The function is following the\n> Unicode 3.2 spec.\n\nHi Patrice,\n\nI tried a Japanese character :\nSELECT utf8toucs2 ('æ¯'::text) which returns -1\n\nDo you know why it does not return the UCS-2 value?\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Sun, 19 May 2002 20:08:18 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] UTF-8 safe ascii() function"
},
{
"msg_contents": "Jean-Michel POURE <jm.poure@freesurf.fr> a écrit :\n\n> I tried a Japanese character :\n> SELECT utf8toucs2 ('æ¯'::text) which returns -1\n> \n> Do you know why it does not return the UCS-2 value?\n\nOops, my mistake. I forgot to update a test after a copy-paste. Here is\na new version which should be correct this time ! :)\n\nPatrice\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/",
"msg_date": "Sun, 19 May 2002 21:14:42 +0200",
"msg_from": "Patrice Hédé <phede-ml@islande.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] UTF-8 safe ascii() function"
},
{
"msg_contents": "Le Dimanche 19 Mai 2002 21:14, Patrice Hédé a écrit :\n> Oops, my mistake. I forgot to update a test after a copy-paste. Here is\n> a new version which should be correct this time ! :)\n\nThanks Patrice, merci Patrice !\n",
"msg_date": "Mon, 20 May 2002 09:31:31 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] UTF-8 safe ascii() function"
},
{
"msg_contents": "Postgres 7.2\nI have an interval selected from a max(occurance) - min(occurance) where\nbla.\nI now want to multiply this by a rate - to create a charge...\n\nIf I use to_char( interval, 'SSSS');\nI will get a seconds conversion - but that works on seconds since midnight -\nhence\nwith a one day period.\n\nAre there any better ways of converting a timestamp to an integer?\n\n\nThanks\n\nGareth\n\n\n",
"msg_date": "Mon, 20 May 2002 12:53:01 +0100",
"msg_from": "\"Gareth Kirwan\" <gbjk@thermeoneurope.com>",
"msg_from_op": false,
"msg_subject": "Interval to number"
},
{
"msg_contents": "\nEXTRACT is your friend :)\n\nSELECT EXTRACT(EPOCH FROM max(occurrance) - min(occurrance))::integer ;\n\n- brian\n\nk=# SELECT EXTRACT(EPOCH FROM now() - '2001-01-01') ;\n date_part\n----------------\n 43583467.94995\n(1 row)\n\n\nOn Mon, 20 May 2002, Gareth Kirwan wrote:\n\n>\n> Postgres 7.2\n> I have an interval selected from a max(occurance) - min(occurance) where\n> bla.\n> I now want to multiply this by a rate - to create a charge...\n>\n> If I use to_char( interval, 'SSSS');\n> I will get a seconds conversion - but that works on seconds since midnight -\n> hence\n> with a one day period.\n>\n> Are there any better ways of converting a timestamp to an integer?\n>\n>\n> Thanks\n>\n> Gareth\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nWm. Brian McCane | Life is full of doors that won't open\nSearch http://recall.maxbaud.net/ | when you knock, equally spaced amid those\nUsenet http://freenews.maxbaud.net/ | that open when you don't want them to.\nAuction http://www.sellit-here.com/ | - Roger Zelazny \"Blood of Amber\"\n\n",
"msg_date": "Mon, 20 May 2002 11:35:07 -0500 (CDT)",
"msg_from": "Brian McCane <bmccane@mccons.net>",
"msg_from_op": false,
"msg_subject": "Re: Interval to number"
},
{
"msg_contents": "Oh :(\n\nI'd given up waiting for a response.\nThanks though Brian ... I currently have the triggered function:\n\nCREATE FUNCTION logSession () RETURNS opaque AS '\n\tDECLARE\n\tclient_rate\tnumeric(10,2);\n\tperiod \t\tinterval;\n\tto_charge\tnumeric(10,2);\n\tBEGIN\n\t\tSELECT INTO client_rate rate from clients c where c.id=OLD.client;\n\t\tSELECT INTO period max(time) - min(time) FROM convs WHERE\nsession_id=OLD.id;\n\t\tSELECT INTO to_charge (to_number(to_char(period, ''SSSS''), ''99999D99'')\n/ 60 * client_rate);\n\n\t\tINSERT INTO previous_sessions SELECT * from current_sessions c WHERE\nc.id=OLD.id;\n\t\tINSERT INTO logged_convs SELECT * from convs c WHERE c.session_id=OLD.id;\n\n\t\tINSERT INTO session_logs (session_id, time, length, charge, paid) VALUES\n(OLD.id,OLD.time,period, to_charge, ''false'');\n\tRETURN OLD;\n\tEND;'\nlanguage 'plpgsql';\n\n\nSo I'll try to build it into that.\n\n-----Original Message-----\nFrom: Brian McCane [mailto:bmccane@mccons.net]\nSent: 20 May 2002 17:35\nTo: Gareth Kirwan\nCc: pgsql-admin@postgresql.org\nSubject: Re: [ADMIN] Interval to number\n\n\n\nEXTRACT is your friend :)\n\nSELECT EXTRACT(EPOCH FROM max(occurrance) - min(occurrance))::integer ;\n\n- brian\n\nk=# SELECT EXTRACT(EPOCH FROM now() - '2001-01-01') ;\n date_part\n----------------\n 43583467.94995\n(1 row)\n\n\nOn Mon, 20 May 2002, Gareth Kirwan wrote:\n\n>\n> Postgres 7.2\n> I have an interval selected from a max(occurance) - min(occurance) where\n> bla.\n> I now want to multiply this by a rate - to create a charge...\n>\n> If I use to_char( interval, 'SSSS');\n> I will get a seconds conversion - but that works on seconds since\nmidnight -\n> hence\n> with a one day period.\n>\n> Are there any better ways of converting a timestamp to an integer?\n>\n>\n> Thanks\n>\n> Gareth\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> 
http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nWm. Brian McCane | Life is full of doors that won't open\nSearch http://recall.maxbaud.net/ | when you knock, equally spaced amid\nthose\nUsenet http://freenews.maxbaud.net/ | that open when you don't want them to.\nAuction http://www.sellit-here.com/ | - Roger Zelazny \"Blood of Amber\"\n\n\n",
"msg_date": "Mon, 20 May 2002 17:40:46 +0100",
"msg_from": "\"Gareth Kirwan\" <gbjk@thermeoneurope.com>",
"msg_from_op": false,
"msg_subject": "Re: Interval to number"
},
{
"msg_contents": "hi..\n\n redhat 7.3 ..can't seem to get pgaccess to want to load my database\nwhich btw is created and for which a user is also..postmaster is running\nfine with -i -D /usr/local/pgsql/data..\n\nwhen I try to load my database into pgaccess it says: no pg_hba.conf\nentry for host 127.0.0.1.user lee..database handiman..\n\nthx anyone\nlee\n-====\n\n\n\n",
"msg_date": "22 May 2002 07:19:53 -0700",
"msg_from": "lee johnson <lee@imyourhandiman.com>",
"msg_from_op": false,
"msg_subject": "no pg_hba.conf"
},
{
"msg_contents": "> -----Original Message-----\n> From: pgsql-interfaces-owner@postgresql.org\n> [mailto:pgsql-interfaces-owner@postgresql.org]On Behalf Of lee johnson\n> Sent: Wednesday, May 22, 2002 10:20 AM\n> To: pgsql-interfaces@postgresql.org\n> Subject: [INTERFACES] no pg_hba.conf\n>\n>\n> hi..\n>\n> redhat 7.3 ..can't seem to get pgaccess to want to load my database\n> which btw is created and for which a user is also..postmaster is running\n> fine with -i -D /usr/local/pgsql/data..\n>\n> when I try to load my database into pgaccess it says: no pg_hba.conf\n> entry for host 127.0.0.1.user lee..database handiman..\n\nSo... is that true? Have you looked in pg_hba.conf? What did you add to\nthat?\n\n -J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Thu, 23 May 2002 09:07:11 -0400",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: no pg_hba.conf"
},
{
"msg_contents": ">\n>\n>So... is that true? Have you looked in pg_hba.conf? What did you add to\n>that?\n>\n> \n>\nokay its working now ...I had previously upgraded 'to'\n RH7.3 but backed out due to needing temporarily to install windows and \nupon a fresh install of 7.3 all is fine now .\n\nthx for efforts mucho\nlee\n-=\n\n",
"msg_date": "Sun, 26 May 2002 08:15:45 -0700",
"msg_from": "lee <lee@imyourhandiman.com>",
"msg_from_op": false,
"msg_subject": "Re: no pg_hba.conf"
}
] |
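For readers following the utf8toucs2() discussion above, the same first-character-to-code-point logic — and the hex `\u` escape Patrice asks about, which is why `é` is `\u00e9` rather than `\u00233` — can be sketched in Python. The function name is hypothetical and this is not the C attachment from the thread:

```python
def utf8_first_codepoint(buf: bytes) -> int:
    """Return the code point of the first UTF-8 character in buf,
    or -1 on error -- the same contract as utf8toucs2() above."""
    try:
        return ord(buf.decode("utf-8")[0])
    except (UnicodeDecodeError, IndexError):
        return -1

assert utf8_first_codepoint("a".encode("utf-8")) == 97    # 1-byte char
assert utf8_first_codepoint("é".encode("utf-8")) == 233   # 2-byte char
assert utf8_first_codepoint(b"\xe6\xaf") == -1            # truncated 3-byte sequence

# The Java/JavaScript escape is *hex*, so code point 233 renders as \u00e9:
assert "\\u%04x" % 233 == "\\u00e9"
```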
[
{
"msg_contents": "I've been looking at the authentication and networking code and\nwould like to float a trial balloon.\n\n1) add SASL. This is a new standards-track protocol that is often\n described as \"PAM\" for network authentication. PostgreSQL could\n remove *all* protocol-specific authentication code and use\n standard plug-in libraries instead.\n\n (It's worth noting that SSL/TLS operates at a lower level than\n SASL. This has some interesting consequences, see below.)\n\n After the black-box authentication finishes, the postmaster will\n have up to two pieces of information: the peer's client cert (SSL)\n and a string containing the Kerberos principal, user name verified\n with password/one-time-password/CRAM, etc.\n\n PostgreSQL authentication would be reduced to specifying which\n authentication methods are acceptable for each database, then\n mapping that authenticated user string and/or cert to a pguser.\n\n2) add ZLIB compression.\n\nThe last point needs a bit of explanation. With SASL, the buffers\nmay be modified due to the authentication protocol selected, so the\nlow-level routines in pqcomm.c and fe-connect.c must be modified.\nBut since this is happening anyway, it would be easy to wrap\nsasl_encode with ZLIB compression and sasl_decode with ZLIB decompression,\nwith pq_flush() (and client's equivalent) doing a \"sync flush\" of\nthe compression buffer.\n\nYou obviously don't need compression on the Unix socket or a fast\nnetwork connection, but if you're on a T1 or slower the reduced\ntransmission time should more than offset the time spent in \ncompression/decompression.\n\nDrawbacks\n\nThe biggest drawback, at least initially, is that the initial\nexchange will need to be totally rewritten. 
One possibility\ncould use something like this:\n\n S: 220 example.com PostgreSQL 8.1\n C: HELO client.com\n S: 250-example.com\n S: 250-AUTH ANONYMOUS KERBEROS4 <list of authentication methods>\n S: 250-STARTTLS <server accepts SSL/TLS>\n S: 250-COMPRESSION <compress datastream>\n S: 250 HELP\n C: STARTTLS pq.virtual.com <allows virtual domains>\n <SSL/TLS negotiation occurs *here*>\n S: 250-pq.virtual.com\n S: 250-AUTH ANONYMOUS PLAIN KERBEROS4 <note extra method>\n S: 250-COMPRESSION\n S: 250-some extra functions only available with TLS/SSL sessions\n S: 250 HELP\n C: AUTH PLAIN user password <use simple username/password>\n S: 220 OK\n C: COMPRESSION ON\n S: 220 OK\n C: OPEN database\n S: 220 OK\n\nand then the system drops back to the existing data exchange\nformat. Or it could look like something entirely different - the\nmost important thing is that the server needs to provide a list\nof authentication methods, the client chooses one, and it either\nsucceeds or the client can retry. However a protocol something\nlike this has the strong advantage of being well-tested in the \nexisting protocols.\n\nBear\n",
"msg_date": "Sat, 18 May 2002 11:39:51 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "SASL, compression?"
},
{
"msg_contents": "On Sat, 18 May 2002 11:39:51 -0600 (MDT)\n\"Bear Giles\" <bgiles@coyotesong.com> wrote:\n> 1) add SASL. This is a new standards-track protocol that is often\n> described as \"PAM\" for network authentication. PostgreSQL could\n> remove *all* protocol-specific authentication code and use\n> standard plug-in libraries instead.\n\nI'm not that clueful about SASL -- would this mean that we could get\nrid of the PostgreSQL code that does SSL connections, plus MD5, crypt,\nident, etc. based authentication, and instead just use the SASL stuff?\nOr would SSL/TLS support need to co-exist with SASL?\n\n> 2) add ZLIB compression.\n\nThis was discussed before, and the conclusion was that compression\nis of fairly limited utility, and can be accomplished by using\nssh -- so it's not worth the bloat. But there were some dissenting\nopinions at the time, so this might merit further discussion...\n\n> The biggest drawback, at least initially, is that the initial\n> exchange will need to be totally rewritten.\n\nI'd like to see a FE/BE protocol change in 7.4, so this might be a\npossibility at that point.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Sat, 18 May 2002 15:18:53 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: SASL, compression?"
},
{
"msg_contents": "Bear Giles <bgiles@coyotesong.com> writes:\n> 1) add SASL. This is a new standards-track protocol that is often\n> described as \"PAM\" for network authentication. PostgreSQL could\n> remove *all* protocol-specific authentication code and use\n> standard plug-in libraries instead.\n\nTo me, \"new standards-track protocol\" translates as \"pie in the sky\".\nWhen will there be tested, portable, BSD-license libraries that we\ncould *actually* use? I'm afraid this really would end up meaning\nwriting and/or supporting our own SASL code ... and I think there\nare more important things for the project to be doing.\n\nIMHO we've got more than enough poorly-supported authentication options\nalready. Unless you can make a credible case that using SASL would\nallow us to rip out PAM, Kerberos, MD5, etc *now* (not \"in a few releases\nwhen everyone's switched to SASL\"), I think this will end up just being\nanother one :-(.\n\n(It doesn't help any that PAM support was sold to us just one release\ncycle back on the same grounds that it'd be the last authentication\nmethod we'd need to add. I'm more than a tad wary now...)\n\n\n> 2) add ZLIB compression.\n\nWhy do people keep wanting to reinvent SSH tunneling?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 15:45:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SASL, compression? "
},
{
"msg_contents": "> I'm not that clueful about SASL -- would this mean that we could get\n> rid of the PostgreSQL code that does SSL connections, plus MD5, crypt,\n> ident, etc. based authentication, and instead just use the SASL stuff?\n\nWe would still need the ability to map user identities -> pgusers for\nthose methods where the client can't specify an arbitrary user name\n(e.g., Kerberos and GSSAPI), but strictly speaking that's an \"authorization\"\nproblem, not an \"authentication\" problem, and it can be handled entirely\nwithin the backend.\n\n> [W]ould SSL/TLS support need to co-exist with SASL?\n\nYes. SASL effectively works at the application layer. It's now common\npractice for one of the application commands to be STARTTLS (perhaps by\nanother name) that both sides use as a signal to negotiate a TLS/SSL\nsession.\n\nThe benefit of this approach is that it's easily migrated to Unix\nsockets, IPv6, etc.\n\n> > 2) add ZLIB compression.\n> \n> This was discussed before, and the conclusion was that compression\n> is of fairly limited utility, and can be accomplished by using\n> ssh -- so it's not worth the bloat. But there were some dissenting\n> opinions at the time, so this might merit further discussion...\n\nI agree, it wasn't worth the effort with the existing code. But\nif we rewrite the lowest level routines then the amount of bloat can\nbe minimized. \n\nBear\n",
"msg_date": "Sat, 18 May 2002 14:33:28 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "Re: SASL, compression?"
},
{
"msg_contents": "> Bear Giles <bgiles@coyotesong.com> writes:\n> > 1) add SASL. This is a new standards-track protocol that is often\n> > described as \"PAM\" for network authentication.\n> \n> To me, \"new standards-track protocol\" translates as \"pie in the sky\".\n> When will there be tested, portable, BSD-license libraries that we\n> could *actually* use?\n\nhttp://asg.web.cmu.edu/sasl/sasl-implementations.html\n\n> Unless you can make a credible case that using SASL would\n> allow us to rip out PAM, Kerberos, MD5, etc *now* (not \"in a few releases\n> when everyone's switched to SASL\"), I think this will end up just being\n> another one :-(.\n\nhttp://asg.web.cmu.edu/sasl/sasl-projects.html\n\nIf it's being used in Sendmail, Cyrus IMAP and OpenLDAP, with preliminary\nwork (sponsored by Carnegie Mellon University) in supporting it for CVS\nand LPRng and possibly SSH I think it's safe to say it's beyond \"vaporware\"\nat this point.\n\nThe only reason I was waving my hands a bit is that I'm not sure if\nSASL 2.x is considered production-ready yet. We could support SASL 1.x,\nbut if 2.x is coming out within 6-12 months then it may make more sense\nto target 2.x instead of releasing 1.x today, then switching to 2.x in \nthe next release.\n\nIf there's a consensus that we should proceed, I would also be the \nfirst to argue that we should contact CMU for assistance in the \nconversion. Hopefully they have enough experience with their cyrus\npackage that we can really put this issue to bed. (Meanwhile PostgreSQL\nwould get more free advertising as another major project using their\nSASL library.)\n\n> (It doesn't help any that PAM support was sold to us just one release\n> cycle back on the same grounds that it'd be the last authentication\n> method we'd need to add. I'm more than a tad wary now...)\n\nUmm... I don't know what to say. 
This is a common misunderstanding of\nPAM (and one reason I *really* hate those PAM Kerberos modules) but people\nkeep repeating it. PAM was only designed for local use, but people keep\ntrying to use it for network authentication even though us security \nfreaks keep pointing out that using some of those modules on a network\nwill leave your system wide open. In contrast SASL was designed from the\nstart to work over an untrusted network.\n\nThis isn't to say that PAM support is totally useless - it may be a\nclean way to handle the ongoing Kerberos principal -> pguser issue, but\nit's a nonstarter for authentication purposes unless you know you're\non the Unix socket.\n\n> > 2) add ZLIB compression.\n> \n> Why do people keep wanting to reinvent SSH tunneling?\n\nOne good reason is that secure sites will prohibit them. SSH tunnels\nrequire that clients have shell accounts on the remote system, and\non a dedicated database server you may have no accounts other than the\nsysadmins who administer the box.\n\nI'm aware of the various tricks you can do - setting the shell to \n/bin/false, requiring RSA authentication and setting the no-tty flag\nin the 'known_keys' file, etc., but at the end of the day there are \nstill extra shell accounts on that system.\n\nSSH tunnels are a good stopgap measure while you add true TLS/SSL\nsupport, but they can't be considered a replacement for that support.\n\nBear\n",
"msg_date": "Sat, 18 May 2002 15:11:39 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "Re: SASL, compression?"
},
{
"msg_contents": "What are the benefits of SASL+Postgresql compared to Postgresql over plain SSL?\n\nCoz Postgresql already supports SSL right?\n\nCheerio,\nLink.\n\nAt 03:11 PM 5/18/02 -0600, Bear Giles wrote:\n>If it's being used in Sendmail, Cyrus IMAP and OpenLDAP, with preliminary\n>work (sponsored by Carnegie Mellon University) in supporting it for CVS\n>and LPRng and possibly SSH I think it's safe to say it's beyond \"vaporware\"\n>at this point.\n\n\n>I'm aware of the various tricks you can do - setting the shell to\n>/bin/false, requiring RSA authentication and setting the no-tty flag\n>in the 'known_keys' file, etc., but at the end of the day there are\n>still extra shell accounts on that system.\n>\n>SSH tunnels are a good stopgap measure while you add true TLS/SSL\n>support, but they can't be considered a replacement for that support.\n>\n>Bear\n\n\n",
"msg_date": "Mon, 20 May 2002 14:35:09 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: SASL, compression?"
},
{
"msg_contents": "> What are the benefits of SASL+Postgresql compared to Postgresql over plain SSL?\n \nSASL is orthogonal to SSL. SASL is an application-layer library\nand can be run over either regular sockets or SSL. However there\nare SASL hooks to tell it that it's running over a secure channel.\n\nThe anticipated benefit of SASL is that it would replace all of the\ncurrent authentication code with a set of standard plugins. The \nauthority problem would be reduced to a simple text mapping.\n\n(BTW, I didn't make it clear earlier but \"authentication\" is figuring\nout who the other party is, \"authority\" is figuring out what they're\nentitled to do.)\n\nPAM is *not* a solution to network authentication since it was never\ndesigned for it. One well-known nightmare scenario is the Kerberos\nPAM modules - they were designed to be used by local users logging\nonto a virtual terminal, to eliminate the need to modify login to\nacquire Kerberos credentials directly. But in the wild they've been \nseen used with Apache \"mod_pam\" modules to \"authenticate\" Kerberos\nusers. Since they require the Kerberos principal and password to be \ntransmitted across the wire in the clear, they're major security holes\nwhen used this way.\n\n> Coz Postgresql already supports SSL right?\n\nPostgresql minimally supports SSL. It contains some significant\ncoding errors, poor initialization, and no support for client\ncertificates. My recent patches should go a long way towards \nfixing that.\n\nBear\n",
"msg_date": "Mon, 20 May 2002 01:11:06 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "Re: SASL, compression?"
},
{
"msg_contents": "At 01:11 AM 5/20/02 -0600, Bear Giles wrote:\n> > What are the benefits of SASL+Postgresql compared to Postgresql over \n> plain SSL?\n>\n>The anticipated benefit of SASL is that it would replace all of the\n>current authentication code with a set of standard plugins. The\n>authority problem would be reduced to a simple text mapping.\n\n[I'm not a pgsql hacker, so feel free to ignore me :) ]\n\nI can see the benefit of SASL as a standard in public exposed network \nservices like email servers (SMTP, POP, IMAP), where you can support \ndifferent email clients which themselves may or may not support SASL and \nmay use different SASL libraries.\n\nBut for Postgresql - communications is mainly between internal db clients \n(which use the pgsql libraries) and postmaster.\n\nWould the SASL code allow JDBC, Perl DBI+DBD postgresql clients to support \nSASL (and encryption) seamlessly? If it would then that's great. If it's \njust psql then not so great.\n\nBecause replacing current authentication code doesn't seem as obvious a \nbenefit to me. The plugin thing sounds useful tho - modular. But would the \nsimple text mapping for authorisation be as simple when UserX is only \nsupposed to have SELECT access to certain tables?\n\nTo me there may be more bang for the buck by improving support for network \nlayer tunnels - like SSL (SASL has more application layer stuff). Maybe even \nsupport plugins for network layer tunnels, rather than plugins for \nauthentication. Because Postgresql already provides authentication and \nauthorisation, we may just need compression/encryption/other tunneling in \nvarious forms.\n\nWould something like this be possible:\nFor postgresql clients - standardise on two handles for input and output \n(ala djb's tcpserver), set environment variables, exec/fork a tunnelapp \nwith argument string. 
The tunnelapp will read from \noutput handle, write to \ninput handle, and make connection to the tunnelserver (which is where \nthings get difficult - postmaster)..\n\nThen you could have an SASL tunnelapp, an SSL tunnelapp, an SSH tunnelapp.\n\nThis would be bad for O/Ses with not so good fork support like solaris and \nwindows. But the point is - isn't there some other way to abstract the \nnetwork/IO layer stuff so that even recompiles aren't necessary?\n\nSo if there's a bug in the tunnel app it's not a Postgresql problem - only \nthe tunnel app needs to be fixed.\n\n> > Coz Postgresql already supports SSL right?\n>\n>Postgresql minimally supports SSL. It contains some significant\n>coding errors, poor initialization, and no support for client\n>certificates. My recent patches should go a long way towards\n>fixing that.\n\nCool. WRT the patch which requires strict matches on server hostnames - are \nwildcards allowed or is there an option for the client to ignore/loosen \nthings a bit?\n\nCheerio,\nLink.\n\n",
"msg_date": "Mon, 20 May 2002 20:05:42 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: SASL, compression?"
},
{
"msg_contents": "> I can see the benefit of SASL as a standard in public exposed network \n> services like email servers (SMTP, POP, IMAP), where you can support \n> different email clients which themselves may or may not support SASL and \n> may use different SASL libraries.\n> \n> But for Postgresql - communications is mainly between internal db clients \n> (which use the pgsql libraries) and postmaster.\n \nRemember that the current authentication code requires that each\ndatabase specify the method(s) used to access it. With SASL, it\nshould be possible to specify generic sensitivities (e.g., 'public,'\n'low,' 'high' and 'extreme') and let the systems negotiate any\nauthentication method that satisfies the properties indicated by\nthese sensitivities. Including local authentication methods that\nwe've never heard of.\n\n> Would the SASL code allow JDBC, Perl DBI+DBD postgresql clients support \n> SASL (and encryption) seamlessly? If it would then that's great. If it's \n> just psql then not so great.\n\nSome clients can allow the user to specify a mechanism, but others\nwill require the client to autonegotiate the authentication. Exactly\nhow we'll handle this is one of the open questions.\n\n> Because replacing current authentication code doesn't seem as obvious a \n> benefit to me. The plugin thing sounds useful tho - modular. But would the \n> simple text mapping for authorisation be as simple when UserX is only \n> supposed to have SELECT access to certain tables?\n\nThe authorization question HBA deals with is mapping Kerberos principals\nto pgusers. That level of authorization is handled by the database,\nnot postmaster.\n\n> Cool. WRT the patch which requires strict matches on server hostnames - are \n> wildcards allowed or is there an option for the client to ignore/loosen \n> things a bit?\n\nA lot of CAs won't sign certs with wildcards. 
They aren't\nnecessary since you can set up the nameserver to provide aliasing.\nIt's also possible to add an arbitrary number of subjectAltName extensions,\nso you could always explicitly name all systems if you wanted.\n\nAdding reverse DNS lookup and support for subjectAltName extensions\nis on my todo list, but was a lower priority than getting the big\npatch out for feedback.\n\nAs for loosening the cert verification checks, I think a better\nsolution is providing a tool that makes it easy to create good certs.\nIt's too easy to come up with man-in-the-middle attacks if it's easy\nto disable these checks.\n\nAs a compromise, I think it may be possible to run the server with\n*no* cert. This option would be used by sites that only want an\nencrypted channel, and sites that want authentication will make the\ncommitment to creating valid certs.\n\nBear\n",
"msg_date": "Mon, 20 May 2002 09:22:31 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "Re: SASL, compression?"
},
{
"msg_contents": "Tom Lane wrote:\n> Bear Giles <bgiles@coyotesong.com> writes:\n> > 1) add SASL. This is a new standards-track protocol that is often\n> > described as \"PAM\" for network authentication. PostgreSQL could\n> > remove *all* protocol-specific authentication code and use\n> > standard plug-in libraries instead.\n> \n> To me, \"new standards-track protocol\" translates as \"pie in the sky\".\n> When will there be tested, portable, BSD-license libraries that we\n> could *actually* use? I'm afraid this really would end up meaning\n> writing and/or supporting our own SASL code ... and I think there\n> are more important things for the project to be doing.\n> \n> IMHO we've got more than enough poorly-supported authentication options\n> already. Unless you can make a credible case that using SASL would\n> allow us to rip out PAM, Kerberos, MD5, etc *now* (not \"in a few releases\n> when everyone's switched to SASL\"), I think this will end up just being\n> another one :-(.\n> \n> (It doesn't help any that PAM support was sold to us just one release\n> cycle back on the same grounds that it'd be the last authentication\n> method we'd need to add. I'm more than a tad wary now...)\n\nI agree with Tom on this one. \"Plugin\" sounds so slick, but it really\ntranslates to \"abstraction\", and as if our authentication stuff isn't\nalready confusing enough for users to configure, we add another level of\nabstraction into the mix, and things become even more confusing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Jun 2002 01:24:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SASL, compression?"
}
] |
[
{
"msg_contents": "\n [My apologies if this turns up in the lists twice (now three times) but my\n mailer claims it's been in the queue for them too long. Not sure why it\n thinks that since it's only a few minutes since I sent it.]\n \n \n On Fri, 17 May 2002, Peter Eisentraut wrote:\n > Nigel J. Andrews writes:\n > \n > > I've attached a patch for libpgtcl which adds access to backend version\n > > numbers.\n > >\n > > This is via a new command:\n > >\n > > pg_version <db channel> <major varname> ?<minor varname>? ?<patch varname>?\n > \n > This doesn't truly reflect the way PostgreSQL version numbers are handled.\n > Say for 7.2.1, the \"major\" is really \"7.2\" and the minor is \"1\". With the\n > interface you proposed, the information major == 7 doesn't really convey\n > any useful information.\n \nAh, oops. I'll change it. I withdraw the patch submission I made yesterday\n(now two days back).\n \n > > I envisage this patch applied to 7.3 tip and to 7.2 for the 7.2.2\n > > release mentioned a couple of days ago. The only problem with doing this\n > > for 7.2 that I can see is where people doing the 'package -exact require\n > > Pgtcl 1.x' thing, and how many of those are there? Even PgAccess doesn't\n > > use that.\n > \n > Normally we only put bug fixes in minor releases. PgAccess may get an\n > exception, but bumping the version number of a library is stretching it a\n > little. If you're intending to use the function for PgAccess, why not\n > make it internal to PgAccess? That way you can tune the major/minor thing\n > exactly how you need it.\n \nIt did occur to me this morning that having it applied for 7.2.2 was perhaps\nsilly as it was introducing a new feature and not a bug fix.\n \nThis feature could be added to PgAccess but I felt it was general enough to be\nplaced in the interface library. I think someone else suggested such a place a\ncouple of weeks ago also. 
If there is a consensus that this should be done in\nthe application layer I'll happily drop this patch completely.\n \n \n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Sat, 18 May 2002 19:11:40 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] libpgtcl - backend version information patch"
},
{
"msg_contents": "\n\nThis is similar to the patch I submitted Thursday, and hopefully\nwithdrew in time after a response was made. I have repeated the description\nwith appropriate changes for ease of reference.\n\nI've attached a patch for libpgtcl which adds access to backend version\nnumbers.\n\nThis is via a new command:\n\npg_version <db channel> <major varname> ?<minor varname>?\n\nUsing readonly variables rather than a command was my first choice but I\ndecided that it was inappropriate for the library to start assigning global\nvariable(s) when that's really the application's job and the command interface\nis consistent with the rest of the interface.\n\nObviously, backend version numbers are specific to a particular connection. So\nI've created a new data structure, to keep the information as a distinct unit,\nand added an instance of the new structure to the Pg_ConnectionId type. The\nversion information is retrieved from the given connection on first use of\npg_version and cached in the new data structure for subsequent accesses.\n\nIn addition to filling the named variables in the caller's scope with version\nnumbers/strings the command returns the complete string as returned by\nversion(). It's not possible to turn this return off at the moment but I don't\nsee it as a problem since normal methods of stopping unwanted values returned\nfrom procedures can be applied in the application if required.\n\nPerhaps the most significant change is that I've increased the package's\nversion number from 1.3 to 1.4. This will adversely affect anyone using an\napplication that requires a specific version of the package where their\npostgres installation is updated but their application has not been. I can't\nimagine there are many applications out there using the package management\nfeatures of TCL though.\n\nThis isn't a bug fix and is therefore for 7.3 not 7.2.2\n\n\n-- \nNigel J. 
Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n\n\n",
"msg_date": "Sat, 18 May 2002 19:52:27 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "*new* libpgtcl - backend version information patch"
},
{
"msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> This feature could be added to PgAccess but I felt it was general\n> enough to be placed in the interface library. I think someone else\n> suggested such a place a couple of weeks ago also. If there is a\n> consensus that this should be done in the application layer I'll\n> happily drop this patch completely.\n\nI guess I don't quite see the point of doing this in libpgtcl,\nas opposed to doing a \"select version()\" at the application level.\nIt would take only a line or two of Tcl code to do that and parse the\nresult of version(), so why write many lines of C to accomplish the\nsame thing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 16:11:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] libpgtcl - backend version information patch "
},
{
"msg_contents": "On Sat, 18 May 2002, Tom Lane wrote:\n\n> \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > This feature could be added to PgAccess but I felt it was general\n> > enough to be placed in the interface library. I think someone else\n> > suggested such a place a couple of weeks ago also. If there is a\n> > consensus that this should be done in the application layer I'll\n> > happily drop this patch completely.\n> \n> I guess I don't quite see the point of doing this in libpgtcl,\n> as opposed to doing a \"select version()\" at the application level.\n> It would take only a line or two of Tcl code to do that and parse the\n> result of version(), so why write many lines of C to accomplish the\n> same thing?\n\nYes, you're right. It is only a couple of lines to do the exec, error checking\nand parsing.\n\nSomeone mentioned how it might be worth considering putting version testing\ninto the library. I thought it a reasonable idea, something that could\nreasonably be expected to be reused across applications, and as I'm not putting\nforward anything for pgaccess until it's decided what the heck is going on with\nit I thought I'd do the libpgtcl version of it.\n\nI see the pros as:\n\nversion information is accessible to all TCL applications without each having\nto worry about getting it,\nit comes ready to support multiple DB connections per application.\n\nThe cons:\nwell I don't see anything similar in the perl interface and it's not in libpq,\nso as the other interfaces are essentially wrappers for libpq it shouldn't be\nin libpgtcl either,\nthere's more C code than TCL code would take (still, I could change it to use a\nTcl_eval if it's lines of code that count)\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Sat, 18 May 2002 22:41:00 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] libpgtcl - backend version information"
},
{
"msg_contents": "Please note this posting was to HACKERS, INTERFACES, and PATCHES, but\nI'm only subscribed to INTERFACES so that's where this message is going.\n\nOn Sat, 18 May 2002, Nigel J. Andrews wrote:\n\n> On Sat, 18 May 2002, Tom Lane wrote:\n>\n> > \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > > This feature could be added to PgAccess but I felt it was general\n> > > enough to be placed in the interface library. I think someone else\n> > > suggested such a place a couple of weeks ago also. If there is a\n> > > consensus that this should be done in the application layer I'll\n> > > happily drop this patch completely.\n> >\n> > I guess I don't quite see the point of doing this in libpgtcl,\n> > as opposed to doing a \"select version()\" at the application level.\n> > It would take only a line or two of Tcl code to do that and parse the\n> > result of version(), so why write many lines of C to accomplish the\n> > same thing?\n>\n> Yes, you're right. It is only a couple of lines to do the exec, error checking\n> and parsing.\n>\n> Someone mentioned how it might be worth considering putting version testing\n> into the library. I thought it a reasonable idea, something that could\n> reasonably be expected to be reused across applications, and as I'm not putting\n> forward anything for pgaccess until it's decided what the heck is going on with\n> it I thought I'd do the libpgtcl version of it.\n\nYes it was I who concurred, for exactly the reasons Nigel offered.\nPutting that functionality into the TCL layer would help more than just\npgaccess. I program in PHP a lot and while it provides the basic\nfunctions of the C api there's also several more functions for returning\na variety of information on the connected database, at least with\npostgresql and mysql.\n\nAs far as pgaccess goes, I don't think we're looking at 7.2.2, as that\nseems to be mostly a bug fix release. 
Maybe we should shoot for 7.3?\n\n> I see the pros as:\n>\n> version information is accessible to all TCL applications without each having\n> to worry about getting it,\n> it comes ready to support multiple DB connections per application.\n>\n> The cons:\n> well I don't see anything similar in the perl interface and it's not in libpq\n> so as the other interfaces are essentially wrappers for libpq it shouldn't be\n> in libpgtcl either,\n> there's more C code than TCL code would take (still, I could change it to use a\n> Tcl_eval if it's lines of code that count)\n\nI guess not many people are excited about moving this functionality any\nlower. I just think with the changes that are coming up with the schema\nand all you are leaving too much up to the application programmers or\nthe end users. I could be wrong, I haven't really been following\npostgresql for too long.\n\n--Chris\n\n-- \n\ncmaj_at_freedomcorpse_dot_info\nfingerprint 5EB8 2035 F07B 3B09 5A31 7C09 196F 4126 C005 1F6A\n\n\n",
"msg_date": "Sat, 18 May 2002 20:37:11 -0400 (EDT)",
"msg_from": "\"C. Maj\" <cmaj@freedomcorpse.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpgtcl - backend version information"
}
] |
[
{
"msg_contents": "I came across another bug in the SSL code. backend/libpq/pqcomm.c:pq_eof()\ncalls recv() to read a single byte of data to check for EOF. The\ncharacter is then stuffed into the read buffer.\n\nThis will not work with SSL. Besides the data being encrypted, you\ncould end up reading a byte from an SSL control message instead of a\ndata message, or messing up counts. Fortunately this procedure only\nseems to be called in some password code - if you use 'trust' or 'ident'\nthen the SSL should work fine.\n\nThe quick fix is to add another USE_SSL block; a better fix is to\nexplicitly create a new abstraction layer.\n\nBear\n",
"msg_date": "Sat, 18 May 2002 12:38:29 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "pq_eof() broken with SSL"
},
{
"msg_contents": "Bear Giles writes:\n\n> I came across another bug in the SSL code. backend/libpq/pqcomm.c:pq_eof()\n> calls recv() to read a single byte of data to check for EOF. The\n> character is then stuffed into the read buffer.\n\n> The quick fix is to add another USE_SSL block,\n\nSo it seems. Do you volunteer to do that?\n\n> a better fix is to explicitly create a new abstraction layer.\n\nWell, this is supposed to be an abstraction already. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 19 May 2002 17:17:12 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pq_eof() broken with SSL"
},
{
"msg_contents": "> > a better fix is to explicitly create a new abstraction layer.\n> \n> Well, this is supposed to be an abstraction already. ;-)\n \nThe new abstraction layer would localize SSL vs. plain sockets, and\npossibly SASL as well.\n\nThe SSL issues I've identified to date are:\n\ncritical\n\n - no check for SSL_get_error() after reads and writes. (*)\n\n - code assumes zero bytes read or write indicates an error.\n This isn't necessarily the case with SSL because of control\n messages.\n\nsevere\n\n - pq_eof() fails on SSL. Since this is localized to the password\n handling code, I don't consider this error critical since the\n system can reliably work provided known problematic conditions\n are avoided.\n\n - both front- and back-end should call SSL_shutdown() immediately\n prior to closing connection. (1/2 *)\n\n - private keys should be regular files with mode 0600 or 0400. (*)\n they should be owned by the running process.\n\n - backend should use ephemeral DH keys.\n\n - encrypted private keys should be supported.\n\nimportant\n\n - client cert handling. (*)\n\n - makecert(?), a tool to generate PostgreSQL server certs.\n It is comparable in function to Apache mod-ssl script of\n the same name, and should be run when installing database\n if SSL is enabled.\n\n - pgkeygen(?), a tool to generate client certificates. It is\n comparable to sshkeygen for SSH.\n\n - client and server should migrate to TLS.\n\n - connections should expire after a period of inactivity.\n\n - clients should provide identification of remote system to\n user. (*)\n\n - clients should verify that the server cert identifies the\n server. 
(server \"common name\" should resolve to IP address\n of server.)\n\n - DSA keys should work.\n\nongoing\n\n - change protocol to use 'STARTTLS' type negotiation, instead\n of current approach.\n\n - SASL?\n\n - using client certs for authentication\n\nunknown\n\n - I'm not sure select() is guaranteed to work with SSL.\n\n(*) have had patches submitted, but may be superseded by subsequent\npatches.\n\n\nUnfortunately, I'm not sure that this list is complete - I'm still\ndoing research. The patches I already submitted are fairly straight-\nforward - OpenSSL contains sample clients and servers that demonstrate\ngood techniques. Right now I'm cross-checking the code with my\n_SSL and TLS_ book to make sure there aren't other holes, and that\ntakes time.\n\nI hadn't planned on doing any of this, but I got caught up in it while\nsetting up snort to log to PostgreSQL via an encrypted channel. As \nan aside, this is a good example of a case where an SSH tunnel is \ninadequate!\n\nSo to answer the question I clipped, I'm looking at it but I don't\nwant to do a half-assed solution. But as the scope of the solution\nexpands, it becomes more important to have consensus that something\nneeds to be done and this is the right solution. So right now I'm\nnot ready to make any commitments.\n\nBear\n",
"msg_date": "Sun, 19 May 2002 15:38:51 -0600 (MDT)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "Re: pq_eof() broken with SSL"
}
] |
[
{
"msg_contents": "For those who want to play on the bleeding edge of CVS, can someone provide the syntax for the recently-checked-in set-returning functions? I've got it figured out when I'm returning many rows of a single column, but not for many rows of several columns.\n\nIf someone can do this, and no one has put together docs on this feature, I'll volunteer to write this up.\n\nThanks!\n\n- J.\n\n-- \n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant \n\n\n",
"msg_date": "Sat, 18 May 2002 20:51:31 -0000",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": true,
"msg_subject": "Set-returning function syntax"
},
{
"msg_contents": "Joel Burton wrote:\n > For those who want to play on the bleeding edge of CVS, can someone\n > provide the syntax for the recently-checked-in set-returning\n > functions? I've got it figured out when I'm returning many rows of a\n > single column, but not for many rows of several columns.\n\nFor multiple columns, you need a composite data type defined -- \nbasically you need to create a table, even if it is an unused shell, \nwhich has the column names and data types of the returned tuple. See \nbelow for more.\n\n >\n > If someone can do this, and no one has put together docs on this\n > feature, I'll volunteer to write this up.\n\nI hadn't gotten to the docs yet, but if you wanted to write something up \nthat would be great! :) I'll certainly help too.\n\nAttached is the script I've been using to test as I go. It shows the \nusage of SRFs in a variety of situations (note that the C function tests \nrequire contrib/dblink installed). There's also a description in one of \nmy earlier posts. Here is a recap, edited to the latest reality:\n\nHow it currently works:\n-----------------------\n1. The SRF may be either marked as returning a set or not. A function \nnot marked as returning a set simply produces one row.\n\n2. The SRF may either return a base data type (e.g. TEXT) or a composite \ndata type (e.g. pg_class). If the function returns a base data type, the \nsingle result column is named for the function. If the function returns \na composite type, the result columns get the same names as the \nindividual attributes of the type.\n\n3. The SRF may be aliased in the FROM clause, but it may also be left \nunaliased. If a function is used in the FROM clause with no alias, the \nfunction name is used as the relation name.\n\nHope that's a start.\n\nThanks,\n\nJoe",
"msg_date": "Sat, 18 May 2002 15:16:03 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Set-returning function syntax"
},
{
"msg_contents": "Does your SRF function allow to return a setof composite data type\nusing C function? If so, how can I write such a C function? I\ncouldn't find any example or explanation so far. You referred to dblink,\nbut in my understanding it does not have any function that returns a\nsetof composite data type.\n--\nTatsuo Ishii\n\n> Attached is the script I've been using to test as I go. It shows the \n> usage of SRFs in a variety of situations (note that the C function tests \n> require contrib/dblink installed). There's also a description in one of \n> my earlier posts. Here is a recap, edited to the latest reality:\n> \n> How it currently works:\n> -----------------------\n> 1. The SRF may be either marked as returning a set or not. A function \n> not marked as returning a set simply produces one row.\n> \n> 2. The SRF may either return a base data type (e.g. TEXT) or a composite \n> data type (e.g. pg_class). If the function returns a base data type, the \n> single result column is named for the function. If the function returns \n> a composite type, the result columns get the same names as the \n> individual attributes of the type.\n> \n> 3. The SRF may be aliased in the FROM clause, but it may also be left \n> unaliased. If a function is used in the FROM clause with no alias, the \n> function name is used as the relation name.\n> \n> Hope that's a start.\n> \n> Thanks,\n> \n> Joe\n",
"msg_date": "Sun, 19 May 2002 09:03:11 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Set-returning function syntax"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Does your SRF function allow to return a setof composite data type\n> using C function? If so, how can I write such a C function?\n\nThe \"setof\" part is documented in src/backend/utils/fmgr/README.\nThere's no good documentation for returning tuples at the moment,\nbut basically you return a pointer to a TupleTableSlot. (Re-use\nthe same slot on each call to avoid memory leakage.) There's an\nexample in src/backend/executor/functions.c --- look at the uses\nof funcSlot.\n\nOne reason this isn't documented is that it's really ugly. It might\nbe a good idea to change it before we start having lots of user-written\ncode that depends on it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 May 2002 20:17:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Set-returning function syntax "
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> Does your SRF function allow to return a setof composite data type\n> using C function? If so, how can I write such that C function? I\n> couldn't find any example or explanation so far. You referred dblink,\n> but in my understanding it does not have any function that returns a\n> setof composite data type.\n>\n\nI haven't written a C function yet that returns a composite type. You \nare correct that dblink does not have an example which returns composite \ntype, because that wasn't even possible when I wrote the dblink code ;-)\n\nAt least initially, a C function returning a composite type will have to \ndo alot of dirty work -- i.e. something like:\n- manually form a tuple based on the return type relation attributes\n- save the tuple in a tuple table slot\n- return a pointer to the slot as a datum\n\nI don't know what other complications may be lurking, but I will try to \nget a working example sometime this coming week and post it to HACKERS.\n\nJoe\n\n\n\n",
"msg_date": "Sat, 18 May 2002 17:21:13 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Set-returning function syntax"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> Does your SRF function allow to return a setof composite data type\n> using C function? If so, how can I write such that C function? I\n\nJust to follow-up, here's a quick look at what works and what doesn't, \nat least using my test script.\n\nSELECT * FROM myfunc();\nLanguage \nRetSet \nRetType \nStatus\n--------------- ------- ------- ---------------------\nC \n\tt\tb\tOK\nC \n\tt\tc\tNot tested\nC \n\tf\tb\tOK\nC \n\tf\tc\tNot tested\nSQL \n\tt\tb\tOK\nSQL \n\tt\tc\tOK\nSQL \n\tf\tb\tOK\nSQL \n\tf\tc\tOK\nPL/pgSQL \nt \nb \nNo retset support\nPL/pgSQL \nt \nc \nNo retset support\nPL/pgSQL \nf \nb \nOK\nPL/pgSQL \nf \nc \nOK\n-----------------------------------------------------\nRetSet: t = function declared to return setof something\nRetType: b = base type; c = composite type\n\nSame cases work when a view is defined as SELECT * FROM myfunc().\n\nJoe\n\n",
"msg_date": "Sat, 18 May 2002 17:56:14 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Set-returning function syntax"
},
{
"msg_contents": "> The \"setof\" part is documented in src/backend/utils/fmgr/README.\n> There's no good documentation for returning tuples at the moment,\n> but basically you return a pointer to a TupleTableSlot. (Re-use\n> the same slot on each call to avoid memory leakage.) There's an\n> example in src/backend/executor/functions.c --- look at the uses\n> of funcSlot.\n\nThat was almost same as I guessed:-)\n\n> One reason this isn't documented is that it's really ugly. It might\n> be a good idea to change it before we start having lots of user-written\n> code that depends on it ...\n\nSounds like a good idea.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 19 May 2002 11:33:22 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Set-returning function syntax "
}
] |
[
{
"msg_contents": "I've been looking at the uses of scanCommandId with some suspicion.\nWe recently fixed a problem wherein cursors were seeing the effects\nof commands started after the cursor was opened (in the same\ntransaction), and I think there may be more such problems. I don't\nlike the way that SQL functions and cursors mess around with a global\nvariable that affects many other places.\n\nI am thinking that ScanCommandId ought to be part of the SnapshotData\nstruct, and not a global variable at all. In this vision, when\nExecutorStart() saves the current QuerySnapshot into estate->es_snapshot\nto define the MVCC worldview for the current query, it should also save\nCurrentCommandId into that same datastructure. This action would thus\nfreeze visibility of effects of both external transactions and commands\nof the current transaction for the execution of this query.\n\nHaving done this I believe we could remove the scanCommandId global\nvariable entirely, as well as the save/restore logic that exists for\nit in commands/portalcmds.c, executor/functions.c, executor/spi.c.\nThe places where scanCommandId is restored to a prior value would\nnot be needed, because the logic they are trying to protect would now\nbe looking at a previously saved snapshot instead.\n\nThere are three tqual.c routines that make use of scanCommandId:\nHeapTupleSatisfiesNow, HeapTupleSatisfiesUpdate, and\nHeapTupleSatisfiesSnapshot. With scanCommandId in SnapshotData,\nobviously HeapTupleSatisfiesSnapshot would have easy access to the\ncommand ID it should check against. In the case of\nHeapTupleSatisfiesNow, I believe CurrentCommandId should be used in all\ncases. It doesn't make any sense to me that HeapTupleSatisfiesNow will\nrecognize just-committed tuples from other transactions and not tuples\nfrom past commands of the current transaction. That is: if there is no\nsnapshotting effect for other transactions then there should be none for\nmy own commands either. 
Likewise for HeapTupleSatisfiesUpdate: the\nreference ought always to be CurrentCommandId, never anything older.\n\nComments? Have I missed something?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 May 2002 14:53:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "ScanCommandId should become part of snapshot"
}
] |