[ { "msg_contents": "Oopps was not a group reply, sorry all you reader of the list :-)\n\n\n\n----- Forwarded message from Jean-Paul ARGUDO <jean-paul.argudo@IDEALX.com> -----\n\nFrom: Jean-Paul ARGUDO <jean-paul.argudo@IDEALX.com>\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nSubject: Re: [HACKERS] RTREE Index on primary key generated by a sequence\n\n> > Since I was at first Oracle DBA, I've been told many times at\n> > professional trainings that when there is a table which primary key is\n> > generated by a sequence, it is worth create a RTREE index on it rather\n> > than a BTREE (for index balancing reasons).\n> \n> Huh?\n> \n> RTREEs are for two-or-more-dimensional data (the implementation in PG\n> only handles 2-D, IIRC). So they're not applicable to scalar data.\n> In any case, the claim that RTREEs are more readily balanced than BTREEs\n> seems totally unfounded to me.\n> \n> In PG, the btree implementation is by far the best-tested,\n> best-optimized index access method we have; for example, it's the only\n> one that has decent support for concurrent access. If you want to use\n> one of the other ones, I'd recommend you have a darn good reason.\n> \n> \t\t\tregards, tom lane\n\nThanks Tom for such good advice :)\n\nThus, I made a big mistake this is not RTREE that I am talking about but\nBTREE with reversed keys. It's the Oracle keyword REVERSE that perturbed\nme :-)\n\nSo, what about BTREEs with reversed keys? \n\nSorry again for the mistake. Thanks for the response.\n\n\n-- \nJean-Paul ARGUDO \t\tIDEALX S.A.S\nConsultant bases de données\t\t\t15-17, av. de Ségur\nhttp://IDEALX.com/ \t\t\t\tF-75007 PARIS\n\n----- End forwarded message -----\n-- \nJean-Paul ARGUDO \t\tIDEALX S.A.S\nConsultant bases de données\t\t\t15-17, av. 
de Ségur\nhttp://IDEALX.com/ \t\t\t\tF-75007 PARIS\n", "msg_date": "Fri, 25 Jan 2002 16:57:00 +0100", "msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@IDEALX.com>", "msg_from_op": true, "msg_subject": "Fwd: Re: RTREE Index on primary key generated by a sequence" } ]
[ { "msg_contents": "Don,\n\ndoes your approach handle directed graphs ( DAG ) ?\nActually our module is just a result of our research for new\ndata type which could handle DAGs ( yahoo, dmoz -like hierarchies)\neffectively in PostgreSQL.\nWhile we didn't find a solution we decided to release this module\nbecause 64 children would quite ok for many people.\nOf course, 128 would be better :-)\n\nHow about 'move' operation in your approach ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 25 Jan 2002 22:17:34 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "contrib/tree" }, { "msg_contents": "On Sat, 2002-01-26 at 00:17, Oleg Bartunov wrote:\n> Don,\n> \n> does your approach handle directed graphs ( DAG ) ?\n> Actually our module is just a result of our research for new\n> data type which could handle DAGs ( yahoo, dmoz -like hierarchies)\n> effectively in PostgreSQL.\n\nWhy not use intarray's instead of (n=6)bit-arrays?\n\nIs it just space savings ( 64(0) of anything is enough ;) ) or something\nmore fundamental ?\n\n> While we didn't find a solution we decided to release this module\n> because 64 children would quite ok for many people.\n> Of course, 128 would be better :-)\n\n4294967296 would be enough for almost everybody :)\n\n> How about 'move' operation in your approach ?\n\nI have not looked at his code long enough but it seems to still need\nreplacing all child nodes bitarray tails ...\n\n--------------\nHannu\n\n", "msg_date": "26 Jan 2002 00:38:50 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" }, { "msg_contents": "Hannu Krosing wrote:\n\n\n>>How about 'move' 
operation in your approach ?\n>>\n> \n> I have not looked at his code long enough but it seems to still need\n> replacing all child nodes bitarray tails ...\n\n\nYes, it does. But moving items around is a rare event in our environment.\n\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Fri, 25 Jan 2002 16:19:35 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" }, { "msg_contents": "Oleg Bartunov writes:\n\n> does your approach handle directed graphs ( DAG ) ?\n> Actually our module is just a result of our research for new\n> data type which could handle DAGs ( yahoo, dmoz -like hierarchies)\n> effectively in PostgreSQL.\n> While we didn't find a solution we decided to release this module\n> because 64 children would quite ok for many people.\n> Of course, 128 would be better :-)\n\nI was under the impression that the typical way to handle tree structures\nin relational databases was with recursive unions. It's probably\ninfinitely slower than stuffing everything into one datum, but it gets you\nall the flexibility that the DBMS has to offer.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 26 Jan 2002 14:07:36 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> Oleg Bartunov writes:\n> \n> \n>>does your approach handle directed graphs ( DAG ) ?\n>>Actually our module is just a result of our research for new\n>>data type which could handle DAGs ( yahoo, dmoz -like hierarchies)\n>>effectively in PostgreSQL.\n>>While we didn't find a solution we decided to release this module\n>>because 64 children would quite ok for many people.\n>>Of course, 128 would be better :-)\n>>\n> \n> I was under the impression that the typical way to handle tree structures\n> in relational databases was with recursive unions. 
It's probably\n> infinitely slower than stuffing everything into one datum, but it gets you\n> all the flexibility that the DBMS has to offer.\n\n\nAs I explained to Oleg privately (I think it was privately, at least) a \nkey-based approach doesn't work well for DAGs because in essence you \nneed a set of keys, one for each path that can reach the node. One of \nmy websites tracks bird sightings for people in the Pacific NW and our \ngeographical database is a DAG, not a tree (we have wildlife refuges \nthat overlap states, counties etc). In that system I use a \nparent-child table to track the relationships.\n\nMy impression is that there's no single \"best way\" to handle trees or \ngraphs in an RDBMS that doesn't provide internal support (such as Oracle \nwith its \"CONNECT BY\" extension).\n\nThe method we use in OpenACS works very well for us. Insertion and \nselection are fast, and these are the common operations in *our* \nenvironment. YMMV, of course. Key-based approaches are fairly well \nknown, at least none of us claim to have invented anything here. The \nonly novelty, if you will, is our taking advantage of the fact that PG's \nimplementation of BIT VARYING just happens to work really well as a \ndatatype for storing keys. Full indexing support, substring, position, \netc ... very slick.\n\nSomeone asked about using an integer array to store the hierarchical \ninformation. I looked at that a few months back but it would require \nproviding custom operators, so rejected it in favor of the approach \nwe're now using. It is important to us that users be able to fire up \nOpenACS 4 on a vanilla PG, such as the one installed by their Linux or \nBSD distribution. That rules out special operators that require contrib \ncode or the like.\n\nWe do use Oleg's full-text search stuff, but searching's optional and \ncan be added in after the user's more comfortable with our toolkit, \nPostgreSQL, etc. 
A lot of our users are new to Postgres, or at least \nhave a lot more Oracle experience than PG experience.\n\nBut the integer array approach might well work for folks who don't mind \nthe need to build in special operators.\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Sat, 26 Jan 2002 11:25:13 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" }, { "msg_contents": "On Sun, 2002-01-27 at 00:25, Don Baccus wrote:\n> Peter Eisentraut wrote:\n> > I was under the impression that the typical way to handle tree structures\n> > in relational databases was with recursive unions. It's probably\n> > infinitely slower than stuffing everything into one datum, but it gets you\n> > all the flexibility that the DBMS has to offer.\n\nI see no reason why WITH RECURSIVE should be inherently slower than other \napproaches except that checks for infinite recursion could be pricey. \n\nOther than that getting rows by index should be more or less equal in both \ncases.\n\n\n> As I explained to Oleg privately (I think it was privately, at least) a \n> key-based approach doesn't work well for DAGs because in essence you \n> need a set of keys, one for each path that can reach the node. One of \n> my websites tracks bird sightings for people in the Pacific NW and our \n> geographical database is a DAG, not a tree (we have wildlife refuges \n> that overlap states, counties etc). 
In that system I use a \n> parent-child table to track the relationships.\n> \n> My impression is that there's no single \"best way\" to handle trees or \n> graphs in an RDBMS that doesn't provide internal support (such as Oracle \n> with its \"CONNECT BY\" extension).\n\nThe full SQL3 syntax for it is much more powerful and complex (see\nattachment).\n\nI think that this is what should eventually go into postgresql.\n\n> Someone asked about using an integer array to store the hierarchical \n> information. I looked at that a few months back but it would require \n> providing custom operators, so rejected it in favor of the approach \n> we're now using. It is important to us that users be able to fire up \n> OpenACS 4 on a vanilla PG, such as the one installed by their Linux or \n> BSD distribution. That rules out special operators that require contrib \n> code or the like.\n> \n> We do use Oleg's full-text search stuff, but searching's optional and \n> can be added in after the user's more comfortable with our toolkit, \n> PostgreSQL, etc. 
A lot of our users are new to Postgres, or at least \n> have a lot more Oracle experience than PG experience.\n> \n> But the integer array approach might well work for folks who don't mind \n> the need to build in special operators.\n\nI'll try if I can build the operators in PL/PGSL so one would not\n\"really\" need to build special operators ;)\n\nTell me if this is something impossible.\n\n------------------\nHannu", "msg_date": "27 Jan 2002 01:29:56 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" }, { "msg_contents": "dhogaza@pacifier.com (Don Baccus) writes:\n> Peter Eisentraut wrote:\n> > I was under the impression that the typical way to handle tree\n> > structures in relational databases was with recursive unions.\n> > It's probably infinitely slower than stuffing everything into one\n> > datum, but it gets you all the flexibility that the DBMS has to\n> > offer.\n\n> As I explained to Oleg privately (I think it was privately, at\n> least) a key-based approach doesn't work well for DAGs because in\n> essence you need a set of keys, one for each path that can reach the\n> node. One of my websites tracks bird sightings for people in the\n> Pacific NW and our geographical database is a DAG, not a tree (we\n> have wildlife refuges that overlap states, counties etc). In that\n> system I use a parent-child table to track the relationships.\n\n... Where parent/child has the unfortunate demerit that walking the\ntree requires (more-or-less; it could get _marginally_ hidden in a\nstored procedure) a DB query for each node that gets explored.\n\n> My impression is that there's no single \"best way\" to handle trees\n> or graphs in an RDBMS that doesn't provide internal support (such as\n> Oracle with its \"CONNECT BY\" extension).\n\n> The method we use in OpenACS works very well for us. Insertion and\n> selection are fast, and these are the common operations in *our*\n> environment. YMMV, of course. 
Key-based approaches are fairly well\n> known, at least none of us claim to have invented anything here.\n> The only novelty, if you will, is our taking advantage of the fact\n> that PG's implementation of BIT VARYING just happens to work really\n> well as a datatype for storing keys. Full indexing support,\n> substring, position, etc ... very slick.\n\nHave you a URL for this? (A link to a relevant source code file would\nbe acceptable...)\n\n> Someone asked about using an integer array to store the hierarchical\n> information. I looked at that a few months back but it would\n> require providing custom operators, so rejected it in favor of the\n> approach we're now using. It is important to us that users be able\n> to fire up OpenACS 4 on a vanilla PG, such as the one installed by\n> their Linux or BSD distribution. That rules out special operators\n> that require contrib code or the like.\n\nAre you referring to the \"nested tree\" model (particularly promoted by\nJoe Celko; I don't know of a seminal source behind him)? It\nunfortunately doesn't work with graphs...\n-- \n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://www3.sympatico.ca/cbbrowne/nonrdbms.html\n\"Did you ever walk in a room and forget why you walked in? I think\nthat's how dogs spend their lives.\" -- Sue Murphy\n", "msg_date": "26 Jan 2002 15:46:39 -0500", "msg_from": "Christopher Browne <cbbrowne@acm.org>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" }, { "msg_contents": "Hannu Krosing wrote:\n\n> On Sun, 2002-01-27 at 00:25, Don Baccus wrote:\n> \n>>Peter Eisentraut wrote:\n>>\n>>>I was under the impression that the typical way to handle tree structures\n>>>in relational databases was with recursive unions. 
It's probably\n>>>infinitely slower than stuffing everything into one datum, but it gets you\n>>>all the flexibility that the DBMS has to offer.\n>>>\n> \n> I see no reason why WITH RECURSIVE should be inherently slower than other \n> approaches except that checks for infinite recursion could be pricey. \n> \n> Other than that getting rows by index should be more or less equal in both \n> cases.\n\n\nWe use Oracle's \"CONNECT BY\", not a key-oriented approach, in the Oracle \nversion of the toolkit. There are some awkwardnesses involved in their \nimplementation. You can't join with the table you're \"connecting\". If \nyou do it in a subselect then join against the result, you get the right \nrows (of course) but Oracle's free to join in any order. So you can't \nget a \"tree walk\" output order if you need to join against your tree, \nmeaning you have to fall back on a sort key anyway (a simpler one, though).\n\nI haven't looked at \"WITH RECURSIVE\" to see if it's defined to be more \nuseful than Oracle's \"CONNECT BY\" since I don't use any RDBMS that \nimplements it.\n\n\n>>My impression is that there's no single \"best way\" to handle trees or \n>>graphs in an RDBMS that doesn't provide internal support (such as Oracle \n>>with its \"CONNECT BY\" extension).\n>>\n> \n> The full SQL3 syntax for it is much more powerful and complex (see\n> attachment).\n> \n> I think that this is what should eventually go into postgresql.\n\n\nYes, indeed.\n\n\n> I'll try if I can build the operators in PL/PGSL so one would not\n> \"really\" need to build special operators ;)\n> \n> Tell me if this is something impossible.\n\n\nThere's the speed issue, of course ... and the extra space, which for \ndeep trees could be significant.\n\nOur solution suits our needs very well, and we're happy with it. We do \nget 2 billion plus immediate children per node and a one-byte per level \nkey for trees that aren't big and flat. 
The intarray approach is just a \ndifferent storage technique for the same method, I don't see that moving \nnodes is any easier (you have to touch the same number of nodes if you \nmove a subtree around). It takes more storage and the necessary \ncomparisons (even if written in C) will be slower unless the tree's big \nand flat (because you're using four bytes for every level, while our BIT \nVARYING scheme, in practice, uses one byte for each level in a very \nlarge majority of cases).\n\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Sat, 26 Jan 2002 17:06:13 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" }, { "msg_contents": "On Sun, 2002-01-27 at 06:06, Don Baccus wrote:\n> Hannu Krosing wrote:\n> \n> \n> > I'll try if I can build the operators in PL/PGSL so one would not\n> > \"really\" need to build special operators ;)\n\nOk, I've done most of it (the comparison functions and operators), but\nnow I'm stuck with inability to find any arrayconstructing functionality\nin postgres - the only way seems to be the text-to-type functions .\n\nAlso arrays seem to be read only -- a[i] := 1 is a syntax error.\n\nAnd get/set slice operators are defined static in source ;(\n\n> > Tell me if this is something impossible.\n> \n> \n> There's the speed issue, of course ... and the extra space, which for \n> deep trees could be significant.\n> \n> Our solution suits our needs very well, and we're happy with it. We do \n> get 2 billion plus immediate children per node and a one-byte per level \n> key for trees that aren't big and flat. The intarray approach is just a \n> different storage technique for the same method, I don't see that moving \n> nodes is any easier (you have to touch the same number of nodes if you \n> move a subtree around). 
\n\nIs there a simple query for getting all ancestors of a node ?\n\nThe intarray approach has all them already encoded .\n\n> It takes more storage and the necessary \n> comparisons (even if written in C) will be slower unless the tree's big \n> and flat (because you're using four bytes for every level, while our BIT \n> VARYING scheme, in practice, uses one byte for each level in a very \n> large majority of cases).\n\nI'm inclining more and more towards using your approach. I just even\nfigured out that I don't rreally need to get the ancestors for my needs.\n\n-------------\nHannu\n\n", "msg_date": "27 Jan 2002 16:31:34 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" }, { "msg_contents": "Hannu Krosing wrote:\n\n\n> Is there a simple query for getting all ancestors of a node ?\n\n\nYes, a recursive SQL function that returns a rowset of ancestor keys. \nIt works off the key directly, doesn't need to touch any tables, so is \nquite fast.\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Sun, 27 Jan 2002 07:16:40 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: contrib/tree" } ]
[ { "msg_contents": "\nIs it safe to drop and recreate an index used by a sequence? Over\nthree databases I have these key indexes taking up about a gig of\ndisk space and I need to free it up (since the partition is getting\nrather full).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 25 Jan 2002 15:24:10 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "sequence indexes" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Is it safe to drop and recreate an index used by a sequence?\n\nUh, sequences haven't got indexes. You mean an index on a \"serial\"\ncolumn, no? Sure, there's no magic there. Don't forget it's a\nunique index, though, if you want to have the same error checking\nas before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Jan 2002 15:39:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequence indexes " }, { "msg_contents": "On Fri, 25 Jan 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > Is it safe to drop and recreate an index used by a sequence?\n>\n> Uh, sequences haven't got indexes. You mean an index on a \"serial\"\n> column, no? Sure, there's no magic there. 
Don't forget it's a\n> unique index, though, if you want to have the same error checking\n> as before.\n\nIt's a serial column.\n\n | vev | newclaim_newclaimid_key | index |\n | vev | newclaim_newclaimid_seq | sequence |\n\n 577527808 Jan 25 14:50 newclaim_newclaimid_key\n\nselect count(*) from newclaim;\ncount\n-----\n53747\n(1 row)\n\n\nselect max(newclaimid) from newclaim;\n max\n-------\n9907663\n(1 row)\n\nA bit much diskspace for that, isn't it? The data turns over alot.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 25 Jan 2002 15:49:33 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: sequence indexes " }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> A bit much diskspace for that, isn't it? The data turns over alot.\n\nYeah, this is one of the scenarios where we desperately need index\ncompaction. The index pages holding the lower serial numbers are\nno doubt empty or nearly so, but there's no mechanism for reclaiming\nthat space short of rebuilding the index. (BTW you might consider\nREINDEX instead of a manual drop/recreate.)\n\nI've looked at the problem a little bit --- there's literature more\nrecent than Lehmann-Yao that talks about how to do btree compaction\nwithout losing concurrency. 
But it didn't get done for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Jan 2002 15:56:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequence indexes " }, { "msg_contents": "On Fri, 25 Jan 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > A bit much diskspace for that, isn't it? The data turns over alot.\n>\n> Yeah, this is one of the scenarios where we desperately need index\n> compaction. The index pages holding the lower serial numbers are\n> no doubt empty or nearly so, but there's no mechanism for reclaiming\n> that space short of rebuilding the index. (BTW you might consider\n> REINDEX instead of a manual drop/recreate.)\n\nI'm guessing reindex wasn't in 6.5.3. :(\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 25 Jan 2002 16:00:03 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: sequence indexes " }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> I'm guessing reindex wasn't in 6.5.3. :(\n\nVince, surely you know better than to still be running 6.5.3 :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Jan 2002 16:01:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequence indexes " }, { "msg_contents": "On Fri, 25 Jan 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > I'm guessing reindex wasn't in 6.5.3. 
:(\n>\n> Vince, surely you know better than to still be running 6.5.3 :-(\n\nActually until just a few minutes ago I thought it was 7.1.3, guess\nthis thing will be getting upgraded next month.\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 25 Jan 2002 16:02:53 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: sequence indexes " } ]
[ { "msg_contents": "\n> Now, with MVCC, the backend has to read through the redo segment to get\n\nYou mean rollback segment, but ...\n\n> the original data value for that row.\n\nWill only need to be looked up if the row is currently beeing modified by \na not yet comitted txn (at least in the default read committed mode) \n\n> \n> Now, while rollback segments do help with cleaning out old UPDATE rows,\n> how does it improve DELETE performance? Seems it would just mark it as\n> expired like we do now.\n\ndelete would probably be: \n1. mark original deleted and write whole row to RS\n\nI don't think you would like to mix looking up deleted rows in heap\nbut updated rows in RS\n\nAndreas\n\nPS: not that I like overwrite with MVCC now\nIf you think of VACUUM as garbage collection PG is highly trendy with\nthe non-overwriting smgr.\n", "msg_date": "Fri, 25 Jan 2002 22:03:53 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Savepoints" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > Now, with MVCC, the backend has to read through the redo segment to get\n> \n> You mean rollback segment, but ...\n\n\nSorry, yes. I get redo/undo/rollback mixed up sometimes. :-)\n\n> > the original data value for that row.\n> \n> Will only need to be looked up if the row is currently beeing modified by \n> a not yet comitted txn (at least in the default read committed mode) \n\nUh, not really. The transaction may have completed after my transaction\nstarted, meaning even though it looks like it is committed, to me, it is\nnot visible. Most MVCC visibility will require undo lookup.\n\n> \n> > \n> > Now, while rollback segments do help with cleaning out old UPDATE rows,\n> > how does it improve DELETE performance? Seems it would just mark it as\n> > expired like we do now.\n> \n> delete would probably be: \n> 1. 
mark original deleted and write whole row to RS\n> \n> I don't think you would like to mix looking up deleted rows in heap\n> but updated rows in RS\n\nYes, so really the overwriting is only a big win for UPDATE. Right now,\nUPDATE is DELETE/INSERT, and that DELETE makes MVCC happy. :-)\n\nMy whole goal was to simplify this so we can see the differences.\n\n\n> PS: not that I like overwrite with MVCC now\n> If you think of VACUUM as garbage collection PG is highly trendy with\n> the non-overwriting smgr.\n\nYes, that is basically what it is now, a garbage collector that collects\nin heap rather than in undo.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 25 Jan 2002 18:57:50 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Savepoints" } ]
[ { "msg_contents": "\n> > I am starting to see the advantages and like it. I also like the exact \n> > name \"public\" for the public schema.\n> \n> I wonder if we should think about a 'group' area so people in a group\n> can create things that others in the group can see, but not people\n> outside the group.\n\nA group simply chooses a special schema name for their group.\n\nMaybe an extra in the ACL area so you can grant privs for a whole \nschema.\n\ngrant select on schema blabla to \"JoeLuser\"\n\nAndreas\n", "msg_date": "Fri, 25 Jan 2002 22:36:37 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: RFD: schemas and different kinds of Postgres objects" } ]
[ { "msg_contents": "\n> > > How about: use overwriting smgr + put old records into rollback\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > segments - RS - (you have to keep them somewhere till TX's running\n> > > anyway) + use WAL only as REDO log (RS will be used to \n> rollback TX'\n> > > changes and WAL will be used for RS/data files recovery).\n> > > Something like what Oracle does.\n> > \n> > We have all the info we need in WAL and in the old rows,\n> > why would you want to write them to RS ?\n> > You only need RS for overwriting smgr.\n> \n> This is what I'm saying - implement Overwriting smgr...\n\nYes I am sorry, I am catching up on email and had not read Bruce's \ncomment (nor yours correctly) :-(\n\nI was also long in the pro overwriting camp, because I am used to \nnon MVCC dbs like DB/2 and Informix. (which I like very much) \nBut I am starting to doubt that overwriting is really so good for\nan MVCC db. And I don't think PG wants to switch to non MVCC :-)\n\nImho it would only need a much more aggressive VACUUM backend.\n(aka garbage collector :-) Maybe It could be designed to sniff the \nredo log (buffer) to get a hint at what to actually clean out next.\n\nAndreas\n", "msg_date": "Fri, 25 Jan 2002 22:51:24 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Savepoints" } ]
[ { "msg_contents": "\n> I've looked at the problem a little bit --- there's literature more\n> recent than Lehmann-Yao that talks about how to do btree compaction\n> without losing concurrency. But it didn't get done for 7.2.\n\nYes, there must be. Informix handles this case perfectly.\n(It uses a background btree cleaner)\n\nAndreas\n", "msg_date": "Fri, 25 Jan 2002 22:55:58 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: sequence indexes " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> I've looked at the problem a little bit --- there's literature more\n>> recent than Lehmann-Yao that talks about how to do btree compaction\n>> without losing concurrency. But it didn't get done for 7.2.\n\n> Yes, there must be. Informix handles this case perfectly.\n> (It uses a background btree cleaner)\n\nRight, I had hoped to fold it into lazy VACUUM, but ran out of time.\n(Of course, had I known in August that we'd still not have released\n7.2 by now, I might have kept after it :-()\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Jan 2002 17:05:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequence indexes " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> >> I've looked at the problem a little bit --- there's literature more\n> >> recent than Lehmann-Yao that talks about how to do btree compaction\n> >> without losing concurrency. But it didn't get done for 7.2.\n> \n> > Yes, there must be. 
Informix handles this case perfectly.\n> > (It uses a background btree cleaner)\n\nAs an idle thought, I wonder what other maintenance tasks we could have\na process in the background automatically doing when system activity is\nlow ?\n\nMaintenance\n***********\n- Index compaction\n- Vacuum of various flavours\n\nTuning\n******\n- cpu_tuple costings (and similar) recalculation(s)\n\n\nCan't think of anything else off the top of my head though.\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> Right, I had hoped to fold it into lazy VACUUM, but ran out of time.\n> (Of course, had I known in August that we'd still not have released\n> 7.2 by now, I might have kept after it :-()\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 26 Jan 2002 19:08:22 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: sequence indexes" }, { "msg_contents": "Justin Clift wrote:\n> \n> Tom Lane wrote:\n> >\n> > \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > >> I've looked at the problem a little bit --- there's literature more\n> > >> recent than Lehmann-Yao that talks about how to do btree compaction\n> > >> without losing concurrency. But it didn't get done for 7.2.\n> >\n> > > Yes, there must be.
Informix handles this case perfectly.\n> > > (It uses a background btree cleaner)\n> \n> As an idle thought, I wonder what other maintenance tasks we could have\n> a process in the background automatically doing when system activity is\n> low ?\n> \n> Maintenance\n> ***********\n> - Index compaction\n> - Vacuum of various flavours\n\n\nI had a couple thoughts about index compaction and vacuum in the\nbackground:\n\nCould one run a postgresql process in a lower priority process and\nperform lazy vacuums without affecting performance all that much?\n\nA live index compaction can be done by indexing the table with a\ntemporary name rename the old index, rename the new index to the old\nname, and drop the old index.\n", "msg_date": "Mon, 28 Jan 2002 15:30:31 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: sequence indexes" }, { "msg_contents": "mlw wrote:\n> \n> \n> Could one run a postgresql process in a lower priority process and\n> perform lazy vacuums without affecting performance all that much?\n\nOne must be very careful not to introduce reverse priority problems -\ni.e.
a \nlower priority process locking some resource and then not letting go\nwhile \nhigher priority processes are blocked from running due to needing that\nlock.\n\nIn my tests 1 vacuum process slowed down 100 concurrent pgbench\nprocesses \nby ~2 times.\n\n> A live index compaction can be done by indexing the table with a\n> temporary name rename the old index, rename the new index to the old\n> name, and drop the old index.\n\nIsn't this what REINDEX command does ?\n\n---------------\nHannu\n", "msg_date": "Tue, 29 Jan 2002 09:34:04 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: sequence indexes" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> mlw wrote:\n> >\n> >\n> > Could one run a postgresql process in a lower priority process and\n> > perform lazy vacuums without affecting performance all that much?\n> \n> One must be very careful not to introduce reverse priority problems -\n> i.e. a\n> lower priority process locking some resource and then not letting go\n> while\n> higher priority processes are blocked from running due to needing that\n> lock.\nI understand that, hmm.
I wonder if the lock code could boost the priority of a\nprocess which owns a lock.\n\n> \n> In my tests 1 vacuum process slowed down 100 concurrent pgbench\n> processes\n> by ~2 times.\n\nIs that good or bad?\n\n> \n> > A live index compaction can be done by indexing the table with a\n> > temporary name rename the old index, rename the new index to the old\n> > name, and drop the old index.\n> \n> Isn't this what REINDEX command does ?\n\nREINDEX can't be run on a live system, can it?\n\n\n> \n> ---------------\n> Hannu\n", "msg_date": "Tue, 29 Jan 2002 07:43:52 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: sequence indexes" }, { "msg_contents": "mlw wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > mlw wrote:\n> > >\n> > >\n> > > Could one run a postgresql process in a lower priority process and\n> > > perform lazy vacuums without affecting performance all that much?\n> >\n> > One must be very careful not to introduce reverse priority problems -\n> > i.e. a\n> > lower priority process locking some resource and then not letting go\n> > while\n> > higher priority processes are blocked from running due to needing that\n> > lock.\n> I understand that, hmm. I wonder if the lock code could boost the priority of a\n> process which owns a lock.\n> \n> >\n> > In my tests 1 vacuum process slowed down 100 concurrent pgbench\n> > processes\n> > by ~2 times.\n> \n> Is that good or bad?\n\nI had hoped it to take somewhat proportional time, i.e.
slow other\nbackends \ndown by 1/100.\n\n> > > A live index compaction can be done by indexing the table with a\n> > > temporary name rename the old index, rename the new index to the old\n> > > name, and drop the old index.\n> >\n> > Isn't this what REINDEX command does ?\n> \n> REINDEX can't be run on a live system, can it?\n\nIt will probably lock something, but otherways I don't say why it can't.\n\nYou may have to add FORCE to the end of command thus:\n\nreindex table tablename force;\n\n-------------\nHannu\n", "msg_date": "Tue, 29 Jan 2002 15:17:19 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: sequence indexes" }, { "msg_contents": "On Tue, Jan 29, 2002 at 07:43:52AM -0500, mlw wrote:\n> Hannu Krosing wrote:\n> > \n> > mlw wrote:\n> > \n> > One must be very careful not to introduce reverse priority problems -\n> > i.e. a\n> > lower priority process locking some resource and then not letting go\n> > while\n> > higher priority processes are blocked from running due to needing that\n> > lock.\n> I understand that, hmm. I wonder if the lock code could boost the priority of a\n> process which owns a lock.\n>\n\nThe classic approach to solving priority inversion is to allow for\npriority inheritance: that is, the low-priority process stays low\npriority, even when it locks a resource, until there is contention for\nthat resource from a higher priority process: then it inherits the higher\npriority of the waiting process.\n\nRoss\n", "msg_date": "Tue, 29 Jan 2002 10:28:04 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: sequence indexes" } ]
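The maintenance commands this thread keeps circling around (lazy VACUUM as a background cleaner, REINDEX to shrink a bloated btree) can be sketched as a short psql session. This is only an illustration: "bigtable" is a made-up table name, and the FORCE keyword follows Hannu's 7.2-era usage shown above.

```sql
-- Lazy vacuum: reclaims dead row versions without taking an exclusive
-- lock, so it can run alongside normal traffic (at some cost, as the
-- pgbench numbers in the thread show).
VACUUM ANALYZE bigtable;

-- Rebuild the table's indexes to compact accumulated free space.
-- On a 7.2-era server, FORCE may be needed when the indexes are
-- still marked valid, as Hannu notes above.
REINDEX TABLE bigtable FORCE;
```

Whether REINDEX can safely run on a live system is exactly the question the later "Improving backend launch time" thread picks up.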
[ { "msg_contents": "Hi Folks\n\n I am using Solaris-2.8 with all the gnu tools. I am trying\nto compile php with PostgreSQL and when I run apache, I\nget this error, I have LD_LIBRARY_PATH set to include\n/usr/local/pgsql/lib:/usr/local/lib and export it.\n\n Everything works fine, when I do not include \"postgreSQL\".\nWhen I compiled and tested PostgreSQL all the tests passed.\n\n> Syntax error on line 222 of\n/export/home/rajkumar/apache/conf/httpd.conf:\n> Cannot load /export/home/rajkumar/apache/libexec/libphp4.so into\nserver: ld.so.1: > /export/home/rajkumar/apache/bin/httpd: fatal:\nrelocation error: file /usr/local/lib/libpq.so.2: symbol main:\n>referenced symbol not found\n\nI would greatly appreciate any help\nJoseph Rajkumar\n\n\n", "msg_date": "Sat, 26 Jan 2002 04:56:02 -0500", "msg_from": "Joseph Rajkumar <rajkumar@telocity.com>", "msg_from_op": true, "msg_subject": "libpq - main symbol unresolved." }, { "msg_contents": "Joseph Rajkumar writes:\n\n> > Syntax error on line 222 of\n> /export/home/rajkumar/apache/conf/httpd.conf:\n> > Cannot load /export/home/rajkumar/apache/libexec/libphp4.so into\n> server: ld.so.1: > /export/home/rajkumar/apache/bin/httpd: fatal:\n> relocation error: file /usr/local/lib/libpq.so.2: symbol main:\n> >referenced symbol not found\n\nThis looks more like a problem in the libphp link process and/or the httpd\ndynamic loading process.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 1 Feb 2002 16:01:14 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: libpq - main symbol unresolved." } ]
[ { "msg_contents": "For anyone interested, I have posted a new version of multi-threaded\nPostgres 7.0.2 here:\n\nhttp://sourceforge.net/projects/mtpgsql\n\n\nMyron scott\nmkscott@sacadia.com\n\n", "msg_date": "Sat, 26 Jan 2002 13:19:10 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": true, "msg_subject": "Multi-threaded PostgreSQL" } ]
[ { "msg_contents": "Just a note: the current doc/FAQ file mentions that the latest version of\npostgres available is 7.1.3. This should be changed to 7.2 for the\nrelease.\n\nChris\n\n\n", "msg_date": "Mon, 28 Jan 2002 15:57:39 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "FAQ errata" }, { "msg_contents": "On Mon, 28 Jan 2002, Christopher Kings-Lynne wrote:\n\n> Just a note: the current doc/FAQ file mentions that the latest version of\n> postgres available is 7.1.3. This should be changed to 7.2 for the\n> release.\n\nRelease hasn't happened yet.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 28 Jan 2002 05:36:17 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: FAQ errata" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Just a note: the current doc/FAQ file mentions that the latest version of\n> postgres available is 7.1.3. This should be changed to 7.2 for the\n> release.\n\nIf we ever get there. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 28 Jan 2002 07:03:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FAQ errata" } ]
[ { "msg_contents": "Hello all,\n I am working on that SHOW locks todo item, that lists all\n current locks. This seems a bit tricky as there is no central\n place where one could reference a lock, appears they are just\n called.\n Now my question is that, would it be \"ok\" to have a locks\n linked list that held all the locks, and their information\n in the function calls that make the lock? Or is there some\n other method that would be more suitable for this. Seeking\n the advice of experience with this sorta thing (if there is\n anyone, heh).\n If there was this one master linked list, holding all the\n lock information, then making a SHOW locks like command would\n be a snap!\n Also do you think this sort of action would slow the database\n down too much to even warrent doing it this way? Of course i\n personally havent tested yet, so have no figures, but you\n guys know more about database programming and postgresql.\n\nThanks,\nChris\n\n", "msg_date": "Mon, 28 Jan 2002 08:46:35 +0000", "msg_from": "Chris Humphries <chumphries@devis.com>", "msg_from_op": true, "msg_subject": "Locks, more complicated than I orginally thought" }, { "msg_contents": "Chris Humphries <chumphries@devis.com> writes:\n> Now my question is that, would it be \"ok\" to have a locks\n> linked list that held all the locks, and their information\n> in the function calls that make the lock?\n\nHuh? The lock manager keeps lists that show all the locks held by a\ngiven process. These data structures are even rather better documented\nthan is usual for Postgres: src/backend/storage/lmgr/README.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Jan 2002 11:57:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Locks, more complicated than I orginally thought " } ]
[ { "msg_contents": "Hello.\n\nThere is small table:\n----------\ncreate table some_table (\nid int UNIQUE,\nvalue int\n);\nINSERT INTO some_table values(1,0);\n....\nINSERT INTO some_table values(50,0);\n-------------\n\nWhen I do UPDATE some_table set value=... where id=...,\nquery execution time raises in arithmetic progression!\nAfter about 50 updates on every row query consumes ~3 sec against 0.3 \nsec as it was at the beginning.\npsql takes ~80% of CPU time (acording to top).\nVACUUM helps to restore execution speed, but i think it is not the way out.\n\nIs it BUG or FEATURE?\n\nPostgres: 7.1.3;\nSystem: Debian woody (kernel 2.4.17) on K6/450 with 128Mb RAM.\n\n", "msg_date": "Mon, 28 Jan 2002 12:12:08 +0300", "msg_from": "Vladimir Zamiussky <zami@chat.ru>", "msg_from_op": true, "msg_subject": "Execution time of UPDATE raises dramatically!" }, { "msg_contents": "Vladimir Zamiussky writes:\n\n> When I do UPDATE some_table set value=... where id=...,\n> query execution time raises in arithmetic progression!\n> After about 50 updates on every row query consumes ~3 sec against 0.3\n> sec as it was at the beginning.\n> psql takes ~80% of CPU time (acording to top).\n> VACUUM helps to restore execution speed, but i think it is not the way out.\n>\n> Is it BUG or FEATURE?\n\nIt's just a fact of how the system works.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 1 Feb 2002 15:58:47 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Execution time of UPDATE raises dramatically!" }, { "msg_contents": "> Help\n\nYou are welcome.\n\n> create table some_table (\n> id int UNIQUE,\n> value int\n> );\n> INSERT INTO some_table values(1,0);\n> INSERT INTO some_table values(50,0);\n\nI would prefer :\n\nCREATE TRABLE table_foo (\n foo_oid serial,\n foo_value int\n);\n\nfoo_oid will become a primary key, thus it is being indexed. 
Which is not the \ncase of your example.\n\n> When I do UPDATE some_table set value=... where id=...,\n> query execution time raises in arithmetic progression!\n> After about 50 updates on every row query consumes ~3 sec against 0.3\n> sec as it was at the beginning.\n> psql takes ~80% of CPU time (according to top).\n> VACUUM helps to restore execution speed, but i think it is not the way out.\n> Is it BUG or FEATURE?\n\nYou need to create an index OR to add a primary key.\n\n> Postgres: 7.1.3;\n> System: Debian woody (kernel 2.4.17) on K6/450 with 128Mb RAM.\n\nIf you are starting development, it is highly recommended you upgrade to \nPostgreSQL 7.2.1. It is the most stable PostgreSQL release, with many bug \nfixes and speed improvements.\n\nAlso, if you have a Windows workstation, try installing pgAdmin2 \n(http://pgadmin.postgresql.com). This will speed-up your developments.\n\nDo not hesitate to come back to us to tell if it solved your problem.\n\nCheers,\nJean-Michel POURE\n", "msg_date": "Sun, 5 May 2002 10:53:03 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: Execution time of UPDATE raises dramatically!" } ]
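The behaviour reported in this thread follows from PostgreSQL's non-overwriting storage: every UPDATE leaves behind a dead row version, and without an index each lookup scans all accumulated versions. The remedies the replies suggest can be sketched like this (reusing the poster's table name; a sketch, not a tuned schema):

```sql
-- An index (here via PRIMARY KEY, which creates a unique btree index)
-- lets "WHERE id = ..." avoid a full sequential scan of the heap:
CREATE TABLE some_table (
    id    int PRIMARY KEY,
    value int
);

-- Each UPDATE still leaves a dead tuple behind; routine VACUUM is what
-- reclaims that space and keeps repeated updates from slowing down:
VACUUM ANALYZE some_table;
```

So it is indeed a "feature" of the MVCC design: the fix is an index for the lookups plus periodic vacuuming for the dead versions, not a server bug.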
[ { "msg_contents": "I spent the weekend fooling around trying to reduce the time needed to\nstart a fresh backend. Profiling seemed to indicate that much of the\ntime was going into loading entries into the relcache: relcache entry\nsetup normally requires fetching rows from several different system\ncatalogs. The obvious way to fix that is to preload entries somehow.\nIt turns out we already have a mechanism for this (the pg_internal.init\nfile), but it was only being used to preload entries for a few critical\nsystem indexes --- \"critical\" meaning \"relcache/catcache initialization\nbecomes an infinite recursion otherwise\". I rearranged things so that\npg_internal.init could cache entries for both plain relations and\nindexes, and then set it up to cache all the system catalogs and indexes\nthat are referenced by catalog caches. (This is a somewhat arbitrary\nchoice, but was easy to implement.)\n\nAs near as I can tell, this reduces the user-space CPU time involved in\na backend launch by about a factor of 5; and there's also a very\nsignificant reduction in traffic to shared memory, which should reduce\ncontention problems when multiple backends are involved. It's difficult\nto measure this stuff, however ... profiling is of limited reliability\nwhen you can only get a few clock samples per process launch.\n\nI'm planning to commit these changes when 7.3 opens, unless I hear\nobjections. A possible objection is that caching more system catalog\ndescriptors makes it more difficult to support user alterations to the\nsystem catalogs; but we don't support those anyway, and I haven't heard\nof anyone working to remove the other obstacles to it. 
(Note that this\nwouldn't completely prevent such things; it would just be necessary to\nfigure out when to delete the pg_internal.init cache file when making\nschema changes.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Jan 2002 15:27:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Improving backend launch time by preloading relcache" }, { "msg_contents": "Tom Lane wrote:\n> \n> I spent the weekend fooling around trying to reduce the time needed to\n> start a fresh backend. Profiling seemed to indicate that much of the\n> time was going into loading entries into the relcache: relcache entry\n> setup normally requires fetching rows from several different system\n> catalogs. The obvious way to fix that is to preload entries somehow.\n> It turns out we already have a mechanism for this (the pg_internal.init\n> file), but it was only being used to preload entries for a few critical\n> system indexes --- \"critical\" meaning \"relcache/catcache initialization\n> becomes an infinite recursion otherwise\". I rearranged things so that\n> pg_internal.init could cache entries for both plain relations and\n> indexes, and then set it up to cache all the system catalogs and indexes\n> that are referenced by catalog caches.
(This is a somewhat arbitrary\n> choice, but was easy to implement.)\n\nWhile examining this issue I found the following change\nabout REINDEX.\n\n Subject: [COMMITTERS] pgsql/src/backend catalog/index.c commands/ind\n...\n Date: Mon, 19 Nov 2001 21:46:13 -0500 (EST)\n From: tgl@postgresql.org\n To: pgsql-committers@postgresql.org\n\nCVSROOT: /cvsroot\nModule name: pgsql\nChanges by: tgl@postgresql.org 01/11/19 21:46:13\n\nModified files:\n src/backend/catalog: index.c \n src/backend/commands: indexcmds.c \n src/backend/tcop: utility.c \n\nLog message:\n Some minor tweaks of REINDEX processing: grab exclusive\n\tlock a little earlier, make error checks more uniform.\n\nThe change on tcop/utility.c seems to inhibit the execution\nof REINDEX of system indexes under postmaster which I allowed\nexcept some system indexes in 7.1.\nPlease put it back in 7.2.1.\n\nInhibited relations are the indexes of the followings.\n[Shared relations]\npg_database, pg_shadow, pg_group\n[Nailed relations]\npg_class, pg_type, pg_attribute, pg_proc\n\nThere are some trial stuff to handle nailed relations\n(mostly #ifdef'd ENABLE_REINDEX_NAILED_RELATIONS).\nEspecially setNewRelfilenode() unlinks the pg_internal.init\nfile in case the relation is nailed. However I don't rely\non the mechanism so much that I can't feel like removing the\n#ifdef's.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 29 Jan 2002 13:42:17 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> The change on tcop/utility.c seems to inhibit the execution\n> of REINDEX of system indexes under postmaster which I allowed\n> except some system indexes in 7.1.\n\nThat strikes me as a fairly dangerous idea. Do you really\nbelieve it's safe?
Also, why would it be safe to allow reindex\nat the table level and not at the index level, which is what\nthe code did before I touched it?\n\n> Especially setNewRelfilenode() unlinks the pg_internal.init\n> file in case the relation is nailed.\n\nProbably with this change I'm planning, it'll be necessary to unlink\npg_internal.init for any system relation, not only nailed ones.\nThanks for pointing that out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Jan 2002 00:06:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving backend launch time by preloading relcache " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > The change on tcop/utility.c seems to inhibit the execution\n> > of REINDEX of system indexes under postmaster which I allowed\n> > except some system indexes in 7.1.\n> \n> That strikes me as a fairly dangerous idea. Do you really\n> believe it's safe? Also, why would it be safe to allow reindex\n> at the table level and not at the index level, which is what\n> the code did before I touched it?\n\nREINDEX uses the relfilenode mechanism since 7.1 which\nlets the replacement of index files be under transactional\ncontrol. I think it's safe enough. One thing I had to worry\nabout REINDEX on system indexes is how to tell that the\ntarget index mustn't be used during the REINDEX operation.\nTurning off the relhasindex column in pg_class tells\nPG system that the indexes are unavailable now. It was \nimplemented by me before 7.0. I didn't provided the\nway to inactivate indexes individually however.\n\n> \n> > Especially setNewRelfilenode() unlinks the pg_internal.init\n> > file in case the relation is nailed.\n> \n> Probably with this change I'm planning, it'll be necessary to unlink\n> pg_internal.init for any system relation, not only nailed ones.\n> Thanks for pointing that out.\n\nWhat I meant was to confirm if it's really reliable.\nCurrently e.g.
the failure of rename of temporary\ninit file to pg_internal.init isn't fatal but it\nmay be fatal if we include many relcache info in\nthe pg_internal.init file.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 29 Jan 2002 14:35:24 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> REINDEX uses the relfilenode mechanism since 7.1 which\n> lets the replacement of index files be under transactional\n> control. I think it's safe enough.\n\nOkay, in that case tcop/utility is being too picky about all three\ncases, no?\n\n> What I meant was to confirm if it's really reliable.\n> Currently e.g. the failure of rename of temporary\n> init file to pg_internal.init isn't fatal but it\n> may be fatal if we include many relcache info in\n> the pg_internal.init file.\n\nCertainly not --- it must always be possible for a freshly started\nbackend to build the pg_internal.init file from scratch. The reason\nfor unlinking pg_internal.init after changing a catalog schema tuple\nis that future backends won't know you changed it unless\npg_internal.init is rebuilt.\n\nHmm ... what that says is that unlinking pg_internal.init in\nsetRelfilenode is the wrong place. The right place is *after*\ncommitting your transaction and *before* sending shared cache inval\nmessages. You can't unlink before you commit, or someone may rebuild\nusing the old information. (A backend that's already logged into the\nPROC array when you send SI inval will find out about the changes via SI\ninval. One that is not yet logged in must be prevented from reading the\nnow-obsolete pg_internal.init file. The startup sequence logs into PROC\nbefore trying to read pg_internal.init, so that part is done in the\nright order.)
So we need a flag that will cause the unlink to happen at\nthe right time in post-commit cleanup.\n\nVACUUM's got this same timing bug, although its change is only one of\nupdating relpages/reltuples which is not so critical...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Jan 2002 00:49:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving backend launch time by preloading relcache " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > REINDEX uses the relfilenode mechanism since 7.1 which\n> > lets the replacement of index files be under transactional\n> > control. I think it's safe enough.\n> \n> Okay, in that case tcop/utility is being too picky about all three\n> cases, no?\n\nProbably we don't have to keep the relhasindex info in the\ndb any longer and we had better keep some info about REINDEX\nin memory local to the backend. In fact in the current\nimplementation the relhasindex is local to the backend\nand any backend couldn't see the column committed to\nthe status off. \n\n> Hmm ... what that says is that unlinking pg_internal.init in\n> setRelfilenode is the wrong place.\n\nPossibly. I couldn't find the appropriate place(way) then\nand so #ifdef's are still there.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 29 Jan 2002 16:18:14 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache" }, { "msg_contents": "On Mon, 28 Jan 2002, Tom Lane wrote:\n\n> I spent the weekend fooling around trying to reduce the time needed to\n> start a fresh backend. Profiling seemed to indicate that much of the\n> time was going into loading entries into the relcache: relcache entry\n> setup normally requires fetching rows from several different system\n> catalogs.
The obvious way to fix that is to preload entries somehow.\n> It turns out we already have a mechanism for this (the pg_internal.init\n> file), but it was only being used to preload entries for a few critical\n> system indexes --- \"critical\" meaning \"relcache/catcache initialization\n> becomes an infinite recursion otherwise\". I rearranged things so that\n> pg_internal.init could cache entries for both plain relations and\n> indexes, and then set it up to cache all the system catalogs and indexes\n> that are referenced by catalog caches. (This is a somewhat arbitrary\n> choice, but was easy to implement.)\n>\n> As near as I can tell, this reduces the user-space CPU time involved in\n> a backend launch by about a factor of 5; and there's also a very\n> significant reduction in traffic to shared memory, which should reduce\n\nTom, what's about absolute timings ? It's quite interesting, because\nmany people have to keep persistent connections to backend and if\nstartup time would be small ( as in MySQL case ), it'd be possible just\nnot waste a system resources ( in some situations ).\n\n> contention problems when multiple backends are involved. It's difficult\n> to measure this stuff, however ... profiling is of limited reliability\n> when you can only get a few clock samples per process launch.\n>\n> I'm planning to commit these changes when 7.3 opens, unless I hear\n> objections. A possible objection is that caching more system catalog\n> descriptors makes it more difficult to support user alterations to the\n> system catalogs; but we don't support those anyway, and I haven't heard\n> of anyone working to remove the other obstacles to it.
(Note that this\n> wouldn't completely prevent such things; it would just be necessary to\n> figure out when to delete the pg_internal.init cache file when making\n> schema changes.)\n>\n\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 29 Jan 2002 13:44:10 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Probably we don't have to keep the relhasindex info in the\n> db any longer and we had better keep some info about REINDEX\n> in memory local to the backend.\n\nI never did much care for the \"change relhasindex\" hack. Why isn't\nIsIgnoringSystemIndexes a sufficient solution? I don't really care\nif REINDEX is a little bit slower than it might be, so just turning\noff use of *all* system indexes seems like an adequate answer.\n\n>> Hmm ... what that says is that unlinking pg_internal.init in\n>> setRelfilenode is the wrong place.\n\n> Possibly. I couldn't find the appropriate place(way) then\n> and so #ifdef's are still there.\n\nOkay.
I'll work on that when I commit the patches I have.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Jan 2002 10:05:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving backend launch time by preloading relcache " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Probably we don't have to keep the relhasindex info in the\n> > db any longer and we had better keep some info about REINDEX\n> > in memory local to the backend.\n> \n> I never did much care for the \"change relhasindex\" hack. Why isn't\n> IsIgnoringSystemIndexes a sufficient solution? I don't really care\n> if REINDEX is a little bit slower than it might be, so just turning\n> off use of *all* system indexes seems like an adequate answer.\n\nIt may be a reasonable solution.\nI thought of another idea while reading the thread [HACKERS]\nsequence indexes. Currently REINDEX recreates indexes from\nthe heap relations because the indexes may be corrupted.\nHowever we can recreate indexes from existent ones if\nthey are sane. It would be a lot faster than the current\nway for large tables. \n\nComments ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 30 Jan 2002 08:48:12 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I thought of another idea while reading the thread [HACKERS]\n> sequence indexes. Currently REINDEX recreates indexes from\n> the heap relations because the indexes may be corrupted.\n> However we can recreate indexes from existent ones if\n> they are sane. It would be a lot faster than the current\n> way for large tables. \n\nHmm ... you are thinking about the case where REINDEX is being used\nnot to recover from corruption, but just to shrink indexes that have\naccumulated too much free space.
Okay, that's a reasonable case to\ntry to optimize, though I'd like to think the problem will go away\nin a release or two when we implement VACUUM-time index shrinking.\n\nHowever, I'm not sure about the \"lot faster\" part. The only win\nI can see is that when rebuilding a btree index, you could skip\nthe sort step by reading the old index in index order. This'd\nrequire hacking things deep in the guts of the btree index method,\nnot at the level of the present REINDEX code. And AFAICS it doesn't\ntranslate at all to the other index types.\n\nNot sure it's worth the trouble. I'd rather see us expend the same\neffort on shrinking indexes on-the-fly in VACUUM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jan 2002 01:33:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving backend launch time by preloading relcache " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hmm ... you are thinking about the case where REINDEX is being used\n> not to recover from corruption, but just to shrink indexes that have\n> accumulated too much free space.\n\nYes.\n\n> Okay, that's a reasonable case to\n> try to optimize, though I'd like to think the problem will go away\n> in a release or two when we implement VACUUM-time index shrinking.\n> \n> However, I'm not sure about the \"lot faster\" part. The only win\n> I can see is that when rebuilding a btree index, you could skip\n> the sort step by reading the old index in index order.\n\nDon't we have to scan the (possibly larger) heap table ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 30 Jan 2002 16:54:05 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache" } ]
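The relfilenode mechanism that makes REINDEX transactional in this thread can be observed from SQL. A sketch (catalog layout as of 7.1-era and later servers; the listed relation names are just examples):

```sql
-- Each relation's on-disk file is named by relfilenode, not by its OID,
-- so REINDEX can build the new index under a fresh relfilenode and swap
-- it in at commit, keeping the old file until the transaction succeeds.
SELECT relname, relfilenode
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_proc');
```

Running the query before and after a REINDEX of a non-nailed index should show the index's relfilenode change, which is the "replacement of index files under transactional control" Hiroshi describes above.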
[ { "msg_contents": "As hinted somewhere within the schema-objects thread, I'm implementing GUC\nsettings that are stored in the pg_shadow and pg_database system catalogs\nand are activated in sessions for that user or database.\n\nThe basic functionality is done, although two issues sort of need a show\nof hands. First, the order in which these settings are processed: I\nfigured user should be last. I've also got these settings for each group,\nbut this would mean that if a user is a member of more than one group he\ngets a rather random processing order. Any comments on that? If the\ngroup thing stays, I think the most reasonable processing order is\ndatabase, groups, user, since it would be weird to have database between\ngroups and user.\n\nSecond, we need a few commands to set these things. My first thought was\n\nSET {USER name | DATABASE name} [SESSION] DEFAULT varname TO value;\n\nbut it would also make sense to have\n\nALTER {USER name | DATABASE name} SET DEFAULT varname TO value;\n\nOr there could be other permutations that are more or less wordy.\n\nSuggestions? Perhaps a reference to another RDBMS can give a hint.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 28 Jan 2002 16:18:10 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Per-database and per-user GUC settings" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The basic functionality is done, although two issues sort of need a show\n> of hands. First, the order in which these settings are processed: I\n> figured user should be last.\n\nMeaning the user setting wins if there's a conflict? Fine.\n\n> I've also got these settings for each group,\n> but this would mean that if a user is a member of more than one group he\n> gets a rather random processing order.\n\nThat bothers me; seems like it'll bite someone sooner or later. 
And I\ndon't see a compelling reason to have per-group settings if we have the\nother two.\n\nOne issue you didn't mention is what security level these options are\nassumed to have by GUC. That plays into what permissions are needed to\nissue the SET/ALTER commands.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Jan 2002 20:01:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Tom Lane writes:\n\n> One issue you didn't mention is what security level these options are\n> assumed to have by GUC. That plays into what permissions are needed to\n> issue the SET/ALTER commands.\n\nRight. My design was, the SET/ALTER commands are allowed to be executed\nby the user for his own pg_shadow record, the database owner for his\npg_database record, and superusers for everything. (Hmm, good we're not\ndoing the group thing. Would have gotten tricky here.)\n\nNormal users can only add USERSET settings. Other settings provoke a\nNOTICE at runtime (if they happen to sneak in somehow) and will otherwise\nbe ignored.\n\nSuperusers can also add SUSET records to their per-user settings. I'm\ncurrently unsure about whether to allow superusers to add SUSET settings\nto the per-database settings, because it would mean that the database\nsession would behave differently depending on what user invokes it. And\nsince it's not widely known what settings have what permission, I'm afraid\nit could be confusing. 
On the other hand, superusers should know what\nthey're doing.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 29 Jan 2002 19:32:15 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "I've thought of some issues that I think will need to be addressed\nbefore per-database/per-user GUC settings can become useful.\n\nOne thing that's bothered me for awhile is that GUC doesn't retain\nany memory of how a variable acquired its present value. It tries to\nresolve conflicts between different sources of values just by processing\nthe sources in \"the right order\". However this cannot work in general.\nSome examples:\n\n1. postgresql.conf contains a setting for some variable, say\nsort_mem=1000. DBA starts postmaster with a command-line option to\noverride the variable, say --sort_mem=2000. Works fine, until he\nSIGHUPs the postmaster for some unrelated reason, at which point\nsort_mem snaps back to 1000.\n\n2. User starts a session and says SET sort_mem=2000. Again, he\nsuccessfully overrides the postgresql.conf value ... but only as\nlong as he doesn't get SIGHUP'd.\n\nThese problems will get very substantially worse once we add\nper-database and per-user GUC settings to the set of possible\nvalue sources. I believe the correct fix is for GUC to define\na prioritized list of value sources (similar to the existing\nPGC_ settings, but probably not quite the same) and remember which\nsource gave the current setting for each variable. Comparing that\nto the source of a would-be new value tells you whether to accept\nor ignore the new value. 
This would make GUC processing\norder-insensitive which would be a considerable improvement (eg,\nI think you could get rid of the ugly double-scan-of-options hack\nin postmaster.c).\n\nAnother thought: DBAs will probably expect that if they change\nper-database/per-user GUC settings, they can SIGHUP to make existing\nbackends take on those settings. Can we support this? If the\nHUP is received outside any transaction then I guess we could start\na temporary transaction to read the tables involved. If we try to\nprocess HUP at a command boundary inside a transaction then we risk\naborting the whole user's transaction if there's an error. Arguably\nHUP should not be accepted while a transaction is in progress anyway,\nso the simplest answer might be to not process HUP until we are at\nthe idle loop and there's no open transaction block.\n\nThe whole subject of reacting to errors in the per-database/per-user GUC\nsettings needs more thought, too. Worst case scenario: superuser messes\nup his own per-user GUC settings to the point that he can't log in\nanymore. Can we provide an escape hatch, or is he looking at an initdb\nsituation (without even the chance to run pg_dump first :-()? I think\nthe GUC code presently tries to avoid any elog while processing\npostgresql.conf, so that it won't be the cause of backend startup\nfailures, but I'm not convinced that that approach scales. Certainly\nif we are reading tables we cannot absolutely guarantee no elog.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 31 Jan 2002 10:45:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Tom Lane writes:\n\n> One thing that's bothered me for awhile is that GUC doesn't retain\n> any memory of how a variable acquired its present value. It tries to\n> resolve conflicts between different sources of values just by processing\n> the sources in \"the right order\". 
However this cannot work in general.\n> Some examples:\n>\n> 1. postgresql.conf contains a setting for some variable, say\n> sort_mem=1000. DBA starts postmaster with a command-line option to\n> override the variable, say --sort_mem=2000. Works fine, until he\n> SIGHUPs the postmaster for some unrelated reason, at which point\n> sort_mem snaps back to 1000.\n\nThis sort of thing was once considered a feature, until someone came along\nand overloaded SIGHUP for unrelated things. ;-)\n\nHowever, possibly this could be covered if we just didn't propagate the\nSIGHUP signal to the postmaster's children. For pg_hba.conf and friends,\nyou don't need it at all once a session is started, and for\npostgresql.conf, the value of the \"feature\" is at least doubtful.\nIntuitively, the admin would probably expect his new settings to take\neffect in newly started sessions. If he wants to alter existing sessions\nit's probably best to signal the backend processes explicitly.\n\n> Another thought: DBAs will probably expect that if they change\n> per-database/per-user GUC settings, they can SIGHUP to make existing\n> backends take on those settings.\n\nI must disagree with this expectation.\n\nSIGHUP is restricted to re-reading configuration files. The\nper-database/per-user settings behave, in my mind, like SET commands\nexecuted immediately before the session loop starts. So once that session\nhas started, they are no longer relevant. Imagine, a user edits his own\nconvenience settings and the admin jumps in and edits some other unrelated\nsetting in the same array -- all the sudden the user's new settings get\nactivated.\n\nAdmins probably don't want to interfere with users' running sessions,\nespecially not asynchronously (from the user's point of view). If they\ndon't like what the user is doing, kill the session. Otherwise they may\ndo more harm than good. 
Now that I write this, not propagating the SIGHUP\nis something I would really favour.\n\n> The whole subject of reacting to errors in the per-database/per-user GUC\n> settings needs more thought, too. Worst case scenario: superuser messes\n> up his own per-user GUC settings to the point that he can't log in\n> anymore.\n\nYes, that is one of my concerns too, but I don't see me rewiring the whole\nexception handling in the backend because of this.\n\nConsider, what is \"messed up\"? We don't have options that prevent login.\nInvalid option strings are effectively ignored. If you cannot read\npg_database or pg_shadow you have more fundamental problems and you most\nlikely won't get to the options processing at all.\n\n> Can we provide an escape hatch, or is he looking at an initdb\n> situation (without even the chance to run pg_dump first :-()?\n\nIf the database settings are still messed up, you still have template1.\n(You generally wouldn't put actual settings into template1. -- You might\nas well put them into postgresql.conf then.)\n\nIf template1 is blocked or the user's settings are messed up, you have a\nmore fundamental problem, but it's not dissimilar to deleting all your\nusers. We have an escape hatch for that: Start a standalone backend.\n(No options would be processed in that case.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 31 Jan 2002 19:54:44 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "> > 1. postgresql.conf contains a setting for some variable, say\n> > sort_mem=1000. DBA starts postmaster with a command-line option to\n> > override the variable, say --sort_mem=2000. 
Works fine, until he\n> > SIGHUPs the postmaster for some unrelated reason, at which point\n> > sort_mem snaps back to 1000.\n> \n> This sort of thing was once considered a feature, until someone came along\n> and overloaded SIGHUP for unrelated things. ;-)\n\nI agree we should not propagate config changes to children on SIGHUP. \nThe only major question I had was that command-line flags get wiped out\nby postgresql.conf settings on SIGHUP. I know someone pointed this\nout but has a solution been proposed? 
That would surprise any admin.\n\nAdded to TODO:\n\n* Prevent SIGHUP and 'pg_ctl reload' from changing command line \n specified parameters to postgresql.conf defaults\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Feb 2002 12:00:07 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings" }, { "msg_contents": "Bruce Momjian writes:\n\n> I agree we should not propagate config changes to children on SIGHUP.\n> The only major question I had was that command-line flags get wiped out\n> by postgresql.conf settings on SIGHUP. I know someone pointed this\n> out but has a solution been proposed? That would surprise any admin.\n\nI think it's desirable and fully intentional. How else would you override\ncommand-line args without restarting the server? I can see where there\nmight be a concern, but in my mind, if you're using command-line arguments\nin permanent setups, you're not trying hard enough.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 1 Feb 2002 12:40:05 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Per-database and per-user GUC settings" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I agree we should not propagate config changes to children on SIGHUP.\n> > The only major question I had was that command-line flags get wiped out\n> > by postgresql.conf settings on SIGHUP. I know someone pointed this\n> > out but has a solution been proposed? That would surprise any admin.\n> \n> I think it's desirable and fully intentional. How else would you override\n> command-line args without restarting the server? 
I can see where there\n> might be a concern, but in my mind, if you're using command-line arguments\n> in permanent setups, you're not trying hard enough.\n\nBut right now, our command-line arguments override postgresql.conf only\non startup; if they reload, postgresql.conf overrides command-line args.\nSeems very counter-intuitive to me.\n\nI can't think of an acceptable solution so the best one may be to just\nremove the command-line args completely.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Feb 2002 12:44:47 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> 1. postgresql.conf contains a setting for some variable, say\n>> sort_mem=1000. DBA starts postmaster with a command-line option to\n>> override the variable, say --sort_mem=2000. Works fine, until he\n>> SIGHUPs the postmaster for some unrelated reason, at which point\n>> sort_mem snaps back to 1000.\n\n> This sort of thing was once considered a feature, until someone came along\n> and overloaded SIGHUP for unreleated things. ;-)\n\nOverloaded? SIGHUP still means what it's usually taken to mean, ie,\n\"re-read your configuration files\". Whether the configuration data\nlives in one file or several doesn't mean much AFAICS. In particular,\nif I override a config setting with a command-line switch, I wouldn't\nexpect that overriding to stop working when I try to change some\nunrelated value in postgresql.conf.\n\n> However, possibly this could be covered if we just didn't propagate the\n> SIGHUP signal to the postmaster's children. 
For pg_hba.conf and friends,\n> you don't need it at all once a session is started, and for\n> postgresql.conf, the value of the \"feature\" is at least doubtful.\n> Intuitively, the admin would probably expect his new settings to take\n> effect in newly started sessions. If he wants to alter existing sessions\n> it's probably best to signal the backend processes explicitly.\n\nI disagree with these conclusions entirely. If SIGHUP causes an\nalready-running postmaster to respond to config-file changes, why not\nalready-running backends too? If we had a monolithic server\nimplementation that didn't have any clear distinction between parent\nand child processes, surely you'd not make the above argument. But\nmore to the point, I can see clear usefulness to the DBA in being able\nto get existing backends to respond to config changes; I can't see any\nclear usefulness in failing to do so.\n\n>> Another thought: DBAs will probably expect that if they change\n>> per-database/per-user GUC settings, they can SIGHUP to make existing\n>> backends take on those settings.\n\n> I must disagree with this expectation.\n\n> SIGHUP is restricted to re-reading configuations files. The\n> per-database/per-user settings behave, in my mind, like SET commands\n> executed immediately before the session loop starts.\n\nI think you are allowing implementation simplicity to color your idea of\nwhat the DBA would like to have happen.\n\n> Imagine, a user edits his own\n> convenience settings and the admin jumps in and edits some other unrelated\n> setting in the same array -- all the sudden the user's new settings get\n> activated.\n\nThat's a fair point, but I don't envision the superuser inserting values\ninto other people's per-user GUC settings as a routine matter. That\nseems to me to be roughly comparable to Unix root setting a new password\nfor someone; it's not done lightly. When it is done, arbitrary delays\nuntil the setting takes effect aren't considered acceptable. 
The same\ngoes for changes to postgresql.conf.\n\n\nIn any case, I think we're talking at cross purposes, as none of the\nabove seems to me to be an argument against the point I was making.\nLet me try to state it more clearly. It seems to me that the existing\nGucContext mechanism folds together three considerations that are\nlogically distinct, thereby making the implementation both confusing\nand restrictive:\n\n1. When is it possible/rational to change a given setting? Two examples:\n the implementation doesn't physically support changing shared_buffers\n after shared memory initialization; while things would still work\n if different backends were running with different log_pid settings,\n it's not really sensible for them to do so. Useful concepts here\n include \"fixed at postmaster start\", \"fixed at backend start\", and\n \"changeable anytime\", and \"system-wide\" vs \"per-backend\". With the\n addition of per-DB GUC settings, \"database-wide\" enters the\n vocabulary too.\n\n2. From a security/privilege point of view, who should have the right to\n change a given setting, and over what span of application? Right now\n the only concepts here are \"superuser\" vs \"ordinary user\" and\n \"current session\" vs \"whole installation\". Adding database-wide GUC\n settings at least introduces the new concepts of \"database owner\" and\n \"within database\".\n\n3. Where did a given setting of a variable come from? (wired-in\n default, postmaster command line, config file, backend options, SET\n command, soon to be augmented by per-DB and per-user table entries)\n\nMy argument is that consideration 3 should only be used directly to\ndetermine which source wins when there is a conflict between different\nvalid sources of a given value (where validity is determined by\nconsiderations 1 and 2). 
By making that comparison explicit, rather\nthan trying to use processing order as a substitute for it, we can avoid\nthe problems the current implementation has with dynamic changes of the\nconfiguration source data.\n\nYou seem to be arguing that the current implementation isn't broken\nbecause we can define away the need to support dynamic changes of\nconfiguration data, but I don't buy that; at least not overall, even\nif there are good arguments for restricting changes of particular\nvariables in particular scenarios.\n\n\n>> The whole subject of reacting to errors in the per-database/per-user GUC\n>> settings needs more thought, too. Worst case scenario: superuser messes\n>> up his own per-user GUC settings to the point that he can't log in\n>> anymore.\n\n> Yes, that is one of my concerns too, but I don't see me rewiring the whole\n> exception handling in the backend because of this.\n\nNo, of course not; my point was to make sure that there is some kind of\nrecovery path if things get horribly messed up.\n\n> If the database settings are still messed up, you still have template1.\n> (You generally wouldn't put actual settings into template1. -- You might\n> as well put them into postgresql.conf then.)\n\nRight. Should we wire that restriction into the code as a safety measure?\npostgresql.conf can be changed without a functioning database system,\nbut a blown setting for template1 might really mess you up.\n\n> If template1 is blocked or the user's settings are messed up, you have a\n> more fundamental problem, but it's not dissimilar to deleting all your\n> users. 
We have an escape hatch for that: Start a standalone backend.\n> (No options would be processed in that case.)\n\nOkay, if it's agreed that standalone backends ignore these settings then\nI think we can survive a screwup.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 13:12:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I think it's desirable and fully intentional. How else would you override\n> command-line args without restarting the server?\n\nI say you wouldn't. If it's on the command line then it overrides\npostgresql.conf for that run of the postmaster; full stop, no \"if\"s.\nIf you wanted it to be changeable within-run by postgresql.conf then you\nshould have edited postgresql.conf to start with.\n\nAs the implementation currently works, applying a setting via the\ncommand line is entirely unsafe unless you then back it up with fixing\nthe value in postgresql.conf; otherwise an unrelated change to\npostgresql.conf, perhaps much later, blows the command-line setting\nout of the water. You might as well eliminate command-line settings\nentirely, if they have to be accompanied by a matching change in\npostgresql.conf in order to work reliably.\n\nIMHO the only reason we haven't gotten squawks about this is that\n(a) the majority of uses of command-line arguments are for options\nthat can't be changed after postmaster start anyway (-i, -p, -B, -N);\n(b) since the factory-default postgresql.conf has all settings commented\nout, it doesn't actually cause reversions to happen at SIGHUP.\n\nIf any significant number of people were actually being exposed to the\ncurrent behavior, we'd be getting complaints; lots of 'em. 
At the\nmoment it's only a corner case because of considerations (a) and (b),\nso few people have had it happen to them.\n\n> but in my mind, if you're using command-line arguments\n> in permanent setups, you're not trying hard enough.\n\nYou're essentially saying that command-line arguments are only useful\nfor prototyping changes that you intend to make in postgresql.conf\nimmediately thereafter. Again, why have we got a command-line option\nfacility at all? Might as well make the change in postgresql.conf\nto begin with, if that's the viewpoint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 13:26:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I can't think of an acceptable solution so the best one may be to just\n> remove the command-line args completely.\n\nI've been proposing a workable implementation in this very thread.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 13:27:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I can't think of an acceptable solution so the best one may be to just\n> > remove the command-line args completely.\n> \n> I've been proposing a workable implementation in this very thread.\n\nWhich is to track where the setting came from, right? I was thinking it\nwasn't workable because people were complaining about it. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Feb 2002 13:29:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I've been proposing a workable implementation in this very thread.\n\n> Which is to track where the setting came from, right? I was thinking it\n> wasn't workable because people were complaining about it. :-)\n\nPeter's complaining because he thinks the current behavior is OK.\nAFAICT he isn't saying that my idea wouldn't make the behavior be\nwhat you and I want, but that he doesn't like that behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 13:35:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I've been proposing a workable implementation in this very thread.\n> \n> > Which is to track where the setting came from, right? I was thinking it\n> > wasn't workable because people were complaining about it. :-)\n> \n> Peter's complaining because he thinks the current behavior is OK.\n> AFAICT he isn't saying that my idea wouldn't make the behavior be\n> what you and I want, but that he doesn't like that behavior.\n\nGetting back to propagating SIGHUP to the children, if I have issued a\nSET in my session, does a postmaster SIGHUP wipe that out, and even if\nit doesn't, what if I do a SHOW early in my session, see the setting is\nOK, then find later that it is changed, for example, the ONLY\ninheritance setting. I guess what I am saying is that I see session\nstability as a feature, rather than propagating changes to running\nchildren, which I think could cause more harm than good.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Feb 2002 13:57:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Getting back to propagating SIGHUP to the children, if I have issued a\n> SET in my session, does a postmaster SIGHUP wipe that out,\n\nAt present, yes it does, if postgresql.conf contains a conflicting\nsetting. I think everyone agrees that that's wrong. Peter proposes\nto fix it by disabling backend response to SIGHUP entirely. I propose\nto fix it by changing the code to explicitly understand that SET\noverrides postgresql.conf values, regardless of when postgresql.conf\nis scanned.\n\n> and even if\n> it doesn't, what if I do a SHOW early in my session, see the setting is\n> OK, then find later that it is changed, for example, the ONLY\n> inheritance setting.\n\nThat's a powerful example but I think it's a red herring. Changing the\nsystem-wide default for a semantically critical value is not something\nthat one would do lightly or without preparation. If your applications\nassume a particular inheritance setting, they're not going to be any\nless broken if they connect immediately after the postgresql.conf change\noccurs than if they connect just beforehand. An app that depends on a\nspecific setting should SET that value, not just SHOW it ... 
and if it\nhas SET the value, then that is what it will get for the duration of\nits run, under either my proposal or Peter's.\n\nSimilarly, under my proposal per-user or per-database GUC settings would\noverride postgresql.conf settings even after SIGHUP, because the code\nwould explicitly treat them as doing so. Under Peter's proposal the\nonly way to maintain that priority relationship is to disable backends\nfrom responding to SIGHUP altogether.\n\nI believe that disabling SIGHUP response in backends is throwing the\nbaby out with the bathwater. There are plenty of examples where you\nabsolutely do want backends to respond to configuration changes: think\nabout any of the debug-logging or statistics-gathering options. If you\nare trying to crank up the debug/stats level in order to find out why\nyour system is misbehaving *now*, it isn't going to help you if the\nexisting backends steadfastly ignore your attempt to change the setting.\n\nA possible compromise is to distinguish between options that change\napplication-visible semantics (like inheritance, datestyle, etc)\nand those that do not (logging options, sort_mem, the planner cost\nsettings, etc). It would be reasonable to treat the former as\n\"frozen after backend start unless changed via SET\". It is not\nreasonable to put the same straitjacket on all system settings,\nhowever. I believe that my proposal could easily be extended to\nsupport such behavior. Peter's approach is all-or-nothing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 14:27:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Tom Lane writes:\n\n> Let me try to state it more clearly. 
It seems to me that the existing\n> GucContext mechanism folds together three considerations that are\n> logically distinct, thereby making the implementation both confusing\n> and restrictive:\n\n(Nice explanation -- I'm beginning to buy it ;-) )\n\n> 1. When is it possible/rational to change a given setting? Two examples:\n> the implementation doesn't physically support changing shared_buffers\n> after shared memory initialization; while things would still work\n> if different backends were running with different log_pid settings,\n> it's not really sensible for them to do so. Useful concepts here\n> include \"fixed at postmaster start\", \"fixed at backend start\", and\n> \"changeable anytime\", and \"system-wide\" vs \"per-backend\". With the\n> addition of per-DB GUC settings, \"database-wide\" enters the\n> vocabulary too.\n\nSo basically we map PGC_POSTMASTER => \"fixed at postmaster start\",\nPGC_BACKEND => \"fixed at backend start\", PGC_{SIGHUP,SUSET,USERSET} =>\n\"changeable anytime\".\n\nObviously, PGC_POSTMASTER is always system-wide and PGC_BACKEND,\nPGC_SUSET, and PGC_USERSET are always per-backend. For PGC_SIGHUP, we\ncurrently have this comment in the code:\n\n/*\n * Hmm, the idea of the SIGHUP context is \"ought to be global,\n * but can be changed after postmaster start\". But there's\n * nothing that prevents a crafty administrator from sending\n * SIGHUP signals to individual backends only.\n */\n\nIn fact, the options that are PGC_SIGHUP seem to be relying on this\nglobal theme heavily, so let's count PGC_SIGHUP into system-wide.\n\nSo basically, you have four useful combinations of these settings, which\ncorrespond to PGC_POSTMASTER, PGC_BACKEND, PGC_SIGHUP, and\nPGC_{SUSET,USERSET}.\n\nAs for database-wide, I'm not sure how to interpret that. Does that mean,\nthis parameter must be the same for all concurrent sessions on the same\ndatabase? Is that something we ought to have?\n\n> 2. 
From a security/privilege point of view, who should have the right to\n> change a given setting, and over what span of application? Right now\n> the only concepts here are \"superuser\" vs \"ordinary user\" and\n> \"current session\" vs \"whole installation\". Adding database-wide GUC\n> settings at least introduces the new concepts of \"database owner\" and\n> \"within database\".\n\nSuperuser vs ordinary user doesn't have a lot of options: ordinary users\nonly get a shot at a subset of the \"change anytime\" + \"per-backend\nsettings\" settings, which splits up PGC_SUSET and PGC_USERSET.\n\nCurrent session vs whole installation comes with the territory. What you\ncan set at postmaster start necessarily affects the whole installation.\nSame with SIGHUP (probably, see concerns above). The rest is restricted\nto the running session. (Unless you want to propagate changes from one\nsession to another -- that seems a little too far out for me.)\n\nI don't see \"database owner\" as an independent concept: if a database\nowner is an otherwise ordinary user he only gets to change the\nordinary-user settings. You could invent a granularity between that, but\nI'd like to see a pressing need first. That would get too complicated to\nmanage, I think.\n\nSo basically, I think the current five levels of PGC_* should cover 1 and\n2 OK. Again, extensions are possible, but I don't see a need yet.\n\n> 3. Where did a given setting of a variable come from? (wired-in\n> default, postmaster command line, config file, backend options, SET\n> command, soon to be augmented by per-DB and per-user table entries)\n>\n> My argument is that consideration 3 should only be used directly to\n> determine which source wins when there is a conflict between different\n> valid sources of a given value (where validity is determined by\n> considerations 1 and 2).\n\nOK, so what's the order:\n\n1. run-time SET\n2. per-user setting\n3. per-database setting\n4. backend options from client\n5. 
postmaster command line\n6. config file\n7. wired-in default\n\nMeaning, any setting if provided can only take effect if the previous\nsource of the setting had a higher number.\n\nThe given list represents the current state of affairs plus the\nper-user/database settings inserted and the postmaster command line moved\nfrom 6.5 to 5.\n\nAs for implementation, we could just insert an int field with these\nnumbers (or some macros) into the struct config_generic, and so the big\nstruct initializers in guc.c would all start out as 7. The\nSetConfigOption code would simply need an extra comparison to check\nwhether it can proceed.\n\nAnother nice thing about this approach is that the current \"bool\nmakeDefault\" parameter, which decides whether to save the new setting as\ndefault for RESET ALL, could be folded into \"source > 1\" (since\neverything except SET establishes a session default).\n\n> > If template1 is blocked or the user's settings are messed up, you have a\n> > more fundamental problem, but it's not dissimilar to deleting all your\n> > users. We have an escape hatch for that: Start a standalone backend.\n> > (No options would be processed in that case.)\n>\n> Okay, if it's agreed that standalone backends ignore these settings then\n> I think we can survive a screwup.\n\nMaybe there should be an option to turn this on or off, depending on what\nthe default is. Not sure yet.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n", "msg_date": "Sat, 2 Feb 2002 12:16:06 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> So basically, you have four useful combinations of these settings, which\n> correspond to PGC_POSTMASTER, PGC_BACKEND, PGC_SIGHUP, and\n> PGC_{SUSET,USERSET}.\n\nCheck.\n\n> As for database-wide, I'm not sure how to interpret that. 
Does that mean,\n> this parameter must be the same for all concurrent sessions on the same\n> database? Is that something we ought to have?\n\nWell, we haven't got it now, and I'm not sure whether there's anything\nwe'd really need it for. I was speculating that we might want it for\nsomething to do with schema paths or some such ... but it's hard to see\nwhy those couldn't be treated as per-backend. Certainly anything that\ncould be user-settable wouldn't be database-wide.\n\n>> 2. From a security/privilege point of view, who should have the right to\n>> change a given setting, and over what span of application?\n\n> I don't see \"database owner\" as an independent concept:\n\nWell, it is in terms of who gets to set the per-database GUC settings,\nbut I guess that doesn't directly relate to the privilege levels of the\nindividual settings.\n\n> So basically, I think the current five levels of PGC_* should cover 1 and\n> 2 OK. Again, extensions are possible, but I don't see a need yet.\n\nOkay, we can live with that for now, until/unless we see a reason to\ninvent a level that has something to do with per-database settings.\n\n> OK, so what's the order:\n\n> 1. run-time SET\n> 2. per-user setting\n> 3. per-database setting\n> 4. backend options from client\n> 5. postmaster command line\n> 6. config file\n> 7. wired-in default\n\nNot sure; shouldn't backend options from client supersede the settings\ntaken from tables? 
I'd be inclined to move up your #4 to between 1 and 2.\nOtherwise this looks good.\n\n> Meaning, any setting if provided can only take effect if the previous\n> source of the setting had a higher number.\n\nSame or higher number (to allow repeated SETs or config file changes).\n\n> Another nice thing about this approach is that the current \"bool\n> makeDefault\" parameter, which decides whether to save the new setting as\n> default for RESET ALL, could be folded into \"source > 1\" (since\n> everything except SET establishes a session default).\n\nRight, I always felt that makeDefault was just a kluge to deal with one\nof the deficiencies of the processing-order-sensitive implementation.\n\n>> Okay, if it's agreed that standalone backends ignore these settings then\n>> I think we can survive a screwup.\n\n> Maybe there should be an option to turn this on or off, depending on what\n> the default is. Not sure yet.\n\nThat makes a lot of sense. The standalone-backend environment is\nalready weird enough to make it mistake-prone; not loading your normal\nsettings would only make it more so. I'd vote for having standalone\nbackends load your normal settings by default, but offer a command-line\nswitch to suppress that behavior; people would only use the switch if\ntheir settings were too broken to load.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Feb 2002 13:35:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "\ntest. Sorry.\n\nTom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I think it's desirable and fully intentional. How else would you override\n> > command-line args without restarting the server?\n> \n> I say you wouldn't. 
If it's on the command line then it overrides\n> postgresql.conf for that run of the postmaster; full stop, no \"if\"s.\n> If you wanted it to be changeable within-run by postgresql.conf then you\n> should have edited postgresql.conf to start with.\n> \n> As the implementation currently works, applying a setting via the\n> command line is entirely unsafe unless you then back it up with fixing\n> the value in postgresql.conf; otherwise an unrelated change to\n> postgresql.conf, perhaps much later, blows the command-line setting\n> out of the water. You might as well eliminate command-line settings\n> entirely, if they have to be accompanied by a matching change in\n> postgresql.conf in order to work reliably.\n> \n> IMHO the only reason we haven't gotten squawks about this is that\n> (a) the majority of uses of command-line arguments are for options\n> that can't be changed after postmaster start anyway (-i, -p, -B, -N);\n> (b) since the factory-default postgresql.conf has all settings commented\n> out, it doesn't actually cause reversions to happen at SIGHUP.\n> \n> If any significant number of people were actually being exposed to the\n> current behavior, we'd be getting complaints; lots of 'em. At the\n> moment it's only a corner case because of considerations (a) and (b),\n> so few people have had it happen to them.\n> \n> > but in my mind, if you're using command-line arguments\n> > in permanent setups, you're not trying hard enough.\n> \n> You're essentially saying that command-line arguments are only useful\n> for prototyping changes that you intend to make in postgresql.conf\n> immediately thereafter. Again, why have we got a command-line option\n> facility at all? 
Might as well make the change in postgresql.conf\n> to begin with, if that's the viewpoint.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Feb 2002 23:47:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings" } ]
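The priority scheme the thread converges on -- SET first, then client backend options, per-user, per-database, postmaster command line, config file, and finally the wired-in default, with a new value applied only when its source is the same or more authoritative -- can be sketched in a few lines. This is an illustrative Python model of the comparison rule only, not the guc.c implementation, and the constant names are invented for the example:

```python
# Sketch of the source-priority rule from the thread above (NOT guc.c).
# Lower number = more authoritative source.  A new assignment takes
# effect only if its source is at least as authoritative as the source
# of the current value ("same or higher", so repeated SETs and config
# file re-reads against config-file values still work).
(SET_CMD, CLIENT_OPTS, PER_USER, PER_DATABASE,
 CMDLINE, CONFIG_FILE, DEFAULT) = range(1, 8)

class GucVariable:
    def __init__(self, name, default):
        self.name = name
        self.value = default
        self.source = DEFAULT

    def set_value(self, value, source):
        if source <= self.source:
            self.value = value
            self.source = source
            return True
        return False

sort_mem = GucVariable("sort_mem", 512)
sort_mem.set_value(1024, CMDLINE)      # postmaster command-line option
sort_mem.set_value(2048, SET_CMD)      # interactive SET wins
sort_mem.set_value(4096, CONFIG_FILE)  # later SIGHUP re-read: rejected
```

Under this rule a SIGHUP re-read of postgresql.conf (source `CONFIG_FILE`) can no longer clobber a value pinned by the command line or by SET, which is the behavior argued for above.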
[ { "msg_contents": "I'm sort of confused about the ways in which you can access tuple data you\nget from heap scans or syscache lookups. Perhaps this can be cleared up\nand documented, because new contributors might like this information.\nHere's the information and questions I have:\n\nTuples obtained from heap scans (heap_getnext, etc.) can always be\ndissected with heap_getattr().\n\nTuples obtained from syscache lookups (SearchSysCache) can always be\ndissected with SysCacheGetAttr().\n\nWhat happens when I try heap_getattr() on a syscache tuple?\n\nTuples obtained from heap scans or syscache lookups may be dissected via\nGETSTRUCT if and only if the attribute and all attributes prior to it are\nfixed-length and non-nullable.\n\n(Probably there should be cases about explicit index scans here, but I\nhaven't done those and they should be rare.)\n\nThe question I'm particularly struggling with is, when does TOASTing and\nde-TOASTing happen? And if it doesn't, what's the official way to do it?\nI've found PG_DETOAST_DATUM and PG_DETOAST_DATUM_COPY. Why would I want a\ncopy? (How can detoasting happen without copying?) And if I want a copy,\nin what memory context does it live? And can I just pfree() the copy if I\ndon't want it any longer?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 28 Jan 2002 16:51:30 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Rules for accessing tuple data in backend code" }, { "msg_contents": "I can't help with most of the question, but as I've implemented new\nTOAST access methods, I can answer this part:\n\nOn Mon, 2002-01-28 at 21:51, Peter Eisentraut wrote:\n> \n> The question I'm particularly struggling with is, when does TOASTing and\n> de-TOASTing happen? And if it doesn't, what's the official way to do it?\n> I've found PG_DETOAST_DATUM and PG_DETOAST_DATUM_COPY. Why would I want a\n> copy? (How can detoasting happen without copying?) 
And if I want a copy,\n> in what memory context does it live? And can I just pfree() the copy if I\n> don't want it any longer?\n\nI think there are two contexts for detoasting.\n\n1) fmgr functions. The PG_GETARG macro fetches the argument Datum and\npasses it through PG_DETOAST_DATUM (if the Datum is a TOASTable type).\nThus the Datum from PG_GETARG_ is always detoasted.\n\n2) Other access. I believe that heap_getattr will return a Datum -which\nfor TOASTable types will be a varlena struct. This may contain either\nthe literal data for the value (compressed or not) or the TOAST-pointer\n(toastrelid, toastvalueid). These various cases are distinguished by the\ntop two bits of the varlena length field.\n\nIn all cases other than the \"uncompressed, inline\" case, the value must\nbe passed through PG_DETOAST_DATUM to guarantee a \"standard\" varlena\ni.e. a value that is detoasted and stored in memory, can be accessed\ndirectly from C etc. \n\nHowever, the pointer returned by PG_DETOAST_DATUM might be *either* a\npointer to the original varlena struct, or to a decompressed value in\nnewly palloc'ed space. Thus the need for PG_DETOAST_DATUM_COPY, which\nmakes a copy of an uncompressed varlena, so that you can treat all the\ncases in the same way. I believe that the detoasted datums from the\n_COPY macros are ordinary things allocated by palloc in the current\nmemory context, so you can write to them and pfree() them if you wish.\nThe non-COPY variety might return a pointer to the inside of the tuple\ndata, which is not to be modified!\n\nfmgr.h defines all the access methods, and also defines PG_FREE_IF_COPY,\nwhich compares the pointer of the detoasted Datum to the original Datum\npointer and only calls pfree if they differ. 
\n\nHope this helps.\n\nRegards\n\nJohn\n\n\n", "msg_date": "28 Jan 2002 22:53:48 +0000", "msg_from": "John Gray <jgray@azuli.co.uk>", "msg_from_op": false, "msg_subject": "Re: Rules for accessing tuple data in backend code" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tuples obtained from heap scans (heap_getnext, etc.) can always be\n> dissected with heap_getattr().\n\nCheck. Index scans the same.\n\n> Tuples obtained from syscache lookups (SearchSysCache) can always be\n> dissected with SysCacheGetAttr().\n\nCheck.\n\n> What happens when I try heap_getattr() on a syscache tuple?\n\nWorks fine; in fact, SysCacheGetAttr is just a convenience routine that\ninvokes heap_getattr. The reason it's convenient is that you don't\nnecessarily have a tuple descriptor handy for the catalog that underlies\na particular syscache. SysCacheGetAttr knows where to find a matching\ndescriptor.\n\n> Tuples obtained from heap scans or syscache lookups may be dissected via\n> GETSTRUCT if and only if the attribute and all attributes prior to it are\n> fixed-length and non-nullable.\n\nRight. GETSTRUCT per se isn't very interesting; a more helpful way to\nphrase the above is that \"a C struct definition can be overlaid onto\nthe contents of a tuple, but it's only useful out to the last\nfixed-length, non-null field. We try to arrange the contents of system\ncatalogs so that that usefulness extends as far as possible.\"\n\n> (Probably there should be cases about explicit index scans here, but I\n> haven't done those and they should be rare.)\n\nFor these purposes index and heap scans are the same; either one\nultimately gives back a pointer to a tuple sitting in a disk buffer.\n\n> The question I'm particularly struggling with is, when does TOASTing and\n> de-TOASTing happen?\n\nIt doesn't, at the level of heap_getattr(). 
For a pass-by-reference\ndatatype (which includes all toastable types, a fortiori), heap_getattr\nsimply gives you back a Datum which is a pointer to the relevant place\nin the tuple. In general, you are not supposed to do anything with a\nDatum except pass it around, unless you know the specific datatype of\nthe value and know how to operate on it. For toastable datatypes, part\nof \"knowing how to operate on it\" is to know to call pg_detoast_datum()\nanytime you are handed a Datum that might possibly point at a toasted\nvalue.\n\nFor the most part, datatype-specific operations are localized in\nfmgr-callable functions, so it's possible to hide most of the knowledge\nabout detoasting in PG_GET_FOO macros for the affected datatypes.\n\n> I've found PG_DETOAST_DATUM and PG_DETOAST_DATUM_COPY. Why would I want a\n> copy? (How can detoasting happen without copying?)\n\nPG_DETOAST_DATUM_COPY guarantees to give you a copy, even if the\noriginal wasn't toasted. This allows you to scribble on the input,\nin case that happens to be a useful way of forming your result.\nWithout a forced copy, a routine for a pass-by-ref datatype must\nNEVER, EVER scribble on its input ... because very possibly it'd\nbe scribbling on a valid tuple in a disk buffer, or a valid entry\nin the syscache.\n\n> And if I want a copy, in what memory context does it live?\n\nIt's just palloc'd, so it's whatever is CurrentMemoryContext.\n\n> And can I just pfree() the copy if I don't want it any longer?\n\nYes. In many scenarios you don't have to because CurrentMemoryContext\nis short-lived, though. There are a lot of pfree's in the system that\nare really just wasted cycles.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Jan 2002 20:24:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rules for accessing tuple data in backend code " } ]
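The detoasting contract described in the thread above -- PG_DETOAST_DATUM may hand back the original pointer or a fresh palloc'd copy, PG_DETOAST_DATUM_COPY always returns a private copy, and PG_FREE_IF_COPY frees only when the pointers differ -- boils down to pointer-identity checks. Here is a hedged Python sketch of that contract, with object identity standing in for pointer equality; it models the behavior only, not the real C macros or the varlena bit layout:

```python
import zlib

def detoast(datum):
    """Model of PG_DETOAST_DATUM (a sketch, not the C macro): return
    the value in plain, directly usable form.  An already-plain datum
    is handed back AS THE SAME OBJECT -- it may point into a disk
    buffer or syscache entry, so callers must not scribble on it --
    while a compressed datum comes back as a fresh allocation."""
    kind, payload = datum
    if kind == "plain":
        return payload
    return bytearray(zlib.decompress(bytes(payload)))

def detoast_copy(datum):
    """Model of PG_DETOAST_DATUM_COPY: always return a private copy,
    so the caller may modify the result freely."""
    result = detoast(datum)
    if result is datum[1]:          # detoast() took the no-copy path
        result = bytearray(result)  # force a copy
    return result

def free_if_copy(datum, value, pfree):
    """Model of PG_FREE_IF_COPY: release the detoasted value only when
    it is a different object from the original datum's payload."""
    if value is not datum[1]:
        pfree(value)
```

The `is` checks play the role of comparing the detoasted pointer against the original Datum, which is all PG_FREE_IF_COPY does.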
[ { "msg_contents": "Hi,\n\nthanks for reading this message.\n\nI have a table (in a postgres database) looking like this:\n\n Table \"zdec_bhab\"\n Attribute | Type | Modifier\n-----------+-----------+----------\n run | bigint |\n evt | bigint |\n ...\n pcha | real[] |\n ...\n\nwhere pcha is a 2D array, i.e. the first index can go from 1 to some\nnumber and the second is 1..3. \n\nNow, I'd like to create a plpgsql function taking as an argument \ntwo vectors (arrays) from pcha:\n\nCREATE FUNCTION mytest(real[],real[]) RETURNS real AS '\nDECLARE\n p1 ALIAS FOR $1;\n p2 ALIAS FOR $2;\nbegin\n-- RAISE NOTICE ''xxx %'',p2;\n return p2[1][1];\nend;' LANGUAGE 'plpgsql';\n\nI do the following query:\n\nselect\npcha[1:1][1:3],pcha[2:2][1:3],mytest(pcha[1:1][1:3],pcha[2:2][1:3]) from\nzdec_bhab where nch>=2; \n\nwhich yields:\n pcha | \npcha | mytest\n---------------------------------------------+---------------------------------------------+--------\n {{\"-21.0788\",\"35.0317\",\"19.2111\"}} |\n{{\"21.0605\",\"-34.995\",\"-19.2111\"}} |\n\ni.e. mytest seems to return something empty... however, If I uncomment\nthe RAISE NOTICE\nline, I see the correct values (as in the output of the select\nstatement).\n\nIf I do \n\nselect\npcha[1:1][1:3],pcha[2:2][1:3],mytest(pcha[2:2][1:3],pcha[1:1][1:3]) from\nzdec_bhab where nch>=2; \n\n(i.e. the arguments of mytest exchanged), I get the correct values.\n\nAm I doing something wrong or is this a 'feature' ? \n(I'm using PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC\n2.95.3).\n\nIs it possible in plpgsql to have functions with more than\none array argument ? 
What about plperl ?\n\nOr do I have to convert the 2D array into three 1D arrays pcha_x, pcha_y\nand pcha_z ?\n\n\nbest regards & thanks for the help,\n\nAndré\n", "msg_date": "Tue, 29 Jan 2002 00:52:21 +0100", "msg_from": "Andre Holzner <Andre.Holzner@cern.ch>", "msg_from_op": true, "msg_subject": "plpgsql function with more than one array argument" }, { "msg_contents": "Andre Holzner <Andre.Holzner@cern.ch> writes:\n> Am I doing something wrong or is this a 'feature' ? \n\nWhat's biting you is that the array slice operator uses the provided\nlower bounds in the resultant array. For example:\n\nregression=# select pcha from zdec_bhab;\n pcha\n------------------------------------\n {{11,12,13},{21,22,23},{31,32,33}}\n(1 row)\n\nregression=# select array_dims(pcha) from zdec_bhab;\n array_dims\n------------\n [1:3][1:3]\n(1 row)\n\nregression=# select pcha[2:2][1:3] from zdec_bhab;\n pcha\n--------------\n {{21,22,23}}\n(1 row)\n\nregression=# select array_dims(pcha[2:2][1:3]) from zdec_bhab;\n array_dims\n------------\n [2:2][1:3]\n(1 row)\n\nSo your function receives an array with first index starting at 2,\nwhich it's not expecting; its attempt to fetch element [1][1] is out\nof bounds and produces a NULL.\n\nOffhand this behavior seems like a misfeature: perhaps it'd be more\nsensible for the extracted slice to always have index lower bounds\nset to 1. 
But I'd like to see some discussion before changing it\n(and I don't plan to touch it before 7.2 release, in any case ;-)).\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 16:56:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Array slice subscripts (was Re: [SQL] plpgsql function with more than\n\tone array argument)" }, { "msg_contents": "\nIs this a TODO item?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Andre Holzner <Andre.Holzner@cern.ch> writes:\n> > Am I doing something wrong or is this a 'feature' ? \n> \n> What's biting you is that the array slice operator uses the provided\n> lower bounds in the resultant array. For example:\n> \n> regression=# select pcha from zdec_bhab;\n> pcha\n> ------------------------------------\n> {{11,12,13},{21,22,23},{31,32,33}}\n> (1 row)\n> \n> regression=# select array_dims(pcha) from zdec_bhab;\n> array_dims\n> ------------\n> [1:3][1:3]\n> (1 row)\n> \n> regression=# select pcha[2:2][1:3] from zdec_bhab;\n> pcha\n> --------------\n> {{21,22,23}}\n> (1 row)\n> \n> regression=# select array_dims(pcha[2:2][1:3]) from zdec_bhab;\n> array_dims\n> ------------\n> [2:2][1:3]\n> (1 row)\n> \n> So your function receives an array with first index starting at 2,\n> which it's not expecting; its attempt to fetch element [1][1] is out\n> of bounds and produces a NULL.\n> \n> Offhand this behavior seems like a misfeature: perhaps it'd be more\n> sensible for the extracted slice to always have index lower bounds\n> set to 1. 
But I'd like to see some discussion before changing it\n> (and I don't plan to touch it before 7.2 release, in any case ;-)).\n> \n> Comments anyone?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Feb 2002 22:55:55 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Array slice subscripts (was Re: [SQL] plpgsql function" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is this a TODO item?\n\nI guess so, since no one seems to have objected to the proposed change.\nIt's a pretty trivial change; I'll take care of it.\n\n\t\t\tregards, tom lane\n\n> Tom Lane wrote:\n>> Andre Holzner <Andre.Holzner@cern.ch> writes:\n> Am I doing something wrong or is this a 'feature' ? \n>> \n>> What's biting you is that the array slice operator uses the provided\n>> lower bounds in the resultant array. 
For example:\n>> \n>> regression=# select pcha from zdec_bhab;\n>> pcha\n>> ------------------------------------\n>> {{11,12,13},{21,22,23},{31,32,33}}\n>> (1 row)\n>> \n>> regression=# select array_dims(pcha) from zdec_bhab;\n>> array_dims\n>> ------------\n>> [1:3][1:3]\n>> (1 row)\n>> \n>> regression=# select pcha[2:2][1:3] from zdec_bhab;\n>> pcha\n>> --------------\n>> {{21,22,23}}\n>> (1 row)\n>> \n>> regression=# select array_dims(pcha[2:2][1:3]) from zdec_bhab;\n>> array_dims\n>> ------------\n>> [2:2][1:3]\n>> (1 row)\n>> \n>> So your function receives an array with first index starting at 2,\n>> which it's not expecting; its attempt to fetch element [1][1] is out\n>> of bounds and produces a NULL.\n>> \n>> Offhand this behavior seems like a misfeature: perhaps it'd be more\n>> sensible for the extracted slice to always have index lower bounds\n>> set to 1. But I'd like to see some discussion before changing it\n>> (and I don't plan to touch it before 7.2 release, in any case ;-)).\n>> \n>> Comments anyone?\n>> \n>> regards, tom lane\n", "msg_date": "Thu, 21 Feb 2002 23:14:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Array slice subscripts (was Re: [SQL] plpgsql function with more\n\tthan one array argument)" }, { "msg_contents": "Hello developpers,\n\nTom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is this a TODO item?\n> \n> I guess so, since no one seems to have objected to the proposed change.\n> It's a pretty trivial change; I'll take care of it.\n> \n> regards, tom lane\n\nI learned to live without it, but I wont be the last one \nwho tries to use such queries.\n\n\nbest regards & thanks a lot for your efforts,\n\nAndré\n\n\n> \n> > Tom Lane wrote:\n> >> Andre Holzner <Andre.Holzner@cern.ch> writes:\n> > Am I doing something wrong or is this a 'feature' ?\n> >>\n> >> What's biting you is that the array slice operator uses the provided\n> >> lower bounds in the resultant 
array. For example:\n> >>\n> >> regression=# select pcha from zdec_bhab;\n> >> pcha\n> >> ------------------------------------\n> >> {{11,12,13},{21,22,23},{31,32,33}}\n> >> (1 row)\n> >>\n> >> regression=# select array_dims(pcha) from zdec_bhab;\n> >> array_dims\n> >> ------------\n> >> [1:3][1:3]\n> >> (1 row)\n> >>\n> >> regression=# select pcha[2:2][1:3] from zdec_bhab;\n> >> pcha\n> >> --------------\n> >> {{21,22,23}}\n> >> (1 row)\n> >>\n> >> regression=# select array_dims(pcha[2:2][1:3]) from zdec_bhab;\n> >> array_dims\n> >> ------------\n> >> [2:2][1:3]\n> >> (1 row)\n> >>\n> >> So your function receives an array with first index starting at 2,\n> >> which it's not expecting; its attempt to fetch element [1][1] is out\n> >> of bounds and produces a NULL.\n> >>\n> >> Offhand this behavior seems like a misfeature: perhaps it'd be more\n> >> sensible for the extracted slice to always have index lower bounds\n> >> set to 1. But I'd like to see some discussion before changing it\n> >> (and I don't plan to touch it before 7.2 release, in any case ;-)).\n> >>\n> >> Comments anyone?\n> >>\n> >> regards, tom lane\n\n-- \n------------------+----------------------------------\nAndre Holzner | +41 22 76 76750 \nBureau 32 2-C13 | Building 32 \nCERN | Office 2-C13 \nCH-1211 Geneve 23 | http://wwweth.cern.ch/~holzner/\n", "msg_date": "Fri, 22 Feb 2002 10:17:12 +0100", "msg_from": "Andre Holzner <Andre.Holzner@cern.ch>", "msg_from_op": true, "msg_subject": "Re: Array slice subscripts (was Re: [SQL] plpgsql function " } ]
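Tom's diagnosis in the thread above -- the slice pcha[2:2][1:3] keeps 2 as its lower bound, so the function's fetch of element [1][1] is out of range and yields NULL -- and the proposed fix (reset slice lower bounds to 1) can be modelled in miniature. This is purely illustrative, 1-D for brevity (the thread's arrays are 2-D), and not the backend array code:

```python
class PgArray:
    """Tiny model of a 1-D PostgreSQL array value: the elements plus
    an explicit lower bound (arrays normally start at index 1)."""
    def __init__(self, elems, lower=1):
        self.elems = list(elems)
        self.lower = lower

    def dims(self):
        return "[%d:%d]" % (self.lower, self.lower + len(self.elems) - 1)

    def fetch(self, i):
        # Out-of-range subscripts yield NULL (None here), as in SQL.
        pos = i - self.lower
        return self.elems[pos] if 0 <= pos < len(self.elems) else None

    def slice(self, lo, hi, reset_lower=False):
        # Historical behaviour keeps 'lo' as the slice's lower bound;
        # the change proposed above resets it to 1.
        part = self.elems[lo - self.lower: hi - self.lower + 1]
        return PgArray(part, 1 if reset_lower else lo)

a = PgArray([11, 12, 13])               # dims [1:3]
old = a.slice(2, 2)                     # dims [2:2]: fetch(1) -> NULL
new = a.slice(2, 2, reset_lower=True)   # dims [1:1]: fetch(1) -> 12
```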
[ { "msg_contents": "Hi all,\n I am writing a php script to backup my postgres database through web interface, but my database is password required. When I do a pg_dump at the linux shell prompt, it will prompt for password in order to backup the database, does anyone know how to pass this password with the pg_dump command together so that I can do it using php system call? \n\n\n\n\n\n\n\n\nHi all,\n   I am writing a php script to backup my \npostgres database through web interface, but my database is password \nrequired.  When I do a pg_dump at the linux shell prompt, it will prompt \nfor password in order to backup the database, does anyone know how to pass this \npassword with the pg_dump command together so that I can do it using php system \ncall?", "msg_date": "Tue, 29 Jan 2002 11:01:58 +0800", "msg_from": "\"Lau NH\" <launh@perridot.com>", "msg_from_op": true, "msg_subject": "Backup database through web and php" }, { "msg_contents": "You could set the authentification method to \"trust\" for the host the script is being run from, so you won't get a password prompt at all (see the pg_hba.conf file).\n\n----- Original Message ----- \n From: Lau NH \n To: pgsql-admin@postgresql.org ; pgsql-hackers@postgresql.org \n Sent: Tuesday, January 29, 2002 05:01\n Subject: [ADMIN] Backup database through web and php\n\n\n Hi all,\n I am writing a php script to backup my postgres database through web interface, but my database is password required. When I do a pg_dump at the linux shell prompt, it will prompt for password in order to backup the database, does anyone know how to pass this password with the pg_dump command together so that I can do it using php system call? 
", "msg_date": "Tue, 29 Jan 2002 16:14:15 +0200", "msg_from": "\"Radu-Adrian Popescu\" <radu.popescu@aldratech.com>", "msg_from_op": false, "msg_subject": "Re: Backup database through web and php" }, { "msg_contents": "At 11:01 AM 1/29/02 +0800, Lau NH wrote:\n>Hi all,\n> I am writing a php script to backup my postgres database through web \n> interface, but my database is password required. When I do a pg_dump at \n> the linux shell prompt, it will prompt for password in order to backup \n> the database, does anyone know how to pass this password with the pg_dump \n> command together so that I can do it using php system call?\n>\nI was just reading up on this subject, and it suggests using PGPASSWORD in \nthe environment. I have not tried that yet, but think all the passwords \nmust be the same.\n\nIt is sometimes unclear to me when challenged for a password, logged in as \npostgres, whether or not they want the database password I am working on, \nor postgres'. 
So far, it appears to be asking for the postgres password.\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242", "msg_date": "Wed, 30 Jan 2002 10:31:29 -0700", "msg_from": "Naomi Walker <nwalker@eldocomp.com>", "msg_from_op": false, "msg_subject": "Re: Backup database through web and php" } ]
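Besides the pg_hba.conf "trust" route, the PGPASSWORD suggestion above works from any language that can set a child process's environment. A hedged sketch follows: PGHOST, PGUSER and PGPASSWORD are standard libpq environment variables, while the database, user and file names are placeholders for the example:

```python
import os
import subprocess

def backup_command(dbname, user, password, host="localhost"):
    """Build (argv, env) for a non-interactive pg_dump run.  The
    password travels in the child's environment via PGPASSWORD rather
    than on the command line.  Names are placeholders."""
    env = dict(os.environ)
    env.update({"PGHOST": host, "PGUSER": user, "PGPASSWORD": password})
    return ["pg_dump", dbname], env

def run_backup(dbname, user, password, outfile, host="localhost"):
    # e.g. invoked from a PHP system() call as a small helper script
    argv, env = backup_command(dbname, user, password, host)
    with open(outfile, "wb") as out:
        subprocess.run(argv, env=env, stdout=out, check=True)
```

Keeping the password out of the argument list matters because command-line arguments are typically visible to other local users via ps; the environment is less exposed, though still readable by the same account.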
[ { "msg_contents": "Hi hackers,\n\nIs there any way of logging all queries and their duration in postgres? I\nknow you can log queries - but I can't see any way of logging durations.\n\nIf not - would it be a worthwhile TODO?\n\nChris\n\n", "msg_date": "Tue, 29 Jan 2002 15:42:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "timing queries" }, { "msg_contents": "On Tue, 29 Jan 2002, Christopher Kings-Lynne wrote:\n\n> Hi hackers,\n> \n> Is there any way of logging all queries and their duration in postgres? I\n> know you can log queries - but I can't see any way of logging durations.\n\nI discussed auditing in PostgreSQL 7.3. This would probably be a useful\nbit of data to include in the audit trail.\n\nGavin\n\n", "msg_date": "Tue, 29 Jan 2002 21:14:18 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: timing queries" }, { "msg_contents": "Gavin Sherry wrote:\n> \n> On Tue, 29 Jan 2002, Christopher Kings-Lynne wrote:\n> \n> > Hi hackers,\n> >\n> > Is there any way of logging all queries and their duration in postgres? I\n> > know you can log queries - but I can't see any way of logging durations.\n> \n> I discussed auditing in PostgreSQL 7.3. 
This would probably be a useful\n> bit of data to include in the audit trail.\n\nI use the following code snippet to extract slow queries from postgresql\nlog\n\nIt could be improved trivially to include also the query text -\ncurrently it \njust shows the start time and backend number.\n\n#!/usr/bin/python\n\n\"\"\n020118.10:05:11.879 [27337] ProcessQuery\n020118.10:05:11.879 [27337] CommitTransactionCommand\n020118.10:05:11.882 [27337] StartTransactionCommand\n020118.10:05:11.882 [27337] query: SET DateStyle TO 'ISO';\n\"\"\"\n\nimport sys,string,time\n\ndef ts2time(ts):\n yr = int(ts[ 0: 2])\n mo = int(ts[ 2: 4])\n dy = int(ts[ 4: 6])\n hr = int(ts[ 7: 9])\n mi = int(ts[10:12])\n se = int(ts[13:15])\n hu = float(ts[15:])\n return time.mktime((yr,mo,dy,hr,mi,se,0,0,0)) + hu\n\n\nfile = sys.stdin\nlnr = 0\ntrx_dict = {}\n\nSLOW = 1 # longer than this seconds is slow\n\nwhile 1:\n pos = file.tell()\n line = file.readline()\n if not line: break\n lnr = lnr + 1\n try:\n ts, pid, inf = string.split(line)[:3]\n# print '-', ts, pid, inf\n ti = ts2time(ts)\n except:\n continue\n try:\n lastlnr, lastpos, lasttime, lastts, lastinf = trx_dict[pid]\n# print '*',lastlnr, lastpos, lasttime, lastts, lastinf\n# if lastinf == 'ProcessQuery\\n':\n# print ti-lasttime, lastlnr, lastpos, lastts, ts\n if((ti-lasttime) > SLOW) and (lastinf == 'ProcessQuery'):\n print ti-lasttime, lastlnr, lastpos, lastts, ts\n except:\n pass\n if inf=='exit(0)\\n':\n del trx_dict[pid]\n else:\n trx_dict[pid] = (lnr,pos,ti,ts,inf)\n\nfor key in trx_dict.keys():\n print trx_dict[key]\n", "msg_date": "Tue, 29 Jan 2002 14:34:39 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: timing queries" } ]
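Hannu's script works from the server log after the fact; for completeness, elapsed time can also be measured on the client side. A minimal sketch (the `timed` helper is invented for illustration; in practice `fn` would be something like a DB-API `cursor.execute`):

```python
import time

def timed(fn, *args, **kwargs):
    """Run a query callable and return (result, elapsed_seconds).

    A client-side stand-in for the duration logging the thread asks
    about; it times whatever callable it is given.
    """
    start = time.time()
    result = fn(*args, **kwargs)
    return result, time.time() - start

# Example with an arbitrary callable standing in for a query:
result, elapsed = timed(sum, range(1000))
```

This measures round-trip time as seen by the client, which includes network latency, so it is an upper bound on the server-side duration the original question is after.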
[ { "msg_contents": "I've noticed that in postgresql 7.2b4 and in some contrib(I've seen\ndbf2pgsql) there are inline variables, I think this is not ANSI code.\nMIPS PRO compilers did not work with inline unless u use API specific\ntools\nhope it helps", "msg_date": "Tue, 29 Jan 2002 11:58:46 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "inline is not ANSI C" }, { "msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n> I've noticed that in postgresql 7.2b4 and in some contrib(I've seen\n> dbf2pgsql) there are inline variables, I think this is not ANSI code.\n> MIPS PRO compilers did not work with inline unless u use API specific\n> tools\n\nconfigure arranges to #define inline as empty if your compiler doesn't\ntake it. Is that not working as expected in your case?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 29 Jan 2002 10:36:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline is not ANSI C " }, { "msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n\n> I've noticed that in postgresql 7.2b4 and in some contrib(I've seen\n> dbf2pgsql) there are inline variables, I think this is not ANSI\n> code.\n\nStandards evolve.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "29 Jan 2002 10:38:43 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: inline is not ANSI C" }, { "msg_contents": "Tom Lane wrote:\n\n> Luis Amigo <lamigo@atc.unican.es> writes:\n> > I've noticed that in postgresql 7.2b4 and in some contrib(I've seen\n> > dbf2pgsql) there are inline variables, I think this is not ANSI code.\n> > MIPS PRO compilers did not work with inline unless u use API specific\n> > tools\n>\n> configure arranges to #define inline as empty if your compiler doesn't\n> take it. 
Is that not working as expected in your case?\n>\n> regards, tom lane\n\nNo it is not working as expected, I'd to remove them at my own.", "msg_date": "Wed, 30 Jan 2002 09:50:24 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: inline is not ANSI C" }, { "msg_contents": "Trond Eivind Glomsrød wrote:\n\n> Luis Amigo <lamigo@atc.unican.es> writes:\n>\n> > I've noticed that in postgresql 7.2b4 and in some contrib(I've seen\n> > dbf2pgsql) there are inline variables, I think this is not ANSI\n> > code.\n>\n> Standards evolve.\n>\n> --\n> Trond Eivind Glomsrød\n> Red Hat, Inc\n\nI think u can not make a unix standard open source dbase if u don't\nrespect standards, OS must evolve, not applications", "msg_date": "Wed, 30 Jan 2002 09:52:01 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: inline is not ANSI C" }, { "msg_contents": "On Wed, 30 Jan 2002, Luis Amigo wrote:\n\n> Trond Eivind Glomsrød wrote:\n> \n> > Luis Amigo <lamigo@atc.unican.es> writes:\n> >\n> > > I've noticed that in postgresql 7.2b4 and in some contrib(I've seen\n> > > dbf2pgsql) there are inline variables, I think this is not ANSI\n> > > code.\n> >\n> > Standards evolve.\n> \n> I think u can not make a unix standard open source dbase if u don't\n> respect standards, OS must evolve, not applications\n\nThe recent C standards have inline.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n\n", "msg_date": "Wed, 30 Jan 2002 08:35:50 -0500 (EST)", "msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <teg@redhat.com>", "msg_from_op": false, "msg_subject": "Re: inline is not ANSI C" }, { "msg_contents": "> > > I've noticed that in postgresql 7.2b4 and in some contrib(I've seen\n> > > dbf2pgsql) there are inline variables, I think this is not ANSI\n> > > code.\n> > Standards evolve.\n> I think u can not make a unix standard open source dbase if u don't\n> respect standards, OS must evolve, not applications\n\nPer Tom 
Lane's response, there are provisions in the configuration step\nto define \"inline\" as an empty macro for the preprocessor. So it just\ngoes away from the source code for compilers which do not support it.\n\nTom inquired whether that configuration step seems to not produce the\nright result for you (it sounds like it doesn't). Actually removing\n\"inline\" from the source code is not required.\n\nTry running configure again and check the results. Let us know what you\nfind.\n\n - Thomas\n", "msg_date": "Wed, 30 Jan 2002 14:05:35 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: inline is not ANSI C" }, { "msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n>> configure arranges to #define inline as empty if your compiler doesn't\n>> take it. Is that not working as expected in your case?\n\n> No it is not working as expected, I'd to remove them at my own.\n\nWould you look at configure's test for this and find out why it fails\nto detect that inline doesn't work? As the only one with access to\nthe problem compiler, you cannot expect anyone else to fix this;\nit's your responsibility to improve the configure test so it gets the\nright answer on your platform.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jan 2002 10:43:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline is not ANSI C " }, { "msg_contents": "Thomas Lockhart wrote:\n\n> > > > I've noticed that in postgresql 7.2b4 and in some contrib(I've seen\n> > > > dbf2pgsql) there are inline variables, I think this is not ANSI\n> > > > code.\n> > > Standards evolve.\n> > I think u can not make a unix standard open source dbase if u don't\n> > respect standards, OS must evolve, not applications\n>\n> Per Tom Lane's response, there are provisions in the configuration step\n> to define \"inline\" as an empty macro for the preprocessor. 
So it just\n> goes away from the source code for compilers which do not support it.\n>\n> Tom inquired whether that configuration step seems to not produce the\n> right result for you (it sounds like it doesn't). Actually removing\n> \"inline\" from the source code is not required.\n>\n> Try running configure again and check the results. Let us know what you\n> find.\n>\n> - Thomas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\nby now in 7.2b4 configure is not working as expected, I will try b5 when I\ncan.\nIt may be because MIPS Pro compilers have inlining tools, but in cannot be\nused as in gcc inline is an special type, can not be used as integer inline\n...\nRegards", "msg_date": "Thu, 31 Jan 2002 10:41:05 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: inline is not ANSI C" } ]
[ { "msg_contents": "I'm not an OpenServer guy. I'll forward to the PG Hackers List.\n\nOn Tue, 29 Jan 2002 09:01 PST\nrr@caldera.com wrote:\n\n> Hi Guys,\n> \n> I am having trouble getting PostgreSQL 7.1.3 working with UNIX\n> domain sockets on OpenServer 5. I can start the postmaster with\n> the \"-i\" option to enable TCP/IP connections successfully.\n> However, the default UNIX domain socket connections fail with\n> the error message:\n> \n> \"psql: connectDBStart() -- connect() failed: Unknown error\n> Is the postmaster running locally\n> and accepting connections on Unix socket '/tmp/.s.PGSQL.5432'?\n> createdb: database creation failed\"\n> \n> The named pipe referred to above has been created and the postmaster\n> should be accepting connetions on it:\n> \n> -rw------- 1 postgres postgres 27 Jan 29 08:41 .s.PGSQL.5432.lock\n> prwxrwxrwx 1 postgres postgres 0 Jan 29 08:41 .s.PGSQL.5432\n> \n> Have either of you heard or seen anything about PostgreSQL, UNIX\n> domain sockets and OpenServer ? I appreciate any assistance or\n> condolences you might provide.\n> -rr-\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 29 Jan 2002 11:12:58 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 7.1.3 on OpenServer 5" } ]
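One detail worth checking in the report above: the message calls `.s.PGSQL.5432` a "named pipe", and the leading `p` in its `prwxrwxrwx` listing does in fact mark a FIFO, whereas a live Unix-domain socket would show file type `s`. A small sketch for verifying what was actually created (the helper name is invented; the path to probe would be the reported `/tmp/.s.PGSQL.5432`):

```python
import os
import stat

def file_kind(path):
    """Classify a filesystem entry by its stat type bits.

    A Unix-domain socket stats as S_ISSOCK (ls type 's'); a FIFO/named
    pipe stats as S_ISFIFO (ls type 'p').  If the postmaster's socket
    file is really a FIFO, connect() on it is expected to fail.
    """
    mode = os.stat(path).st_mode
    if stat.S_ISSOCK(mode):
        return "socket"
    if stat.S_ISFIFO(mode):
        return "fifo"
    return "other"
```

Running this against the file in `/tmp` would distinguish a genuinely broken socket from a client-side problem.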
[ { "msg_contents": "While fooling around some more with profiling of simple queries,\nI noticed that in some scenarios there are lots of failing lookups in\nthe syscaches. This happens, for example, when trying to resolve\nambiguous operators or functions: the parser will do a lot of\ncan_coerce_type calls, each of which probes using the syscache to see\nif there is a matching type-coercion function. Often there won't be.\n\nNow, while the syscaches are very good at dealing with repetitive\nsuccessful lookups, they suck on repetitive unsuccessful ones: each\nprobe will do a whole new indexscan search on the underlying catalog.\nIt's not difficult to get into situations where the bulk of the catalog\nsearches are due to repetitive failing syscache probes. For example,\nI added some simple counters to current sources, and got these results\nfor a 100-transaction run of pgbench:\n\nCatcache totals: 43 tup, 14519 srch, 13976 hits, 43 loads, 500 not found\n\nThat says the caches had 43 tuples by the end of the run, and were\nsearched 14519 times, of which 13976 searches were satisfied by an\nexisting entry and another 43 by a newly-loaded entry. The remaining\n500 searches failed. The catcaches were therefore responsible for\n543 catalog indexscans. The 500 failed searches almost certainly\nconsisted of 100 sets of probes for the same five nonexistent tuples,\ngiven the repetitive structure of the queries. As we throw more\nsimilar queries at the backend, no new cache entries will need to be\nloaded ... but every failing search will have to be done again.\n\nI suspect repetitive query structure is a common feature of most\napplications, so these numbers suggest strongly that the catcaches\nought to cache negative results as well as positive ones.\n\nAFAICS there's no logical difficulty in doing this: we simply make\na catcache entry that contains the probed-for key values but is\nmarked \"no one home at this address\". 
If a probe hits one of these\nthings, it can return NULL without a trip to the catalog. If someone\nlater comes along and creates a tuple that matches the key value,\nthe negative-result cache entry will be invalidated in the usual way\n(this should work because insertion and update are treated identically\nin the caches).\n\nNegative and positive cache entries should compete on an equal basis for\nspace in the cache, since they are equally expensive to recreate.\n\nCan anyone spot a flaw in this reasoning?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Jan 2002 19:32:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Syscaches should store negative entries, too" }, { "msg_contents": "... Interesting...\n\n> Negative and positive cache entries should compete on an equal basis for\n> space in the cache, since they are equally expensive to recreate.\n> Can anyone spot a flaw in this reasoning?\n\nMaybe not a flaw, but an observation: a backend running for an extended\nperiod of time may tend to accumulate a large number of negative cache\nentries and in principle they can grow indefinitely. The positive cache\nentries have an upper limit of the number of actual tuples in the\ncatalog itself (or perhaps something smaller).\n\nPresumably there is an upper limit to the physical cache size. Would\nretaining negative entries tend to cause the cache to cycle or to grow\nwithout bounds if there is no such limit? Or does it seem that things\nwould reach a reasonable steady state no matter what the query topology\ntends to be?\n\n - Thomas\n", "msg_date": "Wed, 30 Jan 2002 02:25:17 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Syscaches should store negative entries, too" }, { "msg_contents": "Thomas Lockhart writes:\n\n> Presumably there is an upper limit to the physical cache size. 
Would\n> retaining negative entries tend to cause the cache to cycle or to grow\n> without bounds if there is no such limit? Or does it seem that things\n> would reach a reasonable steady state no matter what the query topology\n> tends to be?\n\nI think the key would be that you cache N-1 failed function lookups only\nif you are actually successful finding a useful function in the Nth\nattempt. Then, if your logical cache size is C you would have quick\naccess to C/N function resolution paths, whereas right now the cache is\nreally quite useless for function resolution that requires unsuccessful\nlookups along the way. Note that if your queries are \"well written\" in\nthat they don't require any unsuccessful lookups, the cache behaviour\nwouldn't change. Since N is usually small in reasonable applications you\ncould also simply increase your cache size by a factor of N to compensate\nfor whatever you might be afraid of.\n\nPerhaps Tom Lane was also thinking ahead in terms of schema lookups. I\nimagine this negative cache scheme would really be critical there.\n\nHowever, what you probably wouldn't want to do is cache negative lookups\nthat don't end up producing results or are not part of a search chain at\nall. Those are user errors and not likely to be repeated and do not need\nto be optimized.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 29 Jan 2002 22:55:33 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Syscaches should store negative entries, too" }, { "msg_contents": "On Wed, 2002-01-30 at 10:56, Tom Lane wrote:\n>\n> FWIW, I believe that in typical scenarios there *is* no competition as\n> the syscache never gets full enough to have anything age out. In the\n> regression tests my little stats addition shows no run with more than\n> 266 cache entries accumulated; the average end-of-run cache population\n> is 75 entries. 
Syscache is currently configured to allow 5000 entries\n> before it starts to drop stuff.\n\nAre there _any_ tests where it does start to drop stuff ?\n\nIn other words - is the stuff-dropping part tested reasonably recently\n(or at all) ?\n\n> The regression tests are probably not representative, but if anything\n> I'd expect them to hit a wider variety of tables on an average run than\n> typical applications do.\n> \n> Bottom line: it's not apparent to me why the cache policy should be\n> anything but straight LRU across both positive and negative entries.\n\nIn other words we should cache Frequently Asked Questions and not\nFrequently Found Answers ;)\n\n-------------\nHannu\n\n", "msg_date": "30 Jan 2002 09:19:45 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: Syscaches should store negative entries, too" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> However, what you probably wouldn't want to do is cache negative lookups\n> that don't end up producing results or are not part of a search chain at\n> all. Those are user errors and not likely to be repeated and do not need\n> to be optimized.\n\nI think the key point here is that cache entries that aren't repeatedly\nreferenced won't survive. I don't see why it would matter whether they\nare negative or positive. If you make one reference to a table (and\nthen never touch it again in the session), is the cache supposed to\npresciently know that the resulting positive entries are wasted? No,\nit ages them out on the same schedule as anything else. If entering\nthose entries caused some other stuff to drop out, well it was stuff\nthat hadn't been referenced in quite a while anyhow. 
AFAICS all this\nreasoning holds equally well for negative entries.\n\nIt is true that the system generally makes many more successful searches\nthan unsuccessful ones --- but that just means that positive entries\nwill be more able to survive in the cache, if the cache gets full enough\nfor there to be competition.\n\nFWIW, I believe that in typical scenarios there *is* no competition as\nthe syscache never gets full enough to have anything age out. In the\nregression tests my little stats addition shows no run with more than\n266 cache entries accumulated; the average end-of-run cache population\nis 75 entries. Syscache is currently configured to allow 5000 entries\nbefore it starts to drop stuff.\n\nThe regression tests are probably not representative, but if anything\nI'd expect them to hit a wider variety of tables on an average run than\ntypical applications do.\n\nBottom line: it's not apparent to me why the cache policy should be\nanything but straight LRU across both positive and negative entries.\n\n\t\t\tregards, tom lane\n\nPS: Just in case anyone wants to see numbers, here are the end-of-run\nstats for each of the regression tests.\n\nDEBUG: Catcache totals: 14 tup, 14 srch, 0 hits, 14 loads, 0 not found\nDEBUG: Catcache totals: 15 tup, 16 srch, 1 hits, 15 loads, 0 not found\nDEBUG: Catcache totals: 71 tup, 130 srch, 59 hits, 71 loads, 0 not found\nDEBUG: Catcache totals: 18 tup, 34 srch, 12 hits, 18 loads, 4 not found\nDEBUG: Catcache totals: 27 tup, 61 srch, 27 hits, 27 loads, 7 not found\nDEBUG: Catcache totals: 3 tup, 4 srch, 0 hits, 3 loads, 1 not found\nDEBUG: Catcache totals: 3 tup, 4 srch, 0 hits, 3 loads, 1 not found\nDEBUG: Catcache totals: 44 tup, 448 srch, 329 hits, 66 loads, 53 not found\nDEBUG: Catcache totals: 52 tup, 271 srch, 182 hits, 64 loads, 25 not found\nDEBUG: Catcache totals: 56 tup, 472 srch, 357 hits, 67 loads, 48 not found\nDEBUG: Catcache totals: 50 tup, 307 srch, 222 hits, 62 loads, 23 not found\nDEBUG: Catcache totals: 39 
tup, 106 srch, 52 hits, 46 loads, 8 not found\nDEBUG: Catcache totals: 76 tup, 441 srch, 324 hits, 77 loads, 40 not found\nDEBUG: Catcache totals: 100 tup, 597 srch, 456 hits, 101 loads, 40 not found\nDEBUG: Catcache totals: 59 tup, 1352 srch, 1139 hits, 60 loads, 153 not found\nDEBUG: Catcache totals: 40 tup, 202 srch, 132 hits, 51 loads, 19 not found\nDEBUG: Catcache totals: 51 tup, 340 srch, 255 hits, 52 loads, 33 not found\nDEBUG: Catcache totals: 67 tup, 590 srch, 459 hits, 68 loads, 63 not found\nDEBUG: Catcache totals: 103 tup, 1536 srch, 1187 hits, 171 loads, 178 not found\nDEBUG: Catcache totals: 143 tup, 8319 srch, 7733 hits, 241 loads, 345 not found\nDEBUG: Catcache totals: 89 tup, 1351 srch, 1161 hits, 89 loads, 101 not found\nDEBUG: Catcache totals: 121 tup, 2279 srch, 1866 hits, 167 loads, 246 not found\nDEBUG: Catcache totals: 96 tup, 1223 srch, 1009 hits, 97 loads, 117 not found\nDEBUG: Catcache totals: 77 tup, 391 srch, 274 hits, 78 loads, 39 not found\nDEBUG: Catcache totals: 61 tup, 367 srch, 271 hits, 62 loads, 34 not found\nDEBUG: Catcache totals: 41 tup, 175 srch, 107 hits, 48 loads, 20 not found\nDEBUG: Catcache totals: 60 tup, 290 srch, 206 hits, 67 loads, 17 not found\nDEBUG: Catcache totals: 88 tup, 1050 srch, 845 hits, 89 loads, 116 not found\nDEBUG: Catcache totals: 37 tup, 226 srch, 175 hits, 38 loads, 13 not found\nDEBUG: Catcache totals: 56 tup, 374 srch, 274 hits, 57 loads, 43 not found\nDEBUG: Catcache totals: 55 tup, 383 srch, 280 hits, 56 loads, 47 not found\nDEBUG: Catcache totals: 67 tup, 2082 srch, 1850 hits, 68 loads, 164 not found\nDEBUG: Catcache totals: 67 tup, 2082 srch, 1850 hits, 68 loads, 164 not found\nDEBUG: Catcache totals: 44 tup, 272 srch, 205 hits, 45 loads, 22 not found\nDEBUG: Catcache totals: 65 tup, 593 srch, 468 hits, 66 loads, 59 not found\nDEBUG: Catcache totals: 40 tup, 207 srch, 146 hits, 41 loads, 20 not found\nDEBUG: Catcache totals: 54 tup, 325 srch, 241 hits, 55 loads, 29 not found\nDEBUG: Catcache 
totals: 127 tup, 2494 srch, 1887 hits, 142 loads, 465 not found\nDEBUG: Catcache totals: 22 tup, 51 srch, 25 hits, 22 loads, 4 not found\nDEBUG: Catcache totals: 224 tup, 16980 srch, 13307 hits, 224 loads, 3449 not found\nDEBUG: Catcache totals: 152 tup, 3879 srch, 2974 hits, 152 loads, 753 not found\nDEBUG: Catcache totals: 217 tup, 11008 srch, 8561 hits, 217 loads, 2230 not found\nDEBUG: Catcache totals: 151 tup, 1206 srch, 912 hits, 151 loads, 143 not found\nDEBUG: Catcache totals: 206 tup, 2496 srch, 2028 hits, 217 loads, 251 not found\nDEBUG: Catcache totals: 23 tup, 64 srch, 29 hits, 23 loads, 12 not found\nDEBUG: Catcache totals: 43 tup, 140 srch, 61 hits, 55 loads, 24 not found\nDEBUG: Catcache totals: 55 tup, 1011 srch, 723 hits, 201 loads, 87 not found\nDEBUG: Catcache totals: 46 tup, 131 srch, 66 hits, 46 loads, 19 not found\nDEBUG: Catcache totals: 31 tup, 283 srch, 247 hits, 31 loads, 5 not found\nDEBUG: Catcache totals: 93 tup, 2200 srch, 1552 hits, 430 loads, 218 not found\nDEBUG: Catcache totals: 76 tup, 1503 srch, 1048 hits, 287 loads, 168 not found\nDEBUG: Catcache totals: 95 tup, 1509 srch, 1254 hits, 127 loads, 128 not found\nDEBUG: Catcache totals: 28 tup, 54 srch, 18 hits, 28 loads, 8 not found\nDEBUG: Catcache totals: 24 tup, 44 srch, 16 hits, 24 loads, 4 not found\nDEBUG: Catcache totals: 112 tup, 561 srch, 239 hits, 288 loads, 34 not found\nDEBUG: Catcache totals: 67 tup, 3491 srch, 2632 hits, 102 loads, 757 not found\nDEBUG: Catcache totals: 46 tup, 111 srch, 52 hits, 49 loads, 10 not found\nDEBUG: Catcache totals: 65 tup, 845 srch, 411 hits, 425 loads, 9 not found\nDEBUG: Catcache totals: 93 tup, 217 srch, 108 hits, 94 loads, 15 not found\nDEBUG: Catcache totals: 95 tup, 879 srch, 689 hits, 97 loads, 93 not found\nDEBUG: Catcache totals: 64 tup, 272 srch, 129 hits, 116 loads, 27 not found\nDEBUG: Catcache totals: 43 tup, 151 srch, 94 hits, 43 loads, 14 not found\nDEBUG: Catcache totals: 34 tup, 102 srch, 58 hits, 34 loads, 10 not 
found\nDEBUG: Catcache totals: 60 tup, 1015 srch, 801 hits, 96 loads, 118 not found\nDEBUG: Catcache totals: 55 tup, 342 srch, 249 hits, 69 loads, 24 not found\nDEBUG: Catcache totals: 127 tup, 1174 srch, 903 hits, 128 loads, 143 not found\nDEBUG: Catcache totals: 73 tup, 1240 srch, 1052 hits, 73 loads, 115 not found\nDEBUG: Catcache totals: 119 tup, 2773 srch, 2271 hits, 143 loads, 359 not found\nDEBUG: Catcache totals: 43 tup, 1527 srch, 1183 hits, 89 loads, 255 not found\nDEBUG: Catcache totals: 82 tup, 702 srch, 586 hits, 82 loads, 34 not found\nDEBUG: Catcache totals: 42 tup, 163 srch, 96 hits, 44 loads, 23 not found\nDEBUG: Catcache totals: 53 tup, 225 srch, 157 hits, 54 loads, 14 not found\nDEBUG: Catcache totals: 34 tup, 2018 srch, 1612 hits, 34 loads, 372 not found\nDEBUG: Catcache totals: 78 tup, 750 srch, 558 hits, 85 loads, 107 not found\nDEBUG: Catcache totals: 78 tup, 364 srch, 222 hits, 78 loads, 64 not found\nDEBUG: Catcache totals: 55 tup, 612 srch, 465 hits, 55 loads, 92 not found\nDEBUG: Catcache totals: 151 tup, 2521 srch, 1958 hits, 179 loads, 384 not found\nDEBUG: Catcache totals: 44 tup, 1680 srch, 1434 hits, 44 loads, 202 not found\nDEBUG: Catcache totals: 27 tup, 159 srch, 32 hits, 111 loads, 16 not found\nDEBUG: Catcache totals: 167 tup, 5601 srch, 4777 hits, 392 loads, 432 not found\nDEBUG: Catcache totals: 64 tup, 191 srch, 106 hits, 64 loads, 21 not found\nDEBUG: Catcache totals: 180 tup, 5935 srch, 4106 hits, 1077 loads, 752 not found\nDEBUG: Catcache totals: 61 tup, 1128 srch, 854 hits, 61 loads, 213 not found\nDEBUG: Catcache totals: 266 tup, 9834 srch, 8013 hits, 617 loads, 1204 not found\nDEBUG: Catcache totals: 160 tup, 16136 srch, 12750 hits, 1341 loads, 2045 not found\nDEBUG: Catcache totals: 46 tup, 437 srch, 333 hits, 46 loads, 58 not found\n\nThe grand totals are 137123 searches, 107792 hits, 11055 successful\nloads and 18276 not-found searches... 
of course these numbers do not\nsay how many of the not-found searches would be eliminated by negative\ncache entries, but it sure looks worth trying.\n", "msg_date": "Wed, 30 Jan 2002 00:56:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Syscaches should store negative entries, too " }, { "msg_contents": "Hannu Krosing <hannu@krosing.net> writes:\n> Are there _any_ tests where it does start to drop stuff ?\n\n> In other words - is the stuff-dropping part tested reasonably recently\n> (or at all) ?\n\nGood point. The drop mechanism is exercised well enough as driven by\nshared-cache-invalidation cases, but the case where it's driven by cache\nentry count isn't getting tested.\n\n> In other words we should cache Frequently Asked Questions and not\n> Frequently Found Answers ;)\n\nRight ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jan 2002 10:46:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Syscaches should store negative entries, too " }, { "msg_contents": "On Thu, 2002-01-31 at 00:00, Tom Lane wrote:\n> \n> To make negative cache entries work correctly, we'd have to issue\n> cross-backend SI messages for inserts into the system catalogs, not\n> only for updates and deletes. This would mean more SI traffic than\n> there is now. I think it'd still be a win, but the case for negative\n> cache entries isn't quite as airtight as I thought. 
There could be\n> scenarios where the extra SI traffic outweighs the savings from avoiding\n> failing catalog searches.\n> \n> Comments anyone?\n\nThe standard one - if not sure, make it an option ;)\n\n------------\nHannu\n\n", "msg_date": "30 Jan 2002 23:40:17 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: Syscaches should store negative entries, too" }, { "msg_contents": "I wrote:\n> AFAICS there's no logical difficulty in doing this: we simply make\n> a catcache entry that contains the probed-for key values but is\n> marked \"no one home at this address\". If a probe hits one of these\n> things, it can return NULL without a trip to the catalog. If someone\n> later comes along and creates a tuple that matches the key value,\n> the negative-result cache entry will be invalidated in the usual way\n> (this should work because insertion and update are treated identically\n> in the caches).\n\nThat last claim is false, unfortunately. Shared cache invalidation\ntreats inserts differently from updates and deletes (see the comments\nat the top of src/backend/utils/cache/inval.c).\n\nTo make negative cache entries work correctly, we'd have to issue\ncross-backend SI messages for inserts into the system catalogs, not\nonly for updates and deletes. This would mean more SI traffic than\nthere is now. I think it'd still be a win, but the case for negative\ncache entries isn't quite as airtight as I thought. 
There could be\nscenarios where the extra SI traffic outweighs the savings from avoiding\nfailing catalog searches.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jan 2002 14:00:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Syscaches should store negative entries, too " }, { "msg_contents": "Hannu Krosing wrote:\n> On Thu, 2002-01-31 at 00:00, Tom Lane wrote:\n> > \n> > To make negative cache entries work correctly, we'd have to issue\n> > cross-backend SI messages for inserts into the system catalogs, not\n> > only for updates and deletes. This would mean more SI traffic than\n> > there is now. I think it'd still be a win, but the case for negative\n> > cache entries isn't quite as airtight as I thought. There could be\n> > scenarios where the extra SI traffic outweighs the savings from avoiding\n> > failing catalog searches.\n> > \n> > Comments anyone?\n> \n> The standard one - if not sure, make it an option ;)\n\nActually, that is not our standard comment. We want to give people a\nlimited set of meaningful options. If then want billions of options,\nthey should use Oracle. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Feb 2002 14:32:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Syscaches should store negative entries, too" }, { "msg_contents": "> > Actually, that is not our standard comment. We want to give people a\n> > limited set of meaningful options. If then want billions of options,\n> > they should use Oracle. 
:-)\n> \n> In that case we have to decide for the user for which case we optimize\n> and give user suboptimal performanse for the _other_ case.\n\nIf we can't decide, and it is significant, we have to give any option,\nbut only if both are true.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Feb 2002 17:57:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Syscaches should store negative entries, too" }, { "msg_contents": "Hannu Krosing <hannu@krosing.net> writes:\n> In that case we have to decide for the user for which case we optimize\n> and give user suboptimal performanse for the _other_ case.\n\nIf we're gonna do it, I'd just do it; that code is hairy enough already\nwithout trying to support multiple, fundamentally different operating\nmodes.\n\nAn advantage of switching is that the insert, update, and delete cases\nwould all become truly alike to inval.c, thus making that module simpler\ninstea of more complex. I find this attractive.\n\nOne thing I realized after my last message on the subject is that the\ncross-backend SI messages do not carry the actual key values of the\ntuple being inserted/deleted/updated. What they do carry is a hash\nindex, which is presently used only to speed up the search for matching\ncache entries to be purged. What we'd have to do with negative cache\nentries is (for an insert) purge *all* negative cache entries on that\nhash chain, since we couldn't tell exactly which if any correspond to\nthe keys of the inserted tuple. I don't think this is a big problem;\nhopefully each individual hash chain is short and so not very many\nnegative entries would become collateral casualties. 
But it is another\npotential source of inefficiency.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 18:11:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Syscaches should store negative entries, too " } ]
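The proposal in this thread — cache failed lookups as first-class entries, let them compete with positive entries under plain LRU, and purge them when a matching tuple is inserted — can be modeled in a few lines. This is only an illustrative toy, not PostgreSQL's catcache code; the class and function names are invented, and `lookup_fn` stands in for the expensive catalog index scan:

```python
from collections import OrderedDict

_NOT_FOUND = object()  # sentinel playing the role of a negative entry

class CatCacheSketch:
    """Toy cache with negative entries and straight LRU."""

    def __init__(self, lookup_fn, maxsize=8):
        self.lookup_fn = lookup_fn     # stand-in for the catalog scan
        self.maxsize = maxsize
        self.entries = OrderedDict()   # LRU order, positive and negative
        self.scans = 0                 # catalog probes actually performed

    def search(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)        # LRU touch
            val = self.entries[key]
            return None if val is _NOT_FOUND else val
        self.scans += 1
        val = self.lookup_fn(key)                # the expensive scan
        # Cache the miss too: repeated failing probes become cheap.
        self.entries[key] = _NOT_FOUND if val is None else val
        if len(self.entries) > self.maxsize:
            self.entries.popitem(last=False)     # evict LRU, either kind
        return val

    def invalidate_insert(self, key):
        """On a catalog insert, drop any stale negative entry.

        The real mechanism described above purges by hash chain rather
        than exact key, since SI messages carry only a hash value.
        """
        if self.entries.get(key) is _NOT_FOUND:
            del self.entries[key]
```

A short run shows the effect: the second probe for a missing key is served from the negative entry, and an insert followed by invalidation makes the key visible again.

```python
catalog = {"a": 1}
cache = CatCacheSketch(catalog.get)
cache.search("b"); cache.search("b")   # one scan, not two
catalog["b"] = 2
cache.invalidate_insert("b")
cache.search("b")                      # fresh scan finds the new tuple
```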
[ { "msg_contents": "The other day, I did a test build of \"everything\", which involved\nspecifying 17 command-line arguments to configure. This is probably the\nreason why some fringe features are not tested very often: the list of\noptions is pretty overwhelming.\n\nI remembered that in the old days PHP had an interactive setup script,\nthat asked you mainly yes/no questions about each feature you wanted, and\nwould run \"configure\" based on the answers it got. This sort of thing\nmight help our situation, because instead of having to specify all the\noptions, users can just keep pressing Y all the time. Of course it could\nalso be considered as a general improvement in user-friendliness.\n\nNow I just realized that the latest PHP source code doesn't have this\nthing anymore, so maybe they didn't like it? What do you think?\n\nAs far as maintaining something like this goes, I think I have an idea\nthat would basically require zero effort, so at least that shouldn't be\ntoo much of a concern.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 29 Jan 2002 19:33:29 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "A simpler way to configure the source code?" }, { "msg_contents": "Peter Eisentraut wrote:\n\n\n> Now I just realized that the latest PHP source code doesn't have this\n> thing anymore, so maybe they didn't like it? What do you think?\n\n\nThe linux kernel has something like this, maybe PHP was using it and \nmaybe they dropped it when they went from the GPL to the more liberal \nlicense they're using now?\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Tue, 29 Jan 2002 16:43:01 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?" 
}, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The other day, I did a test build of \"everything\", which involved\n> specifying 17 command-line arguments to configure. This is probably the\n> reason why some fringe features are not tested very often: the list of\n> options is pretty overwhelming.\n\n\"--with-everything\"? Or more usefully, \"--with-everything --without-perl\"\nif, say, you don't have Perl installed.\n\nOr you could just reverse the defaults on all the options, but that'd\nlikely provoke a revolt.\n\n> I remembered that in the old days PHP had an interactive setup script,\n\nPerl's got one of those too, and I hate it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Jan 2002 19:44:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code? " }, { "msg_contents": "> > I remembered that in the old days PHP had an interactive setup script,\n>\n> Perl's got one of those too, and I hate it ...\n\nAs do I.. I disliked the PHP one too.\n\nGetting all the options setup initially is a bit daunting but once you have\nyour options, you can save it and cut/paste it in the future (which is what\nI do, I have an entire doc dedicated to configure options and such of the\nservers I configure)... Still, I'm not sure it's a huge issue as normal\nusers probably don't recompile very often...\n\nI don't think switching to an interactive script would be good but giving\npeople the option will make some users happy I'm sure..\n\n-Mitchell\n\n\n", "msg_date": "Tue, 29 Jan 2002 18:33:22 -0700", "msg_from": "\"Mitch Vincent\" <mitch@doot.org>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code? 
" }, { "msg_contents": "Tom Lane wrote:\n> > I remembered that in the old days PHP had an interactive setup script,\n> \n> Perl's got one of those too, and I hate it ...\n\nYes, it sucks.\n\nLinux for instance, offers three (up to my knowledge) ways of handling setup.\nA) A yes/no array of questios\nB) A text mode, menuized approach\nC) A graphic mode, with buttons which open menus.\n\nI find B) especially friendly.\n\nRegards,\nHaroldo.\n", "msg_date": "Tue, 29 Jan 2002 23:41:29 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> The other day, I did a test build of \"everything\", which involved\n> specifying 17 command-line arguments to configure. This is probably the\n> reason why some fringe features are not tested very often: the list of\n> options is pretty overwhelming.\n> \n> I remembered that in the old days PHP had an interactive setup script,\n> that asked you mainly yes/no questions about each feature you wanted, and\n> would run \"configure\" based on the answers it got. This sort of thing\n> might help our situation, because instead of having to specify all the\n> options, users can just keep pressing Y all the time. Of course it could\n> also be considered as a general improvement in user-friendliness.\n> \n> Now I just realized that the latest PHP source code doesn't have this\n> thing anymore, so maybe they didn't like it? What do you think?\n> \n> As far as maintaining something like this goes, I think I have an idea\n> that would basically require zero effort, so at least that shouldn't be\n> too much of a concern.\n\nI feel having the \"fringe features\" more tested is a great idea, and\nwill lead to a better PostgreSQL, and therefore happier users. :) A\nfriendly, and decently-easy-to-user interactive setup (Linux\n\"menuconfig\" kernel style?) 
would be beneficial.\n\nIf it doesn't add significant overhead to maintenance, and is very\nportable, it sounds to me like a good idea.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> --\n> Peter Eisentraut peter_e@gmx.net\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 31 Jan 2002 01:19:03 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> The other day, I did a test build of \"everything\", which involved\n> specifying 17 command-line arguments to configure. This is probably the\n> reason why some fringe features are not tested very often: the list of\n> options is pretty overwhelming.\n> \n> I remembered that in the old days PHP had an interactive setup script,\n> that asked you mainly yes/no questions about each feature you wanted, and\n> would run \"configure\" based on the answers it got. This sort of thing\n> might help our situation, because instead of having to specify all the\n> options, users can just keep pressing Y all the time. Of course it could\n> also be considered as a general improvement in user-friendliness.\n\nI hated that (same for perl).\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "30 Jan 2002 10:50:05 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?"
}, { "msg_contents": "> Perl's got one of those too, and I hate it ...\n\nYou mean the endless number of questions when making Perl?\n\nNo, the Linux kernel has a nice menu based script. It will remember the last \nchoice if you build more than one time and you only have to choose the areas \nyou'd like to change.\nIt's very nice. There's a tk based frontend, I think, and there's extensive \nexplanation for each point, so you almost always know why you have to choose \non or off even for features you may not know too well.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Email: kar@kakidata.dk\n", "msg_date": "Wed, 30 Jan 2002 20:46:09 +0100", "msg_from": "Kaare Rasmussen <kar@kakidata.dk>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?" }, { "msg_contents": "Kaare Rasmussen wrote:\n\n>>Perl's got one of those too, and I hate it ...\n>>\n> \n> You mean the endless number of questions when making Perl?\n> \n> No, the Linux kernel has a nice menu based script. It will remember the last \n> choice if you build more than one time and you only have to choose the areas \n> you'd like to change.\n\n\nand there's a feature that hides everything except new options. If \nyou've built an older kernel, update, and there are new configuration \noptions you can choose to just see those, and your past choices for \nolder configuration parameters are kept.\n\nHandy when someone adds configuration parameters for yet-another-IDE \nchipset that you don't really give care about.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Wed, 30 Jan 2002 12:05:18 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?" } ]
[ { "msg_contents": "\n> Superusers can also add SUSET records to their per-user settings. I'm\n> currently unsure about whether to allow superusers to add SUSET settings\n> to the per-database settings, because it would mean that the database\n> session would behave differently depending on what user invokes it.\n\nImho if the dba adds SUSET's to the per-database settings it should be\nset regardless of the user's privs. (Set with elevated rights)\n\nThis setting would then be forced on the user, because he himself \ndoes not have the privs to change it himself.\n\nAndreas\n", "msg_date": "Wed, 30 Jan 2002 09:43:24 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> Superusers can also add SUSET records to their per-user settings. I'm\n>> currently unsure about whether to allow superusers to add SUSET settings\n>> to the per-database settings, because it would mean that the database\n>> session would behave differently depending on what user invokes it.\n\n> Imho if the dba adds SUSET's to the per-database settings it should be\n> set regardless of the user's privs. (Set with elevated rights)\n\nYes, it would seem to make more sense to verify the privilege level of\nthe GUC variables during the SET/ALTER command, and then execute them\nmore-or-less unconditionally during actual startup. During the SET/ALTER\ncommand you know who is trying to insert the value, and you can let\nsuperusers and db owners do more than average users. But once the\nvariable has been successfully inserted, it should apply to everyone.\n\nExample application: suppose we had a SUSET-level variable that controls\npriority level of a backend (which we don't, but it makes a good example\ncase here). 
The superuser should be able to set this variable in the\nper-user settings of a particular user; the user should then be unable\nto change it himself. Note this is a different case from Andreas'\nexample of a per-database setting, but I agree with his example too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jan 2002 10:41:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " }, { "msg_contents": "Tom Lane writes:\n\n> Yes, it would seem to make more sense to verify the privilege level of\n> the GUC variables during the SET/ALTER command, and then execute them\n> more-or-less unconditionally during actual startup.\n\nOK, that makes sense to me, too.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 30 Jan 2002 21:03:05 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Per-database and per-user GUC settings " } ]
[ { "msg_contents": "\n> > Okay, that's a reasonable case to\n> > try to optimize, though I'd like to think the problem will go away\n> > in a release or two when we implement VACUUM-time index shrinking.\n> > \n> > However, I'm not sure about the \"lot faster\" part. The only win\n> > I can see is that when rebuilding a btree index, you could skip\n> > the sort step by reading the old index in index order.\n> \n> Don't we have to scan the (possibly larger) heap table ?\n\nYes, but that is done with a seq scan, but the index has to be read in \nrandom order, since the index pages are not physically sorted on disk\nfrom lowest to highest value. Of course you can spare the sort, \nbut overall ...\n\nImho spending effort on VACUUM is more fruitful, and has the potential to \nallow much more concurrency than REINDEX, no ?\n\nAndreas\n", "msg_date": "Wed, 30 Jan 2002 11:21:42 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Improving backend launch time by preloading relcache" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > > Okay, that's a reasonable case to\n> > > try to optimize, though I'd like to think the problem will go away\n> > > in a release or two when we implement VACUUM-time index shrinking.\n> > >\n> > > However, I'm not sure about the \"lot faster\" part. The only win\n> > > I can see is that when rebuilding a btree index, you could skip\n> > > the sort step by reading the old index in index order.\n> >\n> > Don't we have to scan the (possibly larger) heap table ?\n> \n> Yes, but that is done with a seq scan, but the index has to be read in\n> random order, since the index pages are not physically sorted on disk\n> from lowest to highest value. 
Of course you can spare the sort,\n> but overall ...\n\nReading an index file is not faster than reading the heap file ?\nDoes sorting finish in a moment ?\nIf so we have to use sequential scan much more often.\n\nAnyway there seems no point on changing REINDEX.\nThe only thing I have to do is to remove the needless\ncheck in tcop/utility.c as soon as 7.2 is released.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 31 Jan 2002 11:07:56 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Anyway there seems no point on changing REINDEX.\n> The only thing I have to do is to remove the needless\n> check in tcop/utility.c as soon as 7.2 is released.\n\nI don't believe it's needless, and I suggest you not remove it,\nuntil we do something about making the pg_internal unlink happen\nat the right time. With the unlink where it is, I think it's quite\nunsafe to reindex system indexes in a live database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jan 2002 21:34:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Anyway there seems no point on changing REINDEX.\n> > The only thing I have to do is to remove the needless\n> > check in tcop/utility.c as soon as 7.2 is released.\n> \n> I don't believe it's needless, and I suggest you not remove it,\n> until we do something about making the pg_internal unlink happen\n> at the right time. With the unlink where it is, I think it's quite\n> unsafe to reindex system indexes in a live database.\n\nCurrently there are just a few relations info kept in\npg_internal.init and they are all nailed. 
I'm not\nallowing REINDEX for nailed relations though there's\na #ifdef'd trial implementation. I'm intending the\nchange for 7.2.1 not 7.3. If it isn't allowed in 7.2.x\nI would strongly object to the 7.2 release itself.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 31 Jan 2002 11:52:18 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Currently there are just a few relations info kept in\n> pg_internal.init and they are all nailed. I'm not\n> allowing REINDEX for nailed relations \n\nOh, okay.\n\n> a #ifdef'd trial implementation. I'm intending the\n> change for 7.2.1 not 7.3. If it isn't allowed in 7.2.x\n> I would strongly object to the 7.2 release itself.\n\nIf you think it should be changed then change it now. I see no\nreason to wait.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jan 2002 22:12:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving backend launch time by preloading relcache " } ]
[ { "msg_contents": "\n> However, what you probably wouldn't want to do is cache negative lookups\n> that don't end up producing results or are not part of a search chain at\n> all. Those are user errors and not likely to be repeated and do not need\n> to be optimized.\n\nBut this also is resolved by the LRU mechanism, no ?\nPeople are not going to issue the same errors repeatedly,\nbut if they do, we are still interested in minimizing the \nresource consumption.\n\nAndreas\n", "msg_date": "Wed, 30 Jan 2002 11:25:31 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Syscaches should store negative entries, too" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Justin Clift [mailto:justin@postgresql.org] \n> Sent: 30 January 2002 14:19\n> To: Peter Eisentraut\n> Cc: PostgreSQL Development\n> Subject: Re: [HACKERS] A simpler way to configure the source code?\n> \n> \n> I feel having the \"fringe features\" more tested is a great \n> idea, and will lead to a better PostgreSQL, and therefore \n> happier users. :) A friendly, and decently-easy-to-user \n> interactive setup (Linux \"menuconfig\" kernel style?) would be \n> beneficial.\n> \n> If it doesn't add signifcant overhead to maintenance, and is \n> very portable, it sounds to me like a good idea.\n> \n\n+1 (not that this is a vote :-) )\n\nRegards, Dave.\n", "msg_date": "Wed, 30 Jan 2002 15:18:02 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: A simpler way to configure the source code?" }, { "msg_contents": "> > I feel having the \"fringe features\" more tested is a great\n> > idea, and will lead to a better PostgreSQL, and therefore\n> > happier users. :) A friendly, and decently-easy-to-user\n> > interactive setup (Linux \"menuconfig\" kernel style?) would be\n> > beneficial.\n> >\n> > If it doesn't add signifcant overhead to maintenance, and is\n> > very portable, it sounds to me like a good idea.\n> >\n>\n> +1 (not that this is a vote :-) )\n\nHmmm...yuck. I think a --with-everything is a good idea, but surely all\nthat needs be done is make the regression test test everything? It's\nannoying little setup scripts that make porting things to FreeBSD a pain...\n\n-1\n\nChris\n\n", "msg_date": "Thu, 31 Jan 2002 09:33:43 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?" 
}, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > > I feel having the \"fringe features\" more tested is a great\n> > > idea, and will lead to a better PostgreSQL, and therefore\n> > > happier users. :) A friendly, and decently-easy-to-user\n> > > interactive setup (Linux \"menuconfig\" kernel style?) would be\n> > > beneficial.\n> > >\n> > > If it doesn't add signifcant overhead to maintenance, and is\n> > > very portable, it sounds to me like a good idea.\n> > >\n> >\n> > +1 (not that this is a vote :-) )\n> \n> Hmmm...yuck. I think a --with-everything is a good idea, but surely all\n> that needs be done is make the regression test test everything? It's\n> annoying little setup scripts that make porting things to FreeBSD a pain...\n> \n> -1\n\nHi Chris,\n\nWould you consider it to be an \"alright thing\" if it was an optional\nfeature? As in, people could use the interactive menu if they wanted,\nor they could do the --with-whatever (--with-everything is prob a good\nidea too) as per normal?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 31 Jan 2002 13:19:24 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?" }, { "msg_contents": "> Hi Chris,\n> \n> Would you consider it to be an \"alright thing\" if it was an optional\n> feature? 
As in, people could use the interactive menu if they wanted,\n> or they could do the --with-whatever (--with-everything is prob a good\n> idea too) as per normal?\n\nI'm sure I could deal :)\n\nChris\n\n", "msg_date": "Thu, 31 Jan 2002 10:29:33 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: A simpler way to configure the source code?" } ]
[ { "msg_contents": "Hi,\n\nanybody has an experience how is stable postgresql under Windows system ?\nI tried postgresq 7.1 under Cygwin, Windows 98 and was dissapointed\nby very bad performance. Are there something I could tune ?\nI got 250 sel/sec on simple select from table with 500 rows !\nUnder Linux I have 2500 sel/sec.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 30 Jan 2002 20:21:00 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "postgresql under Windows is slow " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> anybody has an experience how is stable postgresql under Windows system ?\n> I tried postgresq 7.1 under Cygwin, Windows 98 and was dissapointed\n> by very bad performance. Are there something I could tune ?\n> I got 250 sel/sec on simple select from table with 500 rows !\n> Under Linux I have 2500 sel/sec.\n\nNever tried it myself, but I distinctly recall someone reporting that\nthey got comparable performance on Cygwin as on Linux. You might try\nasking on pgsql-cygwin.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jan 2002 12:58:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgresql under Windows is slow " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 30 January 2002 11:58 am, Tom Lane wrote:\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > anybody has an experience how is stable postgresql under Windows system ?\n> > I tried postgresq 7.1 under Cygwin, Windows 98 and was dissapointed\n> > by very bad performance. 
Are there something I could tune ?\n> > I got 250 sel/sec on simple select from table with 500 rows !\n> > Under Linux I have 2500 sel/sec.\n>\n> Never tried it myself, but I distinctly recall someone reporting that\n> they got comparable performance on Cygwin as on Linux. You might try\n> asking on pgsql-cygwin.\n\nWe have a few developers that run Apache, PHP, Postgresql on Win2k and we \nhave definitely seen postgres be a good bit slower on Windows. Have never \nbenchmarked it however.\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE8WD6K8BXvT14W9HARAh+GAJwKqs2k8fwJYooenFCqHMMgXr0DjQCcCaQr\nbQxY10HunpD2+IACH/L0yas=\n=p8Mb\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 30 Jan 2002 12:42:14 -0600", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] postgresql under Windows is slow" }, { "msg_contents": "On Wed, 30 Jan 2002, Oleg Bartunov wrote:\n\n> anybody has an experience how is stable postgresql under Windows system ?\n> I tried postgresq 7.1 under Cygwin, Windows 98 and was dissapointed\n> by very bad performance. Are there something I could tune ?\n> I got 250 sel/sec on simple select from table with 500 rows !\n> Under Linux I have 2500 sel/sec.\n\nWhat are your plans for postgresql on Windows? Just by the nature of the\nOS, I'd never expect too much performance from a Cygwin app, especially\none like postgresql. 
Do you plan on running it on 98 in a production\nmanner?\n\nMy own experience is that postgresql runs quite a bit slower on WinMe than\nmy NetBSD box (same hardware), but I don't consider that to be a problem.\n\nAndy\n\nacruhl@sdf.lonestar.org\nSDF Public Access UNIX System - http://sdf.lonestar.org\n\n", "msg_date": "Thu, 31 Jan 2002 16:13:38 +0000 (UTC)", "msg_from": "Andy Ruhl <acruhl@sdf.lonestar.org>", "msg_from_op": false, "msg_subject": "Re: postgresql under Windows is slow " }, { "msg_contents": "On Thu, 31 Jan 2002, Andy Ruhl wrote:\n\n> On Wed, 30 Jan 2002, Oleg Bartunov wrote:\n>\n> > anybody has an experience how is stable postgresql under Windows system ?\n> > I tried postgresq 7.1 under Cygwin, Windows 98 and was dissapointed\n> > by very bad performance. Are there something I could tune ?\n> > I got 250 sel/sec on simple select from table with 500 rows !\n> > Under Linux I have 2500 sel/sec.\n>\n> What are your plans for postgresql on Windows? Just by the nature of the\n> OS, I'd never expect too much performance from a Cygwin app, especially\n> one like postgresql. Do you plan on running it on 98 in a production\n> manner?\n\neven worse, we have to port our application to Windows and it should\nrun under W95..XP on different hardware (PII ...). It'll run in\nsingle-user environment (thank goodness). Database will have about\n20-30 K rows.\n\n\n>\n> My own experience is that postgresql runs quite a bit slower on WinMe than\n> my NetBSD box (same hardware), but I don't consider that to be a problem.\n>\n\nDid you make some special tuning ? 
btw, what about locale support ?\n\n> Andy\n>\n> acruhl@sdf.lonestar.org\n> SDF Public Access UNIX System - http://sdf.lonestar.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 31 Jan 2002 19:33:28 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: postgresql under Windows is slow " }, { "msg_contents": "On Thu, 31 Jan 2002, Oleg Bartunov wrote:\n\n> even worse, we have to port our application to Windows and it should\n> run under W95..XP on different hardware (PII ...). It'll run in\n> single-user environment (thank goodness). Database will have about\n> 20-30 K rows.\n\nUgh. Why pay for an OS these days? The free ones are getting too good.\nIt's amazing how entrenched Windows is.\n\n> > My own experience is that postgresql runs quite a bit slower on WinMe than\n> > my NetBSD box (same hardware), but I don't consider that to be a problem.\n> >\n>\n> Did you make some special tuning ? btw, what about locale support ?\n\nIt was a very quick, seat of the pants test. I made a junk schema with\nsome insert statements. I noticed that it took a few seconds longer to\nfinish on Windows than in NetBSD. Didn't do anything more than that. 
Don't\nknow about locale stuff, I never have to change that stuff...\n\nAndy\n\nacruhl@sdf.lonestar.org\nSDF Public Access UNIX System - http://sdf.lonestar.org\n\n", "msg_date": "Thu, 31 Jan 2002 17:05:12 +0000 (UTC)", "msg_from": "Andy Ruhl <acruhl@sdf.lonestar.org>", "msg_from_op": false, "msg_subject": "Re: postgresql under Windows is slow " }, { "msg_contents": "On Thu, 2002-01-31 at 21:33, Oleg Bartunov wrote:\n> On Thu, 31 Jan 2002, Andy Ruhl wrote:\n> \n> > On Wed, 30 Jan 2002, Oleg Bartunov wrote:\n> >\n> > > anybody has an experience how is stable postgresql under Windows system ?\n> > > I tried postgresq 7.1 under Cygwin, Windows 98 and was dissapointed\n> > > by very bad performance. Are there something I could tune ?\n> > > I got 250 sel/sec on simple select from table with 500 rows !\n> > > Under Linux I have 2500 sel/sec.\n> >\n> > What are your plans for postgresql on Windows? Just by the nature of the\n> > OS, I'd never expect too much performance from a Cygwin app, especially\n> > one like postgresql. Do you plan on running it on 98 in a production\n> > manner?\n> \n> even worse, we have to port our application to Windows and it should\n> run under W95..XP on different hardware (PII ...).\n\nIt can be thet task switching behaviour of Win9x in not very best and\nthus too much time is spent switching between client and server :)\n\nTo test this theory you could dry to rerun your benchmarks by sending\nmore queries in one request to lower the switching overhead. When you\nrun 250 single selects/sec you need to switch between client and server\nprocess 500x/sec. 
Try sending your queries 10 at a time and see what you\nget, or for even cleaner test results use separate server comuter\n\n> It'll run in single-user environment (thank goodness).\n\nIt could help if you could run in a single process perhaps by putting\nyour whole application logic in pl/perl (or will this still use\n_another_ backent to execute it's queries.\n\n> Database will have about 20-30 K rows.\n\nDo you really need all that performance on win9x ?\n\n-------------\nHannu\n\n", "msg_date": "31 Jan 2002 23:34:16 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] postgresql under Windows is slow" }, { "msg_contents": ">>>>> \"Oleg\" == Oleg Bartunov <oleg@sai.msu.su> writes:\n\n Oleg> On Thu, 31 Jan 2002, Andy Ruhl wrote:\n >> On Wed, 30 Jan 2002, Oleg Bartunov wrote:\n >> \n >> > anybody has an experience how is stable postgresql under\n >> Windows system ? > I tried postgresq 7.1 under Cygwin, Windows\n >> 98 and was dissapointed > by very bad performance. Are there\n >> something I could tune ? > I got 250 sel/sec on simple select\n >> from table with 500 rows ! > Under Linux I have 2500 sel/sec.\n >> \n >> What are your plans for postgresql on Windows? Just by the\n >> nature of the OS, I'd never expect too much performance from a\n >> Cygwin app, especially one like postgresql. Do you plan on\n >> running it on 98 in a production manner?\n\n Oleg> even worse, we have to port our application to Windows and\n Oleg> it should run under W95..XP on different hardware (PII\n Oleg> ...). It'll run in single-user environment (thank\n Oleg> goodness). 
Database will have about 20-30 K rows.\n\nFor starters, check out the Cygwin FAQ :-\n\nhttp://cygwin.com/faq/faq.html#TOC75 (How is fork implemented)\n\nand the Cygwin User's guide, especially :-\n\nHighlights of Cygwin Functionality/Process Creation\n\nThere are also some discussions about fork and vfork in the mailing\nlist archives, and I see that there have been some changes to do with\nvfork during the last year or so but don't know whether that is\napplicable to Postgresql.\n\nSincerely\n\nAdrian Phillips\n\n-- \nYour mouse has moved.\nWindows NT must be restarted for the change to take effect.\nReboot now? [OK]\n", "msg_date": "31 Jan 2002 20:25:22 +0100", "msg_from": "Adrian Phillips <adrianp@powertech.no>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] postgresql under Windows is slow" }, { "msg_contents": "hug... wakala...\n\nI have PostgreSQL running in several boxes, since a clonic PIII400 , two\nNetVista PIII800 and two NetFinity PIII 550.\n\nAll of them are Linux boxes, and obviously there are some performance\ndifferences related to the hardware, so I try to hack hard to get best\n
[ { "msg_contents": "\nMorning all ...\n\n\tSo far as we can tell, we're *finally* ready for release ... Tom\nmade a few benign commits since rc2 that he feels doesn't warrant an rc3,\nso we are planning on doing the final release on Monday ...\n\n\tDoes *anyone* have any outstanding bugs that they would like to\nreport *before* the final release? Anything that should hold up the\nrelease even if for a couple of days?\n\n\tIf so, please speak up now or forever hold your peace ...\n\n\tIf not, I'm going to roll up the final release on Sunday night,\nand do a full announce on Monday afternoon ...\n\n\tIt's been a *long* release cycle this time through, but nobody can\naccuse us of not being thorough in our testing :)\n\n\tGreat work, and heart-felt thanks to everyone for the work that\nwent into this release ... should prove another one to serve us well and\nmake us proud :)\n\n\n\n", "msg_date": "Wed, 30 Jan 2002 15:29:57 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "PostgreSQL Final Release ... Monday?" }, { "msg_contents": "On Wed, 30 Jan 2002, Marc G. Fournier wrote:\n\n>\n> Morning all ...\n>\n> \tSo far as we can tell, we're *finally* ready for release ... Tom\n> made a few benign commits since rc2 that he feels doesn't warrant an rc3,\n> so we are planning on doing the final release on Monday ...\n>\n> \tDoes *anyone* have any outstanding bugs that they would like to\n> report *before* the final release? Anything that should hold up the\n> release even if for a couple of days?\n>\n> \tIf so, please speak up now or forever hold your peace ...\n\nHow are we set for docs? 
I don't recall Thomas calling for a freeze.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 30 Jan 2002 14:52:13 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Final Release ... Monday?" }, { "msg_contents": "\nActually, talked to him about it, and he says that since we aren't doing\nthe hardcopy in the distribution anymore, we're fine for that too ...\n\n\nOn Wed, 30 Jan 2002, Vince Vielhaber wrote:\n\n> On Wed, 30 Jan 2002, Marc G. Fournier wrote:\n>\n> >\n> > Morning all ...\n> >\n> > \tSo far as we can tell, we're *finally* ready for release ... Tom\n> > made a few benign commits since rc2 that he feels doesn't warrant an rc3,\n> > so we are planning on doing the final release on Monday ...\n> >\n> > \tDoes *anyone* have any outstanding bugs that they would like to\n> > report *before* the final release? Anything that should hold up the\n> > release even if for a couple of days?\n> >\n> > \tIf so, please speak up now or forever hold your peace ...\n>\n> How are we set for docs? 
I don't recall Thomas calling for a freeze.\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\n", "msg_date": "Wed, 30 Jan 2002 15:57:37 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Final Release ... Monday?" }, { "msg_contents": "On Wed, 30 Jan 2002, Marc G. Fournier wrote:\n\n>\n> Actually, talked to him about it, and he says that since we aren't doing\n> the hardcopy in the distribution anymore, we're fine for that too ...\n\nPerfect!\n\n>\n>\n> On Wed, 30 Jan 2002, Vince Vielhaber wrote:\n>\n> > On Wed, 30 Jan 2002, Marc G. Fournier wrote:\n> >\n> > >\n> > > Morning all ...\n> > >\n> > > \tSo far as we can tell, we're *finally* ready for release ... Tom\n> > > made a few benign commits since rc2 that he feels doesn't warrant an rc3,\n> > > so we are planning on doing the final release on Monday ...\n> > >\n> > > \tDoes *anyone* have any outstanding bugs that they would like to\n> > > report *before* the final release? Anything that should hold up the\n> > > release even if for a couple of days?\n> > >\n> > > \tIf so, please speak up now or forever hold your peace ...\n> >\n> > How are we set for docs? 
I don't recall Thomas calling for a freeze.\n> >\n> > Vince.\n> > --\n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> >\n> >\n> >\n> >\n>\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 30 Jan 2002 15:01:00 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Final Release ... Monday?" }, { "msg_contents": "On Thu, 2002-01-31 at 02:36, mlw wrote:\n> \n> I submitted a \"contrib\" project for integer array\n> aggregation/enumeration. I didn't see it in rc2.\n> \n\nSome general questions about arrays -\n\n1. Could this aggregation/enumeration be done in a generic way, so\n that you could aggregate over any array ?\n\n The only generic function I currently know of is count(), but it\n is generic only on argument side, i.e count(any) returns always\n integer.\n\n2. Is there a way to define a function such that\n\n declare make_array(any) returns any[] ?\n\n3. Also, can I prescribe order of aggregation (aggregation applied\n _after_ ORDER BY) that would act in a way similar to HAVING .\n\n4. what arguments must I give to array_in so that it produces an \n array of specific kind ? 
I tried some more obvious things and \n really got nowhere.\n\n--------------\nHannu\n\nPS. I attach an alpha-contrib of PL/PgSQL code to compare intarrays.\nI have not had time to tie these ops to indexes.\n\n--------------\nHannu", "msg_date": "31 Jan 2002 01:06:31 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Array aggregation. Was: PostgreSQL Final Release ... Monday?" }, { "msg_contents": "On Wed, 2002-01-30 at 14:29, Marc G. Fournier wrote:\n> \n> Morning all ...\n> \n> \tSo far as we can tell, we're *finally* ready for release ... Tom\n> made a few benign commits since rc2 that he feels doesn't warrant an rc3,\n> so we are planning on doing the final release on Monday ...\n> \n> \tDoes *anyone* have any outstanding bugs that they would like to\n> report *before* the final release? Anything that should hold up the\n> release even if for a couple of days?\n\nTypo in docs, I think.\n\nuser guide 4.7 should be \"Date Type Formatting Functions\" instead of\n\"Data\", ISTM\n\n-- \nKarl DeBisschop\nDirector, Software Engineering & Development\nLearning Network / Information Please\nwww.learningnetwork.com / www.infoplease.com\n\n", "msg_date": "30 Jan 2002 15:16:32 -0500", "msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Final Release ... Monday?" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> Morning all ...\n> \n> So far as we can tell, we're *finally* ready for release ... Tom\n> made a few benign commits since rc2 that he feels doesn't warrant an rc3,\n> so we are planning on doing the final release on Monday ...\n> \n> Does *anyone* have any outstanding bugs that they would like to\n> report *before* the final release? 
Anything that should hold up the\n> release even if for a couple of days?\n> \n> If so, please speak up now or forever hold your peace ...\n> \n> If not, I'm going to roll up the final release on Sunday night,\n> and do a full announce on Monday afternoon ...\n> \n> Its been a *long* release cycle this time through, but nobody can\n> accuse us of not being thorough in our testing :)\n> \n> Great work, and heart-felt thanks to everyone for the work that\n> went into this release ... should prove another one to serve us well and\n> make us proud :)\n\nI submitted a \"contrib\" project for integer array\naggregation/enumeration. I didn't see it in rc2.\n", "msg_date": "Wed, 30 Jan 2002 16:36:00 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Final Release ... Monday?" }, { "msg_contents": "> > went into this release ... should prove another one to serve us well and\n> > make us proud :)\n> \n> I submitted a \"contrib\" project for integer array\n> aggregation/enumeration. I didn't see it in rc2.\n\nThat is being held for 7.3:\n\n\nThis has been saved for the 7.3 release:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n---------------------------------------------------------------------------\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 30 Jan 2002 16:46:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Final Release ... Monday?" }, { "msg_contents": "\nIt is used as an aggregation function. 
It only works on integers, and assumes\nsizeof(int) == sizeof(void *)\n\nThere are two functions:\n\nint_array_aggregate(int4)\nand \nint_array_enum(int4[])\n\n\nOne creates an integer array as:\n\ncreate table tst as select some_field, int_array_aggregate(int_field) as field\nfrom table group by some_field;\n\nThis will produce one row for all the unique \"some_fields\" with an array of all\n\"int_field\" associated.\n\nTo extract this, you use int_array_enum(int_array); which returns multiple\nresults.\n\n\n\n\n\nHannu Krosing wrote:\n> \n> On Thu, 2002-01-31 at 02:36, mlw wrote:\n> >\n> > I submitted a \"contrib\" project for integer array\n> > aggregation/enumeration. I didn't see it in rc2.\n> >\n> \n> Some general questions about arrays -\n> \n> 1. Could this aggregation/enumeration be done in a generic way, so\n> that you could aggregate over any array ?\n> \n> The only generic function I currently know of is count(), but it\n> is generic only on argument side, i.e count(any) returns always\n> integer.\n> \n> 2. Is there a way to define a function such that\n> \n> declare make_array(any) returns any[] ?\n> \n> 3. Also, can I prescribe order of aggregation (aggregation applied\n> _after_ ORDER BY) that would act in a way similar to HAVING .\n> \n> 4. what arguments must I give to array_in so that it produces an\n> array of specific kind ? I tried some more obvious things and\n> really got nowhere.\n> \n> --------------\n> Hannu\n> \n> PS. I attach an alpha-contrib of PL/PgSQL code to compare intarrays.\n> I have not had time to tie these ops to indexes\n> \n> --------------\n> Hannu\n> \n> -------------------------------------------------------------------------------\n> Name: ARRAY_COMP.SQL\n> ARRAY_COMP.SQL Type: text/x-sql\n> Encoding: quoted-printable\n", "msg_date": "Wed, 30 Jan 2002 18:50:49 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Array aggregation. Was: PostgreSQL Final Release ... Monday?" 
}, { "msg_contents": "Marc G. Fournier writes:\n\n> \tSo far as we can tell, we're *finally* ready for release ... Tom\n> made a few benign commits since rc2 that he feels doesn't warrant an rc3,\n> so we are planning on doing the final release on Monday ...\n\nGood. I've just gone through checking that all the files that need manual\nupdates have received them, and I've built and uploaded new man pages.\n\nThe only thing that still needs to be done is setting the date in the\nregister.txt file. Don't forget that when you're ready.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 30 Jan 2002 19:56:50 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Final Release ... Monday?" }, { "msg_contents": "Karl DeBisschop <kdebisschop@range.infoplease.com> writes:\n> Typo in docs, I think.\n\n> user guide 4.7 should be \"Date Type Formatting Functions\" instead of\n> \"Data\", ISTM\n\nNo, because to_number and the numeric variants of to_char are in there\ntoo.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 15:09:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Final Release ... Monday? " }, { "msg_contents": "Hannu Krosing <hannu@krosing.net> writes:\n> 2. Is there a way to define a function such that\n\n> declare make_array(any) returns any[] ?\n\nNo. We do need a way to construct an array as an expression result,\nbut I think it will have to be a special syntactic case, not an ordinary\nfunction. Maybe something roughly like a CAST construct,\n\tARRAY(expr,expr,expr,... OF type-name)\n\nThe function definition language isn't nearly powerful enough to deal\nwith this --- heck, we don't even support a variable number of\narguments. 
If it were, it'd probably break the whole ambiguous-\nfunction-call resolution mechanism --- what type do you assign to the\noutput if you're not entirely sure how the inputs are to be interpreted?\n\n> 3. Also, can I prescribe order of aggregation (aggregation applied\n> _after_ ORDER BY) that would act in a way similar to HAVING .\n\nSub-select in FROM might help here.\n\n> 4. what arguments must I give to array_in so that it produces an \n> array of specific kind ?\n\nYou don't. array_in is meant to be used as the declared typinput\nroutine for an array type; that linkage is what causes the system\nto know what the output array type is. array_in by itself can't\ncause the system to assign a correct type to its result.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 17:46:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Array aggregation. Was: PostgreSQL Final Release ... Monday? " }, { "msg_contents": "> > \tDoes *anyone* have any outstanding bugs that they would like to\n> > report *before* the final release? Anything that should hold up the\n> > release even if for a couple of days?\n>\n> Typo in docs, I think.\n>\n> user guide 4.7 should be \"Date Type Formatting Functions\" instead of\n> \"Data\", ISTM\n\nAnd unless you've fixed it already - the FAQ still refers to 7.1.3. Just\ndo a grep of the source for '7.1.3' maybe.\n\nChris\n\n\n", "msg_date": "Sat, 2 Feb 2002 12:00:50 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Final Release ... Monday?" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 30 January 2002 17:58\n> To: Oleg Bartunov\n> Cc: pgsql-admin@postgresql.org; Pgsql Hackers\n> Subject: Re: [HACKERS] [ADMIN] postgresql under Windows is slow \n> \n> \n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > anybody has an experience how is stable postgresql under Windows \n> > system ? I tried postgresq 7.1 under Cygwin, Windows 98 and was \n> > dissapointed by very bad performance. Are there something I \n> could tune \n> > ? I got 250 sel/sec on simple select from table with 500 \n> rows ! Under \n> > Linux I have 2500 sel/sec.\n> \n> Never tried it myself, but I distinctly recall someone \n> reporting that they got comparable performance on Cygwin as \n> on Linux. You might try asking on pgsql-cygwin.\n> \n\nI have never benchmarked it, but I do run pg on Cygwin/Win2K/XP and\nSlackware Linux 8 on the same laptop. PostgreSQL always *seems* slower under\nCygwin.\n\nHowever, I know that one of the guys at Greatbridge did do some benchmarking\nand as I recall reported getting comparable performance up to about 100\nusers.\n\nIt's possible that it's because you are running on Windows 98. 
For the 7.2\nrelease we've done all regression testing on XP/2K - I suspect that Jason\nonly tested 7.1 on NT4 or 2K.\n\nRegards, Dave.\n", "msg_date": "Wed, 30 Jan 2002 20:16:21 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] postgresql under Windows is slow " }, { "msg_contents": "On Wed, 30 Jan 2002, Dave Page wrote:\n\n>\n>\n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: 30 January 2002 17:58\n> > To: Oleg Bartunov\n> > Cc: pgsql-admin@postgresql.org; Pgsql Hackers\n> > Subject: Re: [HACKERS] [ADMIN] postgresql under Windows is slow\n> >\n> >\n> > Oleg Bartunov <oleg@sai.msu.su> writes:\n> > > anybody has an experience how is stable postgresql under Windows\n> > > system ? I tried postgresq 7.1 under Cygwin, Windows 98 and was\n> > > dissapointed by very bad performance. Are there something I\n> > could tune\n> > > ? I got 250 sel/sec on simple select from table with 500\n> > rows ! Under\n> > > Linux I have 2500 sel/sec.\n> >\n> > Never tried it myself, but I distinctly recall someone\n> > reporting that they got comparable performance on Cygwin as\n> > on Linux. You might try asking on pgsql-cygwin.\n> >\n>\n> I have never benchmarked it, but I do run pg on Cygwin/Win2K/XP and\n> Slackware Linux 8 on the same laptop. PostgreSQL always *seems* slower under\n> Cygwin.\n>\n> However, I know that one of the guys at Greatbridge did do some benchmarking\n> and as I recall reported getting comparable performance up to about 100\n> users.\n\nWho is this guy? What can I do to get more performance?\nIs it possible to compile the 7.2 sources under Cygwin? Do I need to do\nsomething special?\n\n>\n> It's possible that it's because you are running on Windows 98. 
For the 7.2\n> release we've done all regression testing on XP/2K - I suspect that Jason\n> only tested 7.1 on NT4 or 2K.\n>\n> Regards, Dave.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 31 Jan 2002 14:13:08 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgresql under Windows is slow " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> What I can do to get more preformance ?\n\nSounds like the answer is \"stop using Windows 98\".\n\nNow that Dave mentions it, I do recall that it was Great Bridge's\npeople who had gotten pretty much the same numbers from their TPC-C\nbenchmark on Cygwin as on Linux. I'm quite sure they'd have been\nusing a current-at-the-time Windows ... so, either NT or W2K.\nProbably NT but I can't say that for certain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 31 Jan 2002 10:16:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgresql under Windows is slow " } ]
[ { "msg_contents": "Hi,\n\nI seem to recall that someone on the list here is responsible for the\ndatabases/postgresql7 port on FreeBSD?\n\nIf so, then could you create a new port for 7.2 (ie.\ndatabases/postgresql72), and leave the current one as 7.1.3 - it would be a\nGood Thing(tm).\n\nCheers,\n\nChris\n\n", "msg_date": "Thu, 31 Jan 2002 11:17:16 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "freebsd postgres port" } ]
[ { "msg_contents": "Hi All,\n\nI recently had a spate of permission probs on our PHP site, with lots of log\nentries like this:\n\n2002-01-31 09:20:58 ERROR: users_stats: Permission denied.\n2002-01-31 09:20:58 ERROR: recipe_recipes: Permission denied.\n\nWhat I would like to see instead is:\n\n2002-01-31 09:20:58 ERROR: Permission denied: INSERT on table \"users_stats\"\nfor \"au-users\".\n2002-01-31 09:20:58 ERROR: Permission denied: SELECT on table\n\"recipe_recipes\" for \"au-recipes\".\n2002-01-31 09:20:58 ERROR: Permission denied: UPDATE on sequence \"blah\" for\n\"adsfafds\".\n\nThat would have made my job of tracking down the errors MUCH easier.\n\nI've looked in backend/catalog/aclparse.c (IIRC) but all it has is the\nstatic string 'Permission denied'. Looks like you have to fix every call to\nacl_check(). Would someone with more postgres experience be able to do up a\npatch for it? Or at least add it to the TODO?\n\nI'd _love_ to have this functionality - it would have come in useful many\ntimes in the past...\n\nChris\n\n", "msg_date": "Thu, 31 Jan 2002 12:30:26 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "more info in permission errors" } ]
[ { "msg_contents": "I don't think it's any surprise that I'm hot on kerberos (updated docs\nwill come, one thing at a time), however I'm really really really\nsuper annoyed with the fact that I can't specify a way for a host to\noptionally use krb5 or optionally use password authentication. If\nyou've got kerberos compiled in, you're stuck using kerberos. Anyone\nhave any suggestions or preferred ways of handling libpq so that the\nfe-auth can fail back to password if krb5 fails? Thanks. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Wed, 30 Jan 2002 20:43:51 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Kerberos and fe-auth..." }, { "msg_contents": "Sean Chittenden writes:\n\n> I don't think it's any surprise that I'm hot on kerberos (updated docs\n> will come, one thing at a time), however I'm really really really\n> super annoyed with the fact that I can't specify a way for a host to\n> optionally use krb5 or optionally use password authentication.\n\nWhat??? 
Have you looked at pg_hba.conf recently?\n\nIn pg_hba.conf:\n\nhost all 0.0.0.0 0.0.0.0 krb5\nhost all 0.0.0.0 0.0.0.0 password\n\n\nAnd from the CLI:\n\n> klist\nklist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_80.1)\n> psql -h db1 dbname user\npsql: fe_sendauth: krb5 authentication failed\n\nThe only way I can do something about that is to reverse the order of\nthe above entries in hba.conf, however, if I do that, then I can't use\nkrb5. One or the other, not both, and that's my problem... thoughts?\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Wed, 30 Jan 2002 21:21:48 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Re: Kerberos and fe-auth..." } ]
[ { "msg_contents": "> This is probably a pretty stupid thing we are missing but we cant get hebrew to collate properly in PG7.1\n> \n> We are running PG7.1 on RH6.2 with ACS 3.2/AOLServer \n> \n> we set LC_COLLATE and LC_CTYPE to iw_IL.UTF-8 as the system locale. (We want to support hebrew)\n> \n> we then ran INITDB, createdb -E UNICODE and started entering data.\n> \n> However - the hebrew data collates randomly on a SELECT ....ORDER by....\n> statement. FWIW - it seems to collate strings of a single character properly\n> \n> Is this a bug in PG7.1? \n> what do we need to do to setup the db cluster properly so that the ORDER will collate the unicode words properly?\n\nHave you enabled the locale support (--enable-locale)? If you enable\nit, and still see the problem, then there might be problems with the\nlocale database. Can you run a small program something like the following?\n\n#include <stdio.h>\n#include <string.h>\n#include <locale.h>\n\nint main(void)\n{\n static char *s1 = \"utf_8_hebrew_string_here\";\n static char *s2 = \"another_utf_8_hebrew_string_here\";\n\n setlocale(LC_ALL,\"\");\n\n printf(\"%d\\n\",strcoll(s2,s1));\n return 0;\n}\n\n", "msg_date": "Thu, 31 Jan 2002 14:01:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: proper ordering of a UNICODE / Hebrew postgres database cluster" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au] \n> Sent: 31 January 2002 01:34\n> To: Dave Page; 'Peter Eisentraut'\n> Cc: 'PostgreSQL Development'\n> Subject: Re: [HACKERS] A simpler way to configure the source code?\n> \n> \n> > > I feel having the \"fringe features\" more tested is a \n> great idea, and \n> > > will lead to a better PostgreSQL, and therefore happier \n> users. :) \n> > > A friendly, and decently-easy-to-user interactive setup (Linux \n> > > \"menuconfig\" kernel style?) would be beneficial.\n> > >\n> > > If it doesn't add signifcant overhead to maintenance, and is very \n> > > portable, it sounds to me like a good idea.\n> > >\n> >\n> > +1 (not that this is a vote :-) )\n> \n> Hmmm...yuck. I think a --with-everything is a good idea, but \n> surely all that needs be done is make the regression test \n> test everything? It's annoying little setup scripts that \n> make porting things to FreeBSD a pain...\n\nI seem to recall that Peter's proposal was for a script that drove configure\nfor you so presumably you could either use the script, or ignore it and\n./configure --with..... if you preferred.\n\nIf this were the case I don't see how anyone could object as long as it's\nnice and portable and is easy to maintain.\n\nRegards, Dave.\n", "msg_date": "Thu, 31 Jan 2002 08:35:20 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: A simpler way to configure the source code?" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Oleg Bartunov [mailto:oleg@sai.msu.su] \n> Sent: 31 January 2002 11:13\n> To: Dave Page\n> Cc: 'Tom Lane'; 'pgsql-admin@postgresql.org'; 'Pgsql Hackers'\n> Subject: Re: [HACKERS] [ADMIN] postgresql under Windows is slow \n> \n> \n> On Wed, 30 Jan 2002, Dave Page wrote:\n> \n> >\n> >\n> > > -----Original Message-----\n> > > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > > Sent: 30 January 2002 17:58\n> > > To: Oleg Bartunov\n> > > Cc: pgsql-admin@postgresql.org; Pgsql Hackers\n> > > Subject: Re: [HACKERS] [ADMIN] postgresql under Windows is slow\n> > >\n> > >\n> > > Oleg Bartunov <oleg@sai.msu.su> writes:\n> > > > anybody has an experience how is stable postgresql \n> under Windows \n> > > > system ? I tried postgresq 7.1 under Cygwin, Windows 98 and was \n> > > > dissapointed by very bad performance. Are there something I\n> > > could tune\n> > > > ? I got 250 sel/sec on simple select from table with 500\n> > > rows ! Under\n> > > > Linux I have 2500 sel/sec.\n> > >\n> > > Never tried it myself, but I distinctly recall someone reporting \n> > > that they got comparable performance on Cygwin as on Linux. You \n> > > might try asking on pgsql-cygwin.\n> > >\n> >\n> > I have never benchmarked it, but I do run pg on Cygwin/Win2K/XP and \n> > Slackware Linux 8 on the same laptop. PostgreSQL always \n> *seems* slower \n> > under Cygwin.\n> >\n> > However, I know that one of the guys at Greatbridge did do some \n> > benchmarking and as I recall reported getting comparable \n> performance \n> > up to about 100 users.\n> \n> Who is this guy ? What I can do to get more preformance ?\n> Is't possible to compile 7.2 sources under Cygwin ? Do I need \n> to do something special ?\n\nYes you can compile it yourself under Cygwin, but there's not really any\npoint that I can see. I don't think it'll run any faster. 
I can't tell you\nthe name of the guy at Greatbridge at the moment as I can't seem to search\nthe archives right now.\n\nRegards, Dave\n", "msg_date": "Thu, 31 Jan 2002 11:47:32 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] postgresql under Windows is slow " }, { "msg_contents": "On Thu, 31 Jan 2002, Dave Page wrote:\n\n>\n> > Who is this guy? What can I do to get more performance?\n> > Is it possible to compile the 7.2 sources under Cygwin? Do I need\n> > to do something special?\n>\n> Yes you can compile it yourself under Cygwin, but there's not really any\n> point that I can see. I don't think it'll run any faster. I can't tell you\n> the name of the guy at Greatbridge at the moment as I can't seem to search\n> the archives right now.\n\nOhh, this problem should be addressed to Marc. It's a shame we can't search the\nmailing archive.\n\n\n>\n> Regards, Dave\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 31 Jan 2002 14:50:58 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgresql under Windows is slow " } ]
[ { "msg_contents": "We are having some strange behavior with PostgreSQL 7.1.3 where rows \nare inserted with a timestamp that is out of order... here is the \ntable description and a few rows:\n\n\n Table \"event_page\"\n Attribute | Type | Modifier\n--------------+--------------------------+----------------------------------\n user_id | oid |\n session_id | character varying(24) |\n course_ident | character varying(128) |\n page | character varying(32) |\n date | timestamp with time zone | default \"timestamp\"('now'::text)\n\n\n 69714 | 61381 | yVLsSYYpZIa1G54VnDY7qcz2 | COG_001_ELEARNING | \n15.html | 2002-01-31 18:07:21+08\n 69715 | 61381 | yVLsSYYpZIa1G54VnDY7qcz2 | COG_001_ELEARNING | \n16.html | 2002-01-31 18:07:40+08\n 69717 | 61453 | uZuNbGXsWnXXRECH6PGeFdtf | COG_001_ELEARNING | \n1.html | 2002-01-31 18:08:14+08\n 69718 | 61453 | uZuNbGXsWnXXRECH6PGeFdtf | COG_001_ELEARNING | \n15.html | 2002-01-31 18:09:30+08\n 69719 | 61453 | uZuNbGXsWnXXRECH6PGeFdtf | COG_001_ELEARNING | \n16.html | 2002-01-31 18:07:49+08\n\n\n\nThe oid correctly reflects the order of the insertion of the rows... \nbut look at the timestamp - the last row has a timestamp _2 minutes \nbefore_ the previous row. How could this be happening? We know row \n69719 was inserted _after_ 69718, by probably about 30 seconds.\n\nThanks much.\n\nElaine Lindelef\n", "msg_date": "Thu, 31 Jan 2002 18:20:47 -0800", "msg_from": "Elaine Lindelef <eel@cognitivity.com>", "msg_from_op": true, "msg_subject": "timestamp weirdness" }, { "msg_contents": "...\n> The oid correctly reflects the order of the insertion of the rows...\n> but look at the timestamp - the last row has a timestamp _2 minutes\n> before_ the previous row. How could this be happening? 
We know row\n> 69719 was inserted _after_ 69718, by probably about 30 seconds.\n\nThe timestamp provided as a result of evaluating 'now' is the time of\nthe start of the transaction, not the instantaneous wall clock time (if\nyou want the latter there is a function to provide it).\n\nSo, the times will reflect the time the transaction was started, while\nthe OID will reflect the order in which the insert/update actually\nhappened within the transaction.\n\nhth\n\n - Thomas\n", "msg_date": "Fri, 01 Feb 2002 04:14:02 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp weirdness" }, { "msg_contents": "At 04:14 AM 01-02-2002 +0000, Thomas Lockhart wrote:\n>...\n>> The oid correctly reflects the order of the insertion of the rows...\n>> but look at the timestamp - the last row has a timestamp _2 minutes\n>> before_ the previous row. How could this be happening? We know row\n>> 69719 was inserted _after_ 69718, by probably about 30 seconds.\n>\n>The timestamp provided as a result of evaluating 'now' is the time of\n>the start of the transaction, not the instantaneous wall clock time (if\n>you want the latter there is a function to provide it).\n>\n>So, the times will reflect the time the transaction was started, while\n>the OID will reflect the order in which the insert/update actually\n>happened within the transaction.\n\nDo postgresql backends still preallocate ranges of OIDs?\n\nLink.\n\n", "msg_date": "Fri, 01 Feb 2002 13:43:44 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: timestamp weirdness" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> Do postgresql backends still preallocate ranges of OIDs?\n\nGood point ... but in 7.1 they don't anymore. 
There's just one shared\nOID counter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 09:45:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp weirdness " }, { "msg_contents": ">...\n> > The oid correctly reflects the order of the insertion of the rows...\n> > but look at the timestamp - the last row has a timestamp _2 minutes\n> > before_ the previous row. How could this be happening? We know row\n> > 69719 was inserted _after_ 69718, by probably about 30 seconds.\n>\n>The timestamp provided as a result of evaluating 'now' is the time of\n>the start of the transaction, not the instantaneous wall clock time (if\n>you want the latter there is a function to provide it).\n>\n>So, the times will reflect the time the transaction was started, while\n>the OID will reflect the order in which the insert/update actually\n>happened within the transaction.\n>\n>hth\n>\n> - Thomas\n\nThe beginning of the transactions was definitely in the same order as \nthe OID reflects, and I'm quite sure the previous transaction was \ncompleted before the next connection was started as well.\n\nElaine\n\n", "msg_date": "Fri, 1 Feb 2002 09:37:58 -0800", "msg_from": "Elaine Lindelef <eel@cognitivity.com>", "msg_from_op": true, "msg_subject": "Re: timestamp weirdness" }, { "msg_contents": "> >The timestamp provided as a result of evaluating 'now' is the time of\n> >the start of the transaction, not the instantaneous wall clock time (if\n> >you want the latter there is a function to provide it).\n> We are now thinking that what is happening is that the first write\n> may have a long open transaction.\n> What is the function you mention to provide instantaneous time?\n\nlockhart=# select timestamp 'now', cast(timeofday() as timestamp);\n ?column? | ?column? 
\n------------------------+---------------------------\n 2002-02-01 17:01:19-08 | 2002-02-01 17:01:31.21-08\n\nhth\n\n - Thomas\n", "msg_date": "Fri, 01 Feb 2002 17:03:14 -0800", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] timestamp weirdness" } ]
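The behaviour Thomas describes — `timestamp 'now'` frozen at transaction start, while the shared OID counter advances at the moment each row is physically inserted — can be mimicked outside the database. Below is a minimal Python sketch (not PostgreSQL code; the timings and the `run_transactions` helper are invented for illustration) showing how one long-open transaction makes the timestamp column run backwards relative to OID order, exactly as in Elaine's report:

```python
import itertools

# Monotonic counter standing in for PostgreSQL's shared OID counter:
# a value is handed out at the moment each row is physically inserted.
oid_counter = itertools.count(69714)

def run_transactions(events):
    """events: list of (txn_start_time, insert_time) pairs.

    The 'date' column gets the *transaction start* time (what
    timestamp 'now' evaluates to), while the OID reflects the
    order in which the inserts actually happened."""
    rows = []
    # Inserts happen in insert_time order, whatever the start order was.
    for start, insert in sorted(events, key=lambda e: e[1]):
        rows.append({"oid": next(oid_counter),
                     "date": start,          # 'now' = txn start
                     "inserted_at": insert})
    return rows

# Session A starts at t=100 but holds its transaction open a long time;
# session B starts at t=220 and inserts quickly, before A does.
rows = run_transactions([(100, 250), (220, 230)])
for r in rows:
    print(r)
```

The output has strictly increasing OIDs but a decreasing `date` column — the second-inserted row carries the *earlier* transaction-start timestamp.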
[ { "msg_contents": "Not sure if this is the right newsgroup to use. I did not see a general \none or sql statement one for postgres. Please point me to the correct \nlocation if this is wrong and I apologize for the off topic if it is.\n\n\nI am attempting to do a select in which I force an existing \ncolumn(field) to a specific value. I need to do so in order to group \ndata properly.\n\nie:\nagi_timesheets=# select distinct \nemployee_id,first_name,last_name,date,sum(hours),\"R\" as job_code from \ntimesheet group by employee_id,first_name,last_name,date having job_code \n<> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n'01-15-2002';\nERROR: Attribute 'R' not found\n\nagi_timesheets=# select distinct \nemployee_id,first_name,last_name,date,sum(hours),job_code AS \"R\" from \ntimesheet group by employee_id,first_name,last_name,date having job_code \n<> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n'01-15-2002';\nERROR: Attribute timesheet.job_code must be GROUPed or used in an \naggregate function\n\nagi_timesheets=# select distinct \nemployee_id,first_name,last_name,date,sum(hours),job_code AS 'R' from \ntimesheet group by employee_id,first_name,last_name,date having job_code \n<> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n'01-15-2002';\nERROR: parser: parse error at or near \"'\"\n\nagi_timesheets=# select distinct \nemployee_id,first_name,last_name,date,sum(hours),job_code = 'R' from \ntimesheet group by employee_id,first_name,last_name,date having job_code \n<> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n'01-15-2002';\nERROR: Attribute timesheet.job_code must be GROUPed or used in an \naggregate function\n\n\netc. etc. etc. I have tried all possible combinations (or I think so) \nof \"R\", 'R', R using = or AS on either side of job_code. Nothing seems \nto work.\n\nSeveral of these combinations work in MySQL, Access, and Oracle. 
Or at \nleast according to those online I have spoke to they do.\n\nCan any explain to me what I am doing wrong? If this is possible in \nPostgreSQL? Or the proper way of doing this? Or even a source of \ninformation that explains it. The closest source I found was obviously \nthe psql documentation but I have yet to find a specific example of what \nI am doing.\n\nThanks.\n\n", "msg_date": "Fri, 01 Feb 2002 08:24:53 -0700", "msg_from": "DzZero <spinzero@aero-graphics.com>", "msg_from_op": true, "msg_subject": "sql select query with column 'AS' assignment" }, { "msg_contents": "DzZero wrote:\n\n> Not sure if this is the right newsgroup to use. I did not see a general \n> one or sql statement one for postgres. Please point me to the correct \n> location if this is wrong and I apologize for the off topic if it is.\n> \n> \n> I am attempting to do a select in which I force an existing \n> column(field) to a specific value. I need to do so in order to group \n> data properly.\n> \n> ie:\n> agi_timesheets=# select distinct \n> employee_id,first_name,last_name,date,sum(hours),\"R\" as job_code from \n> timesheet group by employee_id,first_name,last_name,date having job_code \n> <> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n> '01-15-2002';\n> ERROR: Attribute 'R' not found\n> \n> agi_timesheets=# select distinct \n> employee_id,first_name,last_name,date,sum(hours),job_code AS \"R\" from \n> timesheet group by employee_id,first_name,last_name,date having job_code \n> <> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n> '01-15-2002';\n> ERROR: Attribute timesheet.job_code must be GROUPed or used in an \n> aggregate function\n> \n> agi_timesheets=# select distinct \n> employee_id,first_name,last_name,date,sum(hours),job_code AS 'R' from \n> timesheet group by employee_id,first_name,last_name,date having job_code \n> <> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n> '01-15-2002';\n> ERROR: parser: parse error at or near 
\"'\"\n> \n> agi_timesheets=# select distinct \n> employee_id,first_name,last_name,date,sum(hours),job_code = 'R' from \n> timesheet group by employee_id,first_name,last_name,date having job_code \n> <> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n> '01-15-2002';\n> ERROR: Attribute timesheet.job_code must be GROUPed or used in an \n> aggregate function\n> \n> \n> etc. etc. etc. I have tried all possible combinations (or I think so) \n> of \"R\", 'R', R using = or AS on either side of job_code. Nothing seems \n> to work.\n> \n> Several of these combinations work in MySQL, Access, and Oracle. Or at \n> least according to those online I have spoke to they do.\n> \n> Can any explain to me what I am doing wrong? If this is possible in \n> PostgreSQL? Or the proper way of doing this? Or even a source of \n> information that explains it. The closest source I found was obviously \n> the psql documentation but I have yet to find a specific example of what \n> I am doing.\n> \n> Thanks.\n> \n\nBTW.If I group by job_code on the last statement I posted it does so but \nit groups them as if job_code has the orginal values in it. Also I end \nup with something like:\n\n employee_id | first_name | last_name | date | sum | ?column?\n-------------+------------+------------+------------+-----+----------\n 7 | Larry | James | 2002-01-02 | 8 | f\n\n\nI'm lost. heh\n\n", "msg_date": "Fri, 01 Feb 2002 10:24:41 -0700", "msg_from": "DzZero <spinzero@aero-graphics.com>", "msg_from_op": true, "msg_subject": "Re: sql select query with column 'AS' assignment" }, { "msg_contents": "\nOn Fri, 1 Feb 2002, DzZero wrote:\n\n> DzZero wrote:\n>\n> > Several of these combinations work in MySQL, Access, and Oracle. Or at\n> > least according to those online I have spoke to they do.\n> >\n> > Can any explain to me what I am doing wrong? If this is possible in\n> > PostgreSQL? Or the proper way of doing this? Or even a source of\n> > information that explains it. 
The closest source I found was obviously\n> > the psql documentation but I have yet to find a specific example of what\n> > I am doing.\n> >\n> > Thanks.\n> >\n>\n> BTW.If I group by job_code on the last statement I posted it does so but\n> it groups them as if job_code has the orginal values in it. Also I end\n> up with something like:\n>\n> employee_id | first_name | last_name | date | sum | ?column?\n> -------------+------------+------------+------------+-----+----------\n> 7 | Larry | James | 2002-01-02 | 8 | f\n>\n>\n> I'm lost. heh\n\nIIRC the grouping happens on the stuff from the from,\nnot from the select list. If you want to do this, you'd probably need\na subselect in the from.\n\nAs for the above, the job_code='R' is a boolean expression (is job_code\nequal to R?)\n> > agi_timesheets=# select distinct\n> > employee_id,first_name,last_name,date,sum(hours),job_code = 'R' from\n> > timesheet group by employee_id,first_name,last_name,date having job_code\n> > <> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <=\n> > '01-15-2002';\n> > ERROR: Attribute timesheet.job_code must be GROUPed or used in an\n> > aggregate function\n> >\n\nI'm not 100% sure what you're trying to get out, but maybe:\nselect employee_id, first_name, last_name, date, sum(hours), job_code\nfrom\n(select employee_id, first_name, last_name, date, hours, 'R' AS job_code\n from timesheet where job_code<>'H' and job_code<>'V' and\n date>='01-01-2002' and date<='01-15-2002'\n) group by employee_id, first_name, last_name, date, job_code;\n\n\n", "msg_date": "Fri, 1 Feb 2002 11:33:54 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: sql select query with column 'AS' assignment" }, { "msg_contents": "DzZero <spinzero@aero-graphics.com> writes:\n> agi_timesheets=# select distinct \n> employee_id,first_name,last_name,date,sum(hours),\"R\" as job_code from \n> timesheet group by employee_id,first_name,last_name,date 
having job_code \n> <> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n> '01-15-2002';\n> ERROR: Attribute 'R' not found\n\n\"R\" and 'R' are two quite different things: \"R\" is a name, 'R' is a\nliteral constant. Not sure how many of your problems stem from lack\nof understanding of this basic point, but quite a few of them do.\n\n> agi_timesheets=# select distinct \n> employee_id,first_name,last_name,date,sum(hours),job_code AS \"R\" from \n> timesheet group by employee_id,first_name,last_name,date having job_code \n> <> 'H' and job_code <> 'V' and date >= '01-01-2002' and date <= \n> '01-15-2002';\n> ERROR: Attribute timesheet.job_code must be GROUPed or used in an \n> aggregate function\n\nIsn't the error message clear enough? You need to add job_code to\nthe GROUP BY list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 15:01:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sql select query with column 'AS' assignment " } ]
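Tom's two points — `'R'` is a literal constant while `"R"` is an identifier, and every non-aggregated select-list column must appear in GROUP BY (with the row filter belonging in WHERE rather than HAVING) — suggest the working query is roughly `SELECT employee_id, first_name, last_name, date, sum(hours), 'R' AS job_code FROM timesheet WHERE job_code NOT IN ('H','V') GROUP BY employee_id, first_name, last_name, date`. As a hedged illustration of what that aggregation computes (the sample rows and the `grouped_with_constant` helper are invented, not from the thread), a hand-rolled Python equivalent:

```python
from collections import defaultdict

# Invented stand-in for a few timesheet rows.
timesheet = [
    {"employee_id": 7, "date": "2002-01-02", "hours": 4, "job_code": "T"},
    {"employee_id": 7, "date": "2002-01-02", "hours": 4, "job_code": "S"},
    {"employee_id": 9, "date": "2002-01-03", "hours": 8, "job_code": "H"},
]

def grouped_with_constant(rows):
    """GROUP BY (employee_id, date) over rows whose job_code is not
    'H' or 'V', emitting the *literal* 'R' as job_code of every
    output row -- a constant per group, not a grouped column."""
    sums = defaultdict(int)
    for r in rows:
        if r["job_code"] in ("H", "V"):   # filter rows first (WHERE, not HAVING)
            continue
        sums[(r["employee_id"], r["date"])] += r["hours"]
    return [{"employee_id": e, "date": d, "sum": s, "job_code": "R"}
            for (e, d), s in sorted(sums.items())]

print(grouped_with_constant(timesheet))
```

Because `'R'` is attached after grouping, it never needs to be in the GROUP BY list — which is why the `job_code AS "R"` variants fail while the plain literal works.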
[ { "msg_contents": "Neil,\n\nwe've looked at your patch and seems everything is fine.\nTom or Bruce, apply it for 7.3\n\nWe're looking for gist developers, but anyway, thanks for\nthis \"janitorial\" work :-)\n\nWe plan to add concurrency support for GiST, so if you\nfeel interest you're welcome !\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 1 Feb 2002 19:28:29 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] [PATCH] GiST code cleanup" } ]
[ { "msg_contents": "Last Minute AIX_FAQ patch with small corrections for current version, \nplease apply.\n\nAndreas", "msg_date": "Fri, 1 Feb 2002 18:48:54 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "[PATCHES] FAQ_AIX patch for 7.2" }, { "msg_contents": "\nDone. Docs/FAQ's go in right away.\n\n---------------------------------------------------------------------------\n\nZeugswetter Andreas SB SD wrote:\n> \n> Last Minute AIX_FAQ patch with small corrections for current version, \n> please apply.\n> \n> Andreas\n\nContent-Description: faq_aix.patch\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Feb 2002 13:23:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] FAQ_AIX patch for 7.2" } ]
[ { "msg_contents": "I am writing an ODBC driver for PostgreSQL.\n\nI am using binary cursors. For most data types, I have no problems, but\nfor a couple I am confused. If someone could point me to the proper\ndocumentation, source file or whatever, I would very much appreciate it.\n\nThe types I am having problems with are:\n1. Currency data type. The binary pointer to the object, I assumed,\nwould point to either a NumericVar type (from Numeric.c) or to a Numeric\npointer from numeric.h. Neither of those assumptions are correct, as I\nhave looked carefully while tracing in the debugger.\nSo please, where do I find the correct definition for the binary object\nhanded back from Libpq when using a binary cursor with numeric data\ntype?\n\n2. Array type objects. I cannot find a description of how to decipher\narray objects from a binary cursor. Where can I find a definition of\nhow to do this.\n\nI apologize for the intrusion, but I have been agonizing over this for\nseveral days.\n", "msg_date": "Fri, 1 Feb 2002 11:22:55 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "A couple binary cursor questions" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> The types I am having problems with are:\n> 1. Currency data type.\n\nIf you mean type MONEY, it's just an int4. (There's a reason why it's\ndeprecated...)\n\n> 2. Array type objects.\n\nSee src/include/utils/array.h\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 15:07:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple binary cursor questions " } ]
[ { "msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Friday, February 01, 2002 12:08 PM\nTo: Dann Corbit\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] A couple binary cursor questions \n\n\n\"Dann Corbit\" <DCorbit@connx.com> writes:\n> The types I am having problems with are:\n> 1. Currency data type.\n\nIf you mean type MONEY, it's just an int4. (There's a reason why it's\ndeprecated...)\n-----------------------------------------------------------------------\nSorry. Brain seizure on my part. I meant NUMERIC data type.\nAs in numeric with a precision of 800 (big whomping number types)\n-----------------------------------------------------------------------\n> 2. Array type objects.\n\nSee src/include/utils/array.h\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 1 Feb 2002 12:25:23 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: A couple binary cursor questions " }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> Sorry. Brain seizure on my part. I meant NUMERIC data type.\n\nAh. In that case see src/include/utils/numeric.h, as well as the\ncomments near the head of src/backend/utils/adt/numeric.c. Briefly,\nit's packed BCD floating point...\n\n>> 2. Array type objects.\n\n> See src/include/utils/array.h\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 16:01:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple binary cursor questions " } ]
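For readers following Tom's pointer into numeric.h/numeric.c: the essential idea is a sign, a decimal scale, and an array of digits with a weight giving the position of the first digit. The sketch below is a hedged illustration only — it assumes the base-10000 digit-array convention used by PostgreSQL's later binary send/recv format (the 7.x in-memory struct differs; check the headers Tom cites before relying on any of this), and `decode_numeric` is a hypothetical helper name:

```python
from decimal import Decimal

def decode_numeric(ndigits, weight, sign, dscale, digits):
    """Reassemble a NUMERIC from its decomposed parts into a Decimal.

    Assumed layout (base-10000 digits, as in the modern binary wire
    format): digits[i] contributes digits[i] * 10000**(weight - i);
    sign 0 means positive, 0x4000 negative; dscale is the number of
    decimal digits shown after the point.
    """
    value = Decimal(0)
    for i, d in enumerate(digits[:ndigits]):
        value += Decimal(d) * (Decimal(10000) ** (weight - i))
    if sign == 0x4000:
        value = -value
    # Round/pad to dscale fractional decimal digits.
    return value.quantize(Decimal(1).scaleb(-dscale))

# 1234567.89 decomposes as digits [123, 4567, 8900] with weight 1.
print(decode_numeric(3, 1, 0, 2, [123, 4567, 8900]))
```

An ODBC driver would read the header fields and digit array out of the binary cursor buffer and feed them to something like this; the authoritative field layout is whatever the numeric.h of the server version actually declares.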
[ { "msg_contents": "Hi all,\n\nDoes anyone know how to disable the \" \\! \" and \" \\l \" commands ?\n\nI´m using PostgreSql 7.1 on a Solaris 7.\n\nThe case is:\n\nUsers connect on another solaris through SSH with a shell developed by me in perl, and connect to the PGSQL_SERVER through psql. The problem is: when the user is on the PGSQL PROMPT and he types \" \\! /bin/sh \", he gets the /bin/sh on the server.\n\n\n\nHere´s the shell :\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n#!/usr/bin/perl\n#------------------------------------------------------------------------------\n# Variaveis\n#------------------------------------------------------------------------------\n\n$mainmenu = '/usr/local/etc/wwrent/ssh_menu.txt';\n$mysql_user_table = '/usr/local/etc/wwrent/mysql_users.txt';\n$pgsql_user_table = '/usr/local/etc/wwrent/pgsql_users.txt';\n$sendmail_dir = '/etc/mail';\n$val_dominio = '/usr/local/etc/wwrent/ssh_users.txt';\n$ENV{'SHELL'} = \"/usr/local/bin/shell.pl\";\n\n#------------------------------------------------------------------------------\n\n$myself = getlogin || getpwuid($<) || \"nobody\";\nchop($mydir = `pwd`);\nmain_loop();\n\n\n#------------------------------------------------------------------------------\n# Sub-rotinas\n#------------------------------------------------------------------------------\n\nsub main_loop{\n while (true){\n system(\"clear\");\n print_menu($mainmenu);\n chop($opcao = <STDIN>);\n $opcao =~ tr/0-9/ /cs;\n $opcao =~ s/ //g;\n if (!opcao_valida($opcao)){\n print \"Você escolheu uma opção inválida!\\n\";\n get_enter();\n }elsif ($opcao == 1){\n $mysql_user = get_mysql_user($myself);\n system \"/usr/local/mysql/bin/mysql -h172.17.0.5 -u $mysql_user -p\";\n }elsif ($opcao == 2){\n pgsql_loop();\n }elsif ($opcao == 3){\n system \"/usr/local/bin/pine -i\";\n }elsif ($opcao == 4){\n 
system \"/bin/passwd\";\n get_enter();\n }elsif ($opcao == 5){\n system \"/usr/ucb/quota -v\";\n get_enter();\n }elsif ($opcao == 6){\n exit;\n }else{\n print \"Você escolheu a opção $opcao\\n\";\n get_enter();\n }\n }\n}\n\nsub print_menu{\n $menufile = shift;\n if (-e \"$menufile\"){\n open(in, \"$menufile\");\n for $line (<in>){\n print \"$line\";\n }\n }else{\n print \"Arquivo $menufile não encontrado\\n\";\n }\n print \"Escolha uma das opções acima ---> \";\n}\n\nsub get_enter(){\n print \"+-----------------------------------------------------------+\\n\";\n print \"| APERTE ENTER PARA CONTINUAR |\\n\";\n print \"+-----------------------------------------------------------+\\n\";\n chop($lixo = <STDIN>);\n}\n\nsub opcao_valida{\n $oque = shift;\n if (\n $oque == 1 ||\n $oque == 2 ||\n $oque == 3 ||\n $oque == 4 ||\n $oque == 5 ||\n $oque == 6\n )\n {\n return 1;\n }else{\n return 0;\n }\n}\n\nsub get_mysql_user{\n my($login) = shift;\n my(@lines) = `cat $mysql_user_table`;\n my($line,$unixl,$sqll);\n for $line (@lines){\n chop($line);\n ($unixl,$sqll) = split(/:/,$line);\n if ($unixl eq $login){\n return $sqll;\n }\n }\n return $login;\n}\n\nsub get_pgsql_user{\n my($login) = shift;\n my(@lines) = `cat $pgsql_user_table`;\n my($line,$unixl,$sqll);\n for $line (@lines){\n chop($line);\n ($unixl,$sqll) = split(/:/,$line);\n if ($unixl eq $login){\n return $sqll;\n }\n }\n return $login;\n}\n\nsub get_filename{\n my($filename);\n while (true){\n print \"Digite o nome do arquivo: \";\n chop($filename = <STDIN>);\n if (!$filename){\n return 0;\n }\n $first1 = substr($filename,0,1);\n $filename =~ tr/\\./\\./s;\n if ($first1 eq '/'){\n print \"O nome do arquivo não pode começar com /\\n\";\n }elsif (-e \"$filename\"){\n return $filename;\n }else{\n print \"Arquivo $filename não encontrado\\n\";\n print \"Deseja criar esse arquivo? 
(s/n) \";\n chop($resp = <STDIN>);\n if ($resp eq 's' || $resp eq 'S'){\n return $filename;\n }\n }\n }\n}\n\nsub pgsql_loop{\n $pgsql_user = get_pgsql_user($myself);\n while(true){\n system \"clear\";\n $ENV{'PATH'} = \"$ENV{'PATH'}:/usr/local/pgsql/bin\";\n $ENV{'LD_LIBRARY_PATH'} = \"$ENV{'LD_LIBRARY_PATH'}:/usr/local/pgsql/lib\";\n print \"Escolha a opção desejada: \\n\";\n print \"\\t1 - Executar o cliente psql\\n\";\n print \"\\t2 - Sair\\n\";\n print \"\\tSua Escolha--->\";\n chop($resp = <STDIN>);\n if ($resp == 1){\n print \"\\tDigite o nome da base de dados: \";\n chop($bd = <STDIN>);\n system \"psql -h PGSQL_SERVER -U $pgsql_user $bd\";\n }elsif ($resp == 2){\n return;\n }\n get_enter();\n }\n}\n\n \n\nsub user_exists{\n my($login) = shift;\n my(@possible) = `grep $login:x /etc/passwd | cut -d: -f1`;\n for $pos (@possible){\n chop($pos);\n if ($pos eq $login){\n return 1;\n }\n }\n return 0;\n}\n#--------------- End ---------------\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nDaniel Henrique Cassela\nSupport Analist - WWRent\ncassela@wwrent.com.br\nICQ - 93631946\n", "msg_date": "Fri, 1 Feb 2002 18:35:10 -0200", "msg_from": "\"Suporte\" <Suporte@wwrent.com.br>", "msg_from_op": true, "msg_subject": "The \" \\! \" and \" \\l \" commands" }, { "msg_contents": "Suporte writes:\n\n> Does anyone know how to disable the \" \\! \" and \" \\l \" commands ?\n\nFor outright disablement, you edit the file src/bin/psql/commands.c,\nremove the portions that deal with these commands, and rebuild.\n\nBut...\n\n> Users connect on another solaris through SSH with a shell developed by\n> me in perl, and connect to the PGSQL_SERVER through psql. The problem\n> is: when the user is on the PGSQL PROMPT and he types \" \\! /bin/sh \",\n> he gets the /bin/sh on the server.\n\nYou could start the psql program with SHELL=/bin/false in the environment.\n\n(I don't see what your situation has to do with \\l.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 1 Feb 2002 16:11:22 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: The \" \\! \" and \" \\l \" commands" }, { "msg_contents": "On Fri, 2002-02-01 at 21:11, Peter Eisentraut wrote:\n\n> You could start the psql program with SHELL=/bin/false in the environment.\n\nI just experimented with that; it doesn't stop you doing \"\\! sh\". 
Do we\nneed a psql equivalent of rbash (restricted Bash shell)?\n \nYou will probably have to run psql in a severely restricted chroot\nenvironment; or tweak the code of psql to eliminate the various\nloopholes (\\!, \\g, \\o).\n\nPerhaps instead you should look into IP-tunnelling into the PostgreSQL\nserver through ssh. I think your aim should be not to run psql on the\nserver at all.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"And be not conformed to this world; but be ye \n transformed by the renewing of your mind, that ye may \n prove what is that good, and acceptable, and perfect, \n will of God.\" Romans 12:2", "msg_date": "01 Feb 2002 21:54:07 +0000", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: The \" \\! \" and \" \\l \" commands" }, { "msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> Perhaps instead you should look into IP-tunnelling into the PostgreSQL\n> server through ssh. I think your aim should be not to run psql on the\n> server at all.\n\nI agree. psql is meant to be run by the user, ie with end-user\npermissions. Trying to force it to be secure is swimming against\nthe tide. Run it on the client side with the client's permissions,\nor don't use it at all (there are plenty of alternatives...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Feb 2002 17:36:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The \" \\! \" and \" \\l \" commands " } ]
[ { "msg_contents": "Frank Wiles writes:\n\n> While I realize you're preparing for a new release, this probably\n> isn't the best time for this question; however, I noticed you have a\n> TODO item \"Add documentation for perl, including mention of DBI/DBD\n> perl location\". Is this still outstanding?\n\nYes.\n\n> If so, I'd be happy to take on this item; just let me know where you\n> want it in the documentation tree.\n\nBasically, we're looking for a way to tie in the perldoc-style documentation\nwith the DocBook sources. Once that is done, just add a sentence or three\nabout DBI/DBD. Since we neither supply nor control that driver, you\nwouldn't want to get into too many details; just tell people that it's\nthere and where.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 1 Feb 2002 15:54:40 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: TODO Perl documentation question" }, { "msg_contents": " While I realize you're preparing for a new release, this probably\n isn't the best time for this question; however, I noticed you have a \n TODO item \"Add documentation for perl, including mention of DBI/DBD \n perl location\". Is this still outstanding? 
\n\n ---------------------------------\n Frank Wiles <frank@wiles.org>\n http://frank.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 1 Feb 2002 15:28:19 -0600", "msg_from": "Frank Wiles <frank@wiles.org>", "msg_from_op": false, "msg_subject": "TODO Perl documentation question" }, { "msg_contents": " .------[ Peter Eisentraut wrote (2002/02/01 at 15:54:40) ]------\n | \n | Frank Wiles writes:\n | \n | > While I realize you're preparing for a new release, this probably\n | > isn't the best time for this question; however, I noticed you have a\n | > TODO item \"Add documentation for perl, including mention of DBI/DBD\n | > perl location\". Is this still outstanding?\n | \n | Yes.\n | \n | > If so, I'd be happy to take on this item; just let me know where you\n | > want it in the documentation tree.\n | \n | Basically, we're looking for a way to tie in the perldoc-style documentation\n | with the DocBook sources. Once that is done, just add a sentence or three\n | about DBI/DBD. Since we neither supply nor control that driver, you\n | wouldn't want to get into too many details; just tell people that it's\n | there and where.\n | \n `-------------------------------------------------\n\n There is pod2docbook on CPAN; have you looked into using that? We could\n probably use that and a small perl script to put in the proper chapter\n ids, links to DBI/DBD on CPAN, and a small blurb into the\n pod2docbook output. \n\n Is this along the lines of what you're after? \n\n ---------------------------------\n Frank Wiles <frank@wiles.org>\n http://frank.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 1 Feb 2002 16:45:17 -0600", "msg_from": "Frank Wiles <frank@wiles.org>", "msg_from_op": false, "msg_subject": "Re: TODO Perl documentation question" } ]
[ { "msg_contents": "This fixes references to 7.1.3 (I think). It also modifies the Japanese\nFAQ, so I'm not sure if that's done correctly.\n\nChris", "msg_date": "Sat, 2 Feb 2002 12:33:14 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Those doc corrections I suggested..." } ]
[ { "msg_contents": "Hi,\n\nThe problem can be reproduced by following these steps.\n\n1) initdb a new DB cluster\n\n2) connect to template1 as postgres\n\n3) CREATE USER foo WITH CREATEDB;\n\n4) \\c template1 foo\n\n5) CREATE DATABASE foo;\n\n6) \\c template1 postgres\n\n7) ALTER USER foo NOCREATEDB;\n\n8) (quit psql); pg_dumpall\n\nThe dump that is produced will attempt to re-create the database like\nso:\n\n (1) create a user 'foo' with 'nocreatedb', since that's what the\nlatest data in pg_shadow says to do\n\n (2) database 'foo' was created by user 'foo', so the next step\nis to connect as 'foo' and create the database\n\nObviously, the second step fails. This wasn't too annoying for me (as I was\njust doing development), but for, say, a corporate DBA migrating a\ncouple hundred GB of data in a production environment, it could be a\n_real_ annoyance.\n\nNow, is this a bug?\n\nPerhaps pg_dump could check the current user permissions and see if such\na contradictory situation will arise? IMHO, it is better to detect such\na condition during the dump and bail out than to create a dump we _know_\nwon't restore properly. This still seems like a kludge...\n\nMaybe we could disallow \"ALTER USER foo NOCREATEDB\" if there is an\nentry in pg_database where 'datdba' = the user's sysID. Or at the least,\nemit a warning...\n\nAnyway, I just ran into this so I figured I'd toss it out for some\ncomments. This is running RC2, BTW.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n", "msg_date": "01 Feb 2002 23:40:55 -0500", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "pg_dump: bug?" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Now, is this a bug?\n\nGood question. 
I don't think this is the only example of a\nnon-self-consistent situation that could arise after a series of\nALTER commands; I'm not sure that we can or should try to solve\nevery one.\n\nHowever, it does seem that a superuser should be able to create\ndatabases on behalf of users who can't themselves do so. So\nI'd say that we need a \"CREATE DATABASE foo WITH OWNER bar\" option.\nThen pg_dumpall should emit such critters rather than the\ncircumlocution it uses now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Feb 2002 00:29:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump: bug? " }, { "msg_contents": "Tom Lane wrote:\n\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> \n>>Now, is this a bug?\n>>\n> \n> Good question. I don't think this is the only example of a\n> non-self-consistent situation that could arise after a series of\n> ALTER commands; I'm not sure that we can or should try to solve\n> every one.\n\n\nUmmm...at some point in time, PG will need to be able to dump and \nrecreate a database no matter what the history.\n\nNo matter whether or not \"non-self-consistent situations\" occur. PG \nneeds to be able to snapshot and restore current state, whether or not \nit is a horror.\n\nOr else you might as well state that, like MySQL, the only thing to do \nis to knock down the database, tar files, and hope no one is interested \nin 24x7 uptime.\n\nWhen my clients ask about Oracle vs. PG I like to say \"PG\". They still \nmostly say \"Oracle\" and I oblige.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Fri, 01 Feb 2002 22:14:50 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump: bug?" 
}, { "msg_contents": "On Sat, 2 Feb 2002, Tom Lane wrote:\n\n> However, it does seem that a superuser should be able to create\n> databases on behalf of users who can't themselves do so. So\n> I'd say that we need a \"CREATE DATABASE foo WITH OWNER bar\" option.\n\nI have submitted a patch to enable this. From memory Bruce put it against\n7.3.\n\nGavin\n\n", "msg_date": "Sat, 2 Feb 2002 22:27:00 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump: bug? " } ]
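With the WITH OWNER clause Tom proposes (and Gavin's patch implements; it is not in 7.2), pg_dumpall could restore Neil's scenario without ever connecting as the weaker user. A sketch of what such a dump might emit instead, run entirely as the superuser; the names follow the reproduction steps above and the exact syntax is the proposal, not released behavior:

```sql
CREATE USER foo NOCREATEDB;
-- Proposed syntax: the superuser creates the database on foo's behalf,
-- so foo never needs the CREATEDB privilege during the restore.
CREATE DATABASE foo WITH OWNER foo;
```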
[ { "msg_contents": "Hi, I reinstalled my server yesterday, upgrading mainly the OS (RH 6.0 to RH 7.2), Postgres (7.2b5 -> 7.2RC2) and some hardware (RAID controller).\n\nI get around 30,000 visits/day with 2 million hits.\n\nThe OS is RH 7.2; the CPU is an Athlon 600MHz with 1.5GB SDRAM,\nand there is an Adaptec RAID controller 3200S (one array in RAID 1 - 2x18GB IBM 10,000 rpm).\n\nTop snapshot:\n------------------------------------------------------------------------------\n\n 4:41pm up 2:43, 1 user, load average: 62,01, 58,01, 48,15\n262 processes: 230 sleeping, 32 running, 0 zombie, 0 stopped\nCPU states: 15,4% user, 84,5% system, 0,0% nice, 0,0% idle\nMem: 1544452K av, 492688K used, 1051764K free, 62064K shrd, 36688K buff\nSwap: 2096472K av, 0K used, 2096472K free 171564K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n 3075 orion 18 0 14584 14M 2928 R 1,3 0,9 0:02 java\n 3073 orion 17 0 14584 14M 2928 R 1,2 0,9 0:01 java\n 3074 postgres 15 0 21864 21M 21156 R 1,2 1,4 0:02 postmaster\n 3093 orion 18 0 14584 14M 2928 R 1,2 0,9 0:01 java\n 1710 orion 17 0 77860 75M 2976 S 1,1 4,9 0:27 java\n 3000 root 17 0 1184 1184 836 R 1,1 0,0 0:06 top\n 3076 postgres 17 0 21192 20M 20516 R 1,1 1,3 0:01 postmaster\n 3095 postgres 16 0 21696 21M 20996 R 1,1 1,4 0:01 postmaster\n 2463 postgres 11 0 45404 44M 43936 S 1,0 2,9 0:29 postmaster\n 1509 orion 12 0 77812 75M 2976 R 0,9 4,9 0:28 java\n 1511 orion 11 0 77812 75M 2976 R 0,9 4,9 0:29 java\n 1585 orion 18 0 77860 75M 2976 R 0,9 4,9 0:25 java\n 2240 orion 13 0 77900 75M 65396 S 0,9 4,9 0:23 java\n 2652 orion 11 0 77900 75M 63556 S 0,9 4,9 0:13 java\n 2734 orion 20 0 77900 75M 55948 R 0,9 4,9 0:09 java\n 3031 postgres 20 0 33344 32M 32188 R 0,9 2,1 0:03 postmaster\n 3126 postgres 13 0 27988 27M 27300 S 0,9 1,8 0:00 postmaster\n 2173 orion 9 0 77900 75M 65396 S 0,8 4,9 0:18 java\n 2182 orion 12 0 77900 75M 65396 S 0,8 4,9 0:21 java\n 2184 orion 14 0 77900 75M 65396 S 0,8 4,9 0:22 java\n 2484 orion 10 0 77900 75M 63556 S 0,8 4,9 0:17 java\n 2505 orion 11 0 77900 75M 63556 S 0,8 4,9 0:11 java\n 2687 orion 10 0 77900 75M 55948 S 0,8 4,9 0:11 java\n 2886 postgres 14 0 28032 27M 26520 S 0,8 1,8 0:09 postmaster\n 2959 postgres 16 0 33128 32M 31984 R 0,8 2,1 0:04 postmaster\n------------------------------------------------------------------------------\n\nIt used to handle the load perfectly well, and now it's messy and I have no clue why.\nThe two things I have changed are the OS (kernel 2.4.7-10 now) and PostgreSQL 7.2RC2 instead of 7.2b5.\n\nI wonder why I have a system CPU state so high, leading to a very, very high load average.\nFor this snapshot I started Orion with no argument (java -jar orion.jar).\nAs for Postgres (max backends set to the default 32):\n tcpip_socket = true\n shared_buffers = 16384\n sort_mem = 4096\n wal_buffers = 2048\n wal_files = 3\n\nAttached to this mail is the output of: ps -aux | grep postgres\n\nFirst question: looking at this list of processes, how is it possible to have so many backends with max backends set to 32?\n\nThe log from postgres:\n\nDEBUG: database system is shut down\nDEBUG: database system was shut down at 2002-02-02 13:35:26 GMT\nDEBUG: checkpoint record is at 0/71A93E4\nDEBUG: redo record is at 0/71A93E4; undo record is at 0/0; shutdown TRUE\nDEBUG: next transaction id: 358386; next oid: 339529\nDEBUG: database system is ready\nDEBUG: recycled transaction log file 0000000000000007 \nERROR: Cannot insert a duplicate key into unique index eq_perso_tradeskill_pkey\nERROR: Cannot insert a duplicate key into unique index eq_perso_adv_pkey\nFATAL 1: cannot open /usr/local/pgsql/data/global/1262: Too many open files in system\nFATAL 1: cannot open /usr/local/pgsql/data/global/1262: Too many open files in system\n\n\n\nIf you have an idea of what is going on or how I could track down this issue...\n\nThx 
in advance\nBest regards", "msg_date": "Sat, 2 Feb 2002 18:26:18 +0100", "msg_from": "\"Christian Meunier\" <vchris@club-internet.fr>", "msg_from_op": true, "msg_subject": "Really weird behaviour with 7.2 RC2" }, { "msg_contents": "\"Christian Meunier\" <vchris@club-internet.fr> writes:\n> FATAL 1: cannot open /usr/local/pgsql/data/global/1262: Too many open file=\n> s in system\n> FATAL 1: cannot open /usr/local/pgsql/data/global/1262: Too many open file=\n> s in system\n\nLooks like you need to increase the size of the kernel file table.\nI forget which /proc file you need to set to do that in Linux, but\nit's surely covered in the \"kernel resources\" part of our\nAdministrator's Guide.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Feb 2002 13:16:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Really weird behaviour with 7.2 RC2 " } ]
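Tom doesn't name the file; on Linux 2.4 the kernel-wide file table limit is fs.file-max, which is what the "kernel resources" material he points to covers. A quick sketch; the 65536 below is only an arbitrary example value, not a tuning recommendation:

```shell
# Show the current kernel-wide limit on simultaneously open files:
cat /proc/sys/fs/file-max
# As root, raise it for the running kernel (example value only):
#   echo 65536 > /proc/sys/fs/file-max
# or equivalently:
#   sysctl -w fs.file-max=65536
```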
[ { "msg_contents": "Hi, I just want you guys to know how great PostgreSQL really is.\n\nAt www.dmn.com, we use PostgreSQL and Oracle. The original plan called for\nOracle, but over time systems have been scaled, and to do the scaling, we use\nPostgreSQL on Linux.\n\nWe have a music information system, using commercial database data and public\nmusic information; we handle a couple million queries a day. In addition, this\nsystem, using custom code and Postgres on two load-balanced machines, can\nidentify 200 different pieces of music a second and still handle the site's\nneeds.\n\nWe have a recommendations system which is based on PostgreSQL.\n\nOur back-end administration system is PostgreSQL.\n\nIf you get a chance, hop over to http://www.dmn.com and see a really slick\nwebsite irrevocably tied to PostgreSQL. In over a year of using PostgreSQL on\nthis site, we have never had a live database failure. \n\nIf we had used Oracle to accomplish what we have done with PostgreSQL, we would\nhave been out of business trying to pay Oracle licenses. Just thought all you\npeople who work long and hard on PostgreSQL would like to hear about a real\nlive site that has made a real commitment to PostgreSQL, and thanks for your\nefforts.\n", "msg_date": "Sat, 02 Feb 2002 18:42:45 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "FYI: How we use PostgreSQL" } ]
[ { "msg_contents": "At my old job, we used PostgreSQL exclusively. I was doing programming,\nand it was great.\n\nAt my new job I've had a chance to work with SQL Server and Oracle. I\ndon't like either of them from a SQL point of view. They just don't\ncompare (at least for what should be easy stuff, like dates). However,\nthey did have a strong point where I see PostgreSQL lacking at the\nmoment.\n\n(hint, if you read the subject, you've guessed it)\n\nE.g., Oracle 8i had a very nice management console (and from the glimpse\nof 9i I got, it's even better). You could look at things like database\nschemas via a tree view, database physical layouts, user information,\nand connection information. (I don't have it in front of me at the\nmoment, so I'm sure there is more.) In particular I was very interested\nin the ability to view not only what query a connection was doing, but\nwhich operations it did that took a long time to process (full table\nscans, etc), which one it is working on now, and how long that one will\ntake. I know a while ago someone was working on a way to get similar\nkinds of information by attaching to the backends via gdb or something\nequally dangerous/hackish/error-prone. When would such an ability be\nput into the system itself? (I believe Oracle does it through system\ntables, which I would think might be good for PostgreSQL, as it would be\nhard, and slow, to query each backend every time.)\n\nThe other ability that Oracle had that I was impressed with was the\nability to do partitioning. You could break a database up into pieces\nand put them on, say, different drives, files, whatever. This seems\nlike a good idea, and one I don't believe PostgreSQL has now. I suppose\nif you wanted to put a single table on another drive, you could move it\nand symlink it, but that sounds like another dirty hack. The other\nthing it could do was take a single table and partition it into separate\nphysical files based on ranges in a column. 
This could be used for\narchiving, for example.\n\nI know that there exist some pretty nifty third-party solutions for\ndistributed and/or replicated databases (as listed on freshmeat.net),\nbut having a separate program responsible for it seems like a bad idea\nfor maintenance.\n\nBy now you are probably saying 'If you want these features, why don't\nyou implement them?'. Well, I really wouldn't know where to begin. \nI've been on this mailing list since last July with thoughts of working\non PostgreSQL, but more than anything it's convinced me that I wouldn't\nknow where to begin. ;{ I've perused the source and even read a few of\nthe internals documents, but I still don't think I would know what really\nneeds to be done. (Not to mention that this isn't a short list of easy\nfeatures.)\n\nWhat are all your thoughts on these items? Is PostgreSQL not at a point\nwhere it should be thinking of this stuff? I know you're adding some\nnew features and tweaks to the engine still, but I think the above\nfeatures would make a lot of people more interested in PostgreSQL. Many\npeople still think that open source products are just not\nuser-friendly. I think these features would go a long way towards that.\n\n--Kevin\n", "msg_date": "Sun, 03 Feb 2002 00:25:04 -0500", "msg_from": "Kevin <TenToThe8th@yahoo.com>", "msg_from_op": true, "msg_subject": "Management tool support and scalibility" }, { "msg_contents": "Kevin writes:\n\n> E.g., Oracle 8i had a very nice management console (and from the glimpse\n> of 9i I got, it's even better). You could look at things like database\n> schemas via a tree view, database physical layouts, user information,\n> and connection information. (I don't have it in front of me at the\n> moment, so I'm sure there is more.)\n\nMost of that information is now accessible via system views[1], so the\nproblem reduces mainly to writing a GUI for that. 
In my mind, the problem\nwith writing such a GUI is that there isn't an easy choice of toolkit, and\nmost of us (active PostgreSQL developers) aren't well-versed in writing\nGUIs. Not that that's an excuse.\n\nA point aside: There's a MySQL GUI[2], which seems to be doing exactly\nwhat you have in mind.\n\n[1] http://developer.postgresql.org/docs/postgres/monitoring-stats.html\n[2] http://www.mysql.com/downloads/gui-mysqlgui.html\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 3 Feb 2002 01:04:21 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Sun, 2002-02-03 at 18:25, Kevin wrote:\n> \n> E.g., Oracle 8i had a very nice management console (and from the glimpse\n> of 9i I got, it's even better). You could look at things like database\n> schemas via a tree view, database physical layouts, user information,\n> and connection information. (I don't have it in front of me at the\n> moment, so I'm sure there is more.) In particular I was very interested\n> in the ability to view not only what query a connection was doing, but\n> which operations it did that took a long time to process (full table\n> scans, etc), which one it is working on now, and how long that one will\n> take. I know a while ago someone was working on a way to get similar\n> kinds of information by attaching to the backends via gdb or something\n> equally dangerous/hackish/error-prone. When would such an ability be\n> put into the system itself? (I believe Oracle does it through system\n> tables, which I would think might be good for PostgreSQL, as it would be\n> hard, and slow, to query each backend every time.)\n\nI believe that TOra is starting to have support for PostgreSQL now,\nalthough I haven't managed to get it working for myself yet :-)\n\nSome of the guys in our office use it for Oracle management and seem to\nthink pretty highly of it. 
I know it supports MySQL as well - can't\nwait until the PostgreSQL support is fully available.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n", "msg_date": "03 Feb 2002 21:27:02 +1300", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On 3 Feb 2002, Andrew McMillan wrote:\n\n> On Sun, 2002-02-03 at 18:25, Kevin wrote:\n> >\n> > E.g., Oracle 8i had a very nice management console (and from the glimpse\n> > of 9i I got, it's even better). You could look at things like database\n> > schemas via a tree view, database physical layouts, user information,\n> > and connection information. (I don't have it in front of me at the\n> > moment, so I'm sure there is more.) In particular I was very interested\n> > in the ability to view not only what query a connection was doing, but\n> > which operations it did that took a long time to process (full table\n> > scans, etc), which one it is working on now, and how long that one will\n> > take. I know a while ago someone was working on a way to get similar\n> > kinds of information by attaching to the backends via gdb or something\n> > equally dangerous/hackish/error-prone. When would such an ability be\n> > put into the system itself? 
(I believe Oracle does it through system\n> > tables, which I would think might be good for PostgreSQL, as it would be\n> > hard, and slow, to query each backend every time.)\n>\n> I believe that TOra is starting to have support for PostgreSQL now,\n> although I haven't managed to get it working for myself yet :-)\n\ndoes it require KDE/Gnome stuff ?\n\n>\n> Some of the guys in our office use it for Oracle management and seem to\n> think pretty highly of it. I know it supports MySQL as well - can't\n> wait until the PostgreSQL support is fully available.\n>\n> Regards,\n> \t\t\t\t\tAndrew.\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 3 Feb 2002 11:39:19 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Sun, 2002-02-03 at 21:39, Oleg Bartunov wrote:\n> > I believe that TOra is starting to have support for PostgreSQL now,\n> > although I haven't managed to get it working for myself yet :-)\n> \n> does it require KDE/Gnome stuff ?\n\nLib QT, but not any KDE, AFAIC tell.\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n", "msg_date": "03 Feb 2002 21:51:44 +1300", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Sun, Feb 03, 2002 at 09:27:02PM 
+1300, Andrew McMillan wrote:\n> I believe that TOra is starting to have support for PostgreSQL now,\n\nIt does, through qt3.\n\n> although I haven't managed to get it working for myself yet :-)\n\nHow about using Debian GNU/Linux? There's a tora package available. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 3 Feb 2002 10:57:42 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Sun, Feb 03, 2002 at 11:39:19AM +0300, Oleg Bartunov wrote:\n> does it require KDE/Gnome stuff ?\n\nOnly QT3.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 3 Feb 2002 12:32:03 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "This email demonstrates how important GUIs are for database users. It seems \nlike users are judging PostgreSQL on the ability to create, view and modify \nPostgreSQL objects.\n\nDave Page wrote a very nice GUI called pgAdmin II \n(http://pgadmin.postgresql.org).\n\nIt gives access to all PostgreSQL features. It is a must-have, especially if \nyou want to use PostgreSQL 7.2 and its CREATE OR REPLACE FUNCTION, which is \nsupported.\n\nI would also like to take the opportunity to point out (again) how important \nCREATE OR REPLACE VIEW and CREATE OR REPLACE TRIGGER are on the to-do list. In \naddition, we would like to have a better CREATE TABLE AS with a choice of \npreserving/dropping linked objects (primary key, triggers, rules) and \nhopefully an ALTER TABLE ALTER COLUMN clause.\n\nWhy not concentrate on these very simple features before going further? 
This \nwould bring a bunch of people from beginner tools (MySQL) as well as advanced \nones (Oracle, MS SQL Server) to PostgreSQL.\n\nBest regards,\nJean-Michel POURE\n", "msg_date": "Mon, 4 Feb 2002 08:33:56 +0100", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Sun, 2002-02-03 at 11:57, Michael Meskes wrote:\n\n> How about using Debian GNU/Linux? There's a tora package available. :-)\n\nTried, and it looks great, but I've only managed to use it with MySql,\nsince the Debian package libqt3-pgsql is not available. Anyone knows\nwhy?\n\nI will try to later to build the package myself or to use tora via ODBC.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n", "msg_date": "05 Feb 2002 16:00:51 +0200", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Tue, Feb 05, 2002 at 04:00:51PM +0200, Alessio Bragadini wrote:\n> On Sun, 2002-02-03 at 11:57, Michael Meskes wrote:\n> \n> > How about using Debian GNU/Linux? There's a tora package available. :-)\n> \n> Tried, and it looks great, but I've only managed to use it with MySql,\n> since the Debian package libqt3-pgsql is not available. Anyone knows\n> why?\n\nlibqt3-psql is available...I think that's it. It's in non-us. I\nhaven't connected to a PG db with it yet, though. 
\n \n> I will try to later to build the package myself or to use tora via ODBC.\n\nKen Kennedy\t| http://www.kenzoid.com\t| kenzoid@io.com", "msg_date": "Tue, 5 Feb 2002 11:08:29 -0500", "msg_from": "kkennedy@kenzoid.com (Ken Kennedy)", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Tue, Feb 05, 2002 at 04:00:51PM +0200, Alessio Bragadini wrote:\n> On Sun, 2002-02-03 at 11:57, Michael Meskes wrote:\n> \n> > How about using Debian GNU/Linux? There's a tora package available. :-)\n> \n> Tried, and it looks great, but I've only managed to use it with MySql,\n> since the Debian package libqt3-pgsql is not available. Anyone knows\n> why?\n\nWhere did you guys find the Debian TOra package? I've hunted around and\ncan't seem to find it.\n\nRoss\n", "msg_date": "Tue, 5 Feb 2002 12:04:28 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Tue, Feb 05, 2002 at 12:04:28PM -0600, Ross J. Reedstrom wrote:\n> > Tried, and it looks great, but I've only managed to use it with MySql,\n> > since the Debian package libqt3-pgsql is not available. Anyone knows\n> > why?\n\nSorry, I missed this one. libqt3-psql as all the other PostgreSQL stuff is\navailable only via non-US. Just look at\nhttp://nonus.debian.org/debian/pool/non-US/main/libq/libqt3-psql/\n\n> Where did you guys find the Debian TOra package? I've hunted around and\n> can't seem to find it.\n\nIt's not in testing aka woody yet. You can only get it from unstable aka sid\nunder pool/main/t/tora.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n", "msg_date": "Tue, 5 Feb 2002 19:29:26 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "Michael Meskes writes:\n\n> Sorry, I missed this one. libqt3-psql as all the other PostgreSQL stuff is\n> available only via non-US.\n\nWhy?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 6 Feb 2002 00:21:17 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" }, { "msg_contents": "On Wed, 2002-02-06 at 18:21, Peter Eisentraut wrote:\n> Michael Meskes writes:\n> \n> > Sorry, I missed this one. libqt3-psql as all the other PostgreSQL stuff is\n> > available only via non-US.\n> \n> Why?\n\nIt links against encryption libraries - US export regulations.\n\nCheers,\n\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n", "msg_date": "07 Feb 2002 07:41:38 +1300", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: Management tool support and scalibility" } ]
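The system views Peter points to already expose the per-connection information Kevin misses from Oracle's console. For instance, assuming a 7.2 server with the statistics collector running and stats_command_string enabled, each backend's current query is one SELECT away (column names as of 7.2):

```sql
-- One row per backend: database, backend PID, user, and running query.
-- current_query stays blank unless stats_command_string is enabled.
SELECT datname, procpid, usename, current_query
FROM pg_stat_activity;
```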
[ { "msg_contents": ">> Hi, I just want you guys to know how great PostgreSQL really is.\n\nAgreed! We have a similar success story with our client Flower.com. This\nsystem is completely based on PostgreSQL, and although we are generally\ndoing simpler queries on the frontend site - PostgreSQL backs all of\nour CRM and Analytical apps without a hitch.\n\nPostgres has saved our client's money and it keeps morale up at our\ncompany because everyone enjoys Postgres so much.\n\nExcellent Work Postgres Dev Team!\n\nBTW, Mark - we have been using your msessions on our Linux cluster - it's\ngreat; keep up the good work as well!\n\nSincerely,\n\nRyan Mahoney\nCTO Payment Alliance, Inc.\n\n\n", "msg_date": "Sun, 3 Feb 2002 14:49:36 -0500 (EST)", "msg_from": "<ryan@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Re: FYI: How we use PostgreSQL" }, { "msg_contents": "
(App programmers shouldn't have to worry about referential\nintegrity, etc.).\n\nWe run 7.1.3 on both servers, and are going to go to 7.2 in Australia when\nit comes out. If it's a nice, stable release (which it sure looks like it's\ngoing to be!) it will be rolled out in the USA as well.\n\nThe things that we need most from postgres in the future is better schema\nmanipulation. I know the standard response to questions like 'why can't I\ndrop a column' and 'why can't I change a column to be NULL' is usually\n\"design your schema right in the first place\", however our industry moves\nvery quickly and our initial schemas always need some refinement as new\nfeatures are added.\n\nI can't emphasise enough how great Postgres has been for us!!! (I've\nconverted all my MySQL friends :) )\n\nChris\n\n", "msg_date": "Mon, 4 Feb 2002 09:14:24 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: FYI: How we use PostgreSQL" } ]
[ { "msg_contents": "Try pgAdmin II (http://pgadmin.postgresql.org). It runs on Windows but has\nplenty of features. It's main limitation is that it can only do things that\nare possible via ODBC so you can't control the Postmaster or configure the\nserver with it (yet).\n\nRegards, Dave.\n\n> -----Original Message-----\n> From: Kevin [mailto:TenToThe8th@yahoo.com] \n> Sent: 03 February 2002 05:25\n> To: PGSQL Hackers\n> Subject: [HACKERS] Management tool support and scalibility\n> \n> \n> At my old job, we used PostgreSQL exclusively. I was doing \n> programming, and it was great.\n> \n> At my new job I've had a chance to work with SQL Server and \n> Oracle. I don't like either of them from a SQL point of \n> view. They just don't compare (at least for what should be \n> easy stuff, like dates). However, they did have a strong \n> point where I see PostgreSQL lacking at the moment.\n> \n> (hint, if you read the subject, you've guessed it)\n> \n> E.g., Oracle 8i had a very nice management console (and from \n> the glimpse of 9i I got, it's even better). You could look \n> at things like database schemas via a tree view, database \n> physical layouts, user information, and connection \n> information. (I don't have it in front of me at the moment, \n> so I'm sure there is more.) In particular I was very \n> interested in the ability to view not only what query a \n> connection was doing, but which operations it did that took a \n> long time to process (full table scans, etc), which one it is \n> working on now, and how long that one will take. I know a \n> while ago someone was working on a way to get similar kinds \n> of information by attaching to the backends via gdb or \n> something equally dangerous/hackish/error-prone. When would \n> such an ability be put into the system itself? 
(I believe \n> Oracle does it through system tables, which I would think \n> might be good for PostgreSQL, as it would be hard, and slow, \n> to query each backend every time.)\n> \n> The other ability that Oracle had that I was impressed with \n> was the ability to do partitioning. You could break a \n> database up into pieces and put them on, say, different \n> drives, files, whatever. This seems like a good idea, and \n> one I don't believe PostgreSQL has now. I suppose if you \n> wanted to put a single table on another drive, you could move \n> it and symlink it, but that sounds like another dirty hack. \n> The other thing it could do was take a single table and \n> partition it into separate physical files based on ranges in \n> a column. This could be used for archiving, for example.\n> \n> I know that there exist some pretty nifty third party \n> solutions for distributed and/or replicated databases (as \n> listed on freshmeat.net), but having a separate program \n> responsible for it seems like a bad idea for maintenance.\n> \n> By now you are probably saying 'If you want these features, \n> why don't you implement them?'. Well, I really wouldn't know \n> where to begin. \n> I've been on this mailing list since last July with thoughts \n> of working on PostgreSQL, but more than anything it's \n> convinced me that I wouldn't know where to begin. ;{ I've \n> purused the source and even read a few of the interals \n> documents, but I still don't think I would know what really \n> needs to be done. (Not to mention that this isn't a short \n> list of easy\n> features.)\n> \n> What are all your thoughts on these items? Is PostgreSQL not \n> at a point where it should be thinking of this stuff? I know \n> you're adding some new features and tweaks to the engine \n> still, but I think the above features would make alot of \n> people more interested in PostgreSQL. Many people still \n> think that open source products are just not user-friendly. 
\n> I think these features would go a long way towards that.\n> \n> --Kevin\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> majordomo@postgresql.org\n> \n", "msg_date": "Sun, 3 Feb 2002 20:43:29 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Management tool support and scalibility" } ]
[ { "msg_contents": "\n> > > Syntax error on line 222 of\n> > /export/home/rajkumar/apache/conf/httpd.conf:\n> > > Cannot load /export/home/rajkumar/apache/libexec/libphp4.so into\n> > server: ld.so.1: > /export/home/rajkumar/apache/bin/httpd: fatal:\n> > relocation error: file /usr/local/lib/libpq.so.2: symbol main:\n> > >referenced symbol not found\n> \n> This looks more like a problem in the libphp link process \n> and/or the httpd\n> dynamic loading process.\n\nI have seen a similar failure on AIX recently. It seems that \nphp (and tcl, which was my case) use a somewhat different way to load \nthe shared libs than PostgreSQL does.\n\nWhile PostgreSQL does not care, they need a shared lib that was linked \nwith a switch to specify no entry (like -bnoentry on AIX).\nMaybe that has to do with our RTLD_LAZY ?\n\nAndreas\n", "msg_date": "Mon, 4 Feb 2002 09:49:07 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: libpq - main symbol unresolved." } ]
[ { "msg_contents": "\n... can a few of you go take a peak and let me know if anything is\nwrong/missing?\n\n", "msg_date": "Mon, 4 Feb 2002 10:28:18 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.2 rolled last night ..." }, { "msg_contents": "On Mon, 4 Feb 2002, Marc G. Fournier wrote:\n\n>\n> ... can a few of you go take a peak and let me know if anything is\n> wrong/missing?\n\nAppears complete and builds ok.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 4 Feb 2002 09:50:24 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "> ... can a few of you go take a peak and let me know if anything is\n> wrong/missing?\n\nJust a quick report: looks good and all regression tests passed on my\nLinux (variant of RedHat 6.2).\n--\nTatsuo Ishii\n", "msg_date": "Tue, 05 Feb 2002 00:12:09 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "El lun, 04-02-2002 a las 08:28, Marc G. Fournier escribi�:\n> \n> ... can a few of you go take a peak and let me know if anything is\n> wrong/missing?\n\nI can see in the /pub/beta directory:\npostgresql-7.2.tar.gz\n\nWhen will you release it? 
so I could send a message to the PostgreSQL\nlist in Spanish and to cofradia.org, giving the good news.\n\n-- \nSaludos,\n\nRoberto Andrade Fonseca\nrandrade@abl.com.mx\n", "msg_date": "04 Feb 2002 09:43:59 -0600", "msg_from": "Roberto Andrade Fonseca <randrade@abl.com.mx>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> ... can a few of you go take a peak and let me know if anything is\n> wrong/missing?\n\nLooks like you didn't insert a tag into the CVS repository?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Feb 2002 10:51:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ... " }, { "msg_contents": "Looks good and all checks out on OBSD intel / sparc.\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Mon, 4 Feb 2002 12:19:46 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "Also checks out on Solaris8/Sparc.\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Mon, 4 Feb 2002 13:14:21 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "\nd'oh, knew I missed a step :(\n\ntag'ng that now ... thanks ...\n\n\nOn Mon, 4 Feb 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > ... 
can a few of you go take a peek and let me know if anything is\n> > wrong/missing?\n>\n> Looks like you didn't insert a tag into the CVS repository?\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Mon, 4 Feb 2002 15:05:31 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.2 rolled last night ... " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n\n> ... can a few of you go take a peek and let me know if anything is\n> wrong/missing?\n\nAll regression tests are successful on Red Hat Linux with current\nupdates (as long as the locale is set to \"C\"; when different, flaws in\nthe test design show up).\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "04 Feb 2002 18:13:25 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "teg@redhat.com (Trond Eivind Glomsrød) writes:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> \n> > ... can a few of you go take a peek and let me know if anything is\n> > wrong/missing?\n> \n> All regression tests are successful on Red Hat Linux with current\n> updates \n\nRHL 7.2 (bah, forgot to type the version number. It's included in the\nentry in the regression database, though).\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "04 Feb 2002 18:21:43 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "> On Mon, 4 Feb 2002, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > ... can a few of you go take a peek and let me know if anything is\n> > > wrong/missing?\n\nAre you going to wrap a new package or to release it as is?\n\nI'm going to give a presentation at a big exhibition in Japan the day\nafter tomorrow. 
I just want to know 7.2 has been officially released\nor not by the time...\n--\nTatsuo Ishii\n", "msg_date": "Tue, 05 Feb 2002 12:34:13 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ... " }, { "msg_contents": "Tatsuo Ishii writes:\n\n> Are you going to wrap a new package or to release it as is?\n>\n> I'm going to give a presentation at a big exhibition in Japan the day\n> after tommorow. I just want to know 7.2 has been officially released\n> or not by the time...\n\nWell, there's a CVS tag and the release notes have a release date of\nFebruary 4th, so I consider it a release. Maybe it's supposed to be a\nsecret?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 4 Feb 2002 23:30:20 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ... " }, { "msg_contents": "Hello,\n\n> ... can a few of you go take a peak and let me know if anything is\n> wrong/missing?\n\nmake -j 2\ndoes not work.\n\n--\nDenis\n", "msg_date": "Tue, 5 Feb 2002 19:55:40 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "Builds fine on alphaev67-dec-osf4.0g, compiled by cc -std\n(Compaq Tru64 4.0g, Compaq cc)\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n", "msg_date": "05 Feb 2002 16:58:02 +0200", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "Denis Perchine writes:\n\n> Hello,\n>\n> > ... 
can a few of you go take a peak and let me know if anything is\n> > wrong/missing?\n>\n> make -j 2\n> does not work.\n\nYou should know better than to make unsupported claims of \"does not work\".\nFWIW, it \"does work\" here.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 5 Feb 2002 10:29:25 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tatsuo Ishii writes:\n>> Are you going to wrap a new package or to release it as is?\n\n> Well, there's a CVS tag and the release notes have a release date of\n> February 4th, so I consider it a release. Maybe it's supposed to be a\n> secret?\n\nNo, it's supposed to be a release ;-). I think the only reason Marc\ndidn't put out an announcement yet is that he likes to give the mirrors\na full day to get sync'd up before people start hitting the ftp servers.\nI expect to see an announce come by any minute now (right Marc?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Feb 2002 10:53:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ... " }, { "msg_contents": "On Tuesday 05 February 2002 21:29, Peter Eisentraut wrote:\n> Denis Perchine writes:\n> > Hello,\n> >\n> > > ... can a few of you go take a peak and let me know if anything is\n> > > wrong/missing?\n> >\n> > make -j 2\n> > does not work.\n>\n> You should know better than to make unsupported claims of \"does not work\".\n> FWIW, it \"does work\" here.\n\nparallel make errors are mostly timing dependent. Here is the example. 
RH 6.2\n\n[ec@linux03 postgresql-7.2]$ make -j 2\nmake -C doc all\nmake[1]: Entering directory `/home/ec/1/postgresql-7.2/doc'\ngzip -d -c man.tar.gz | /bin/tar xf -\ngzip -d -c man.tar.gz | /bin/tar xf -\nfor file in man1/*.1; do \\\n mv $file $file.bak && \\\n sed -e 's/\\\\fR(l)/\\\\fR(7)/' $file.bak >$file && \\\n rm $file.bak || exit; \\\ndone\nfor file in man1/*.1; do \\\n mv $file $file.bak && \\\n sed -e 's/\\\\fR(l)/\\\\fR(7)/' $file.bak >$file && \\\n rm $file.bak || exit; \\\ndone\nrm: cannot remove `man1/createlang.1.bak': No such file or directory\nmake[1]: *** [man1/.timestamp] Error 1\nmake[1]: *** Waiting for unfinished jobs....\nmake[1]: *** Waiting for unfinished jobs....\nmake[1]: *** Waiting for unfinished jobs....\nmake[1]: Leaving directory `/home/ec/1/postgresql-7.2/doc'\nmake: *** [all] Error 2\n", "msg_date": "Tue, 5 Feb 2002 21:58:53 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "Denis Perchine writes:\n\n> [ec@linux03 postgresql-7.2]$ make -j 2\n> make -C doc all\n> make[1]: Entering directory `/home/ec/1/postgresql-7.2/doc'\n> gzip -d -c man.tar.gz | /bin/tar xf -\n> gzip -d -c man.tar.gz | /bin/tar xf -\n> for file in man1/*.1; do \\\n> mv $file $file.bak && \\\n> sed -e 's/\\\\fR(l)/\\\\fR(7)/' $file.bak >$file && \\\n> rm $file.bak || exit; \\\n> done\n> for file in man1/*.1; do \\\n> mv $file $file.bak && \\\n> sed -e 's/\\\\fR(l)/\\\\fR(7)/' $file.bak >$file && \\\n> rm $file.bak || exit; \\\n> done\n> rm: cannot remove `man1/createlang.1.bak': No such file or directory\n\nI see. We had fixed one case of these flawed multiple-target rules, but I\nguess there are more. I've identified some other places that could cause\nsimilar problems. 
Expect a fix in the next release.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 5 Feb 2002 17:34:48 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "Hello,\n\nAnother problem with 7.2.\n\nI have considered to migrate from 7.1.3 to 7.2. I have dumped a database using\npg_dump -Fc. When I tried to restore it using pg_restore, it gives me an \nerror. The problem was that it creates a view before a table it refers to.\n\nIf you need more info, just ask,\n\n--\nDenis\n\n", "msg_date": "Wed, 6 Feb 2002 15:38:00 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." }, { "msg_contents": "At 03:38 PM 2/6/02 +0600, Denis Perchine wrote:\n>Hello,\n>\n>Another problem with 7.2.\n>\n>I have considered to migrate from 7.1.3 to 7.2. I have dumped a database\nusing\n>pg_dump -Fc. When I tried to restore it using pg_restore, it gives me an \n>error. The problem was that it creates a view before a table it refers to.\n\nTry dumping it with the 7.2 version of pg_dump, if possible. From memory\nthis ordering problem was fixed in a 7.1 patch, but if you are really on\n7.1.3, that seems unlikely.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 06 Feb 2002 20:59:20 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." } ]
[ { "msg_contents": "First just let you know i solved my past issue, i used IPCHAINS to redirect port 80 to a non root port, look like there is some ressource system leaking in IPCHAINS, i moved from it to IPTABLE and all works fine now.\n\nBack to postgres :)\n\nlet's say i have one table with the following schema:\n\nTable test(num int not null primary key,.........);\n i have a sequence to handle the num: create sequence test_num_seq;\n\nWhen i insert a new tuple i use: insert into test values(nextval('test_num_seq'),......);\n\nWhen i look at my postgres log, time to time i see:\n2002-02-04 08:11:47 ERROR: Cannot insert a duplicate key into unique index test_pkey\n\nIt's really rare (considering the traffic i handle) but it shouldnt happen at all.\n\nHa using 7.2RC2 btw\nBest regards\n\n\n\n\n\n\n\nFirst just let you know i solved my past issue, i \nused IPCHAINS to redirect port 80 to a non root port, look like there is some \nressource system leaking in IPCHAINS, i moved from it to IPTABLE and all works \nfine now.\n \nBack to postgres :)\n \nlet's say i have one table with the following \nschema:\n \nTable test(num int not null primary \nkey,.........);\n i have a sequence to handle the num: \ncreate sequence test_num_seq;\n \nWhen i insert a new tuple i use: insert into test \nvalues(nextval('test_num_seq'),......);\n \nWhen i look at my postgres log, time to time i \nsee:\n2002-02-04 08:11:47 ERROR:  Cannot insert a \nduplicate key into unique index test_pkey\n \nIt's really rare (considering the traffic i handle) \nbut it shouldnt happen at all.\n \nHa using 7.2RC2 btw\nBest regards", "msg_date": "Mon, 4 Feb 2002 17:06:43 +0100", "msg_from": "\"Christian Meunier\" <vchris@club-internet.fr>", "msg_from_op": true, "msg_subject": "Bug with sequence ?" 
}, { "msg_contents": "\"Christian Meunier\" <vchris@club-internet.fr> writes:\n> When i insert a new tuple i use: insert into test values(nextval('test_num_=\n> seq'),......);\n> When i look at my postgres log, time to time i see:\n> 2002-02-04 08:11:47 ERROR: Cannot insert a duplicate key into unique index=\n> test_pkey\n\nYou sure that's the *only* way that the num column gets inserted or\nupdated?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Feb 2002 11:21:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug with sequence ? " }, { "msg_contents": "Well correcting the report:\n\nby the past, i was used to have this trouble ( using 7.2b5 or prior and\nipchains)\n\nthe only 2 occurences i have now in my log are not linked to sequence so it\nmust occur sometime somehow but it has nothing to do with postgres.\n\nHowever by the past, i had this trouble with sequence, i dont know if i get\nrid off this issue updating to 7.2RC2 or moving to IPTABLE instead of\nipchains.\n\n\n\n\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Christian Meunier\" <vchris@club-internet.fr>\nCc: <pgsql-hackers@postgresql.org>\nSent: Monday, February 04, 2002 5:21 PM\nSubject: Re: [HACKERS] Bug with sequence ?\n\n\n> \"Christian Meunier\" <vchris@club-internet.fr> writes:\n> > When i insert a new tuple i use: insert into test\nvalues(nextval('test_num_=\n> > seq'),......);\n> > When i look at my postgres log, time to time i see:\n> > 2002-02-04 08:11:47 ERROR: Cannot insert a duplicate key into unique\nindex=\n> > test_pkey\n>\n> You sure that's the *only* way that the num column gets inserted or\n> updated?\n>\n> regards, tom lane\n>\n\n", "msg_date": "Mon, 4 Feb 2002 19:13:27 +0100", "msg_from": "\"Christian Meunier\" <vchris@club-internet.fr>", "msg_from_op": true, "msg_subject": "Re: Bug with sequence ? " } ]
[ { "msg_contents": "\"Dave Cramer\" <davec@fastcrypt.com> writes:\n> Mine manifests with the sequence being updated, and then another process\n> comes along and tries to do the same operation and somehow gets the same\n> sequence.\n\nAll I can say is I've been over the sequence code with a flinty eye,\nand I'm darned if I can see anything wrong with it. I don't think\nwe are going to make much progress on this without being able to\ninvestigate a failure with a debugger.\n\nA self-contained example that shows a failure (even if only rarely)\nwould be really useful. Anything less really isn't useful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Feb 2002 12:29:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bug with sequence ? " } ]
[ { "msg_contents": "Last week I said:\n> Hmm ... what that says is that unlinking pg_internal.init in\n> setRelfilenode is the wrong place. The right place is *after*\n> committing your transaction and *before* sending shared cache inval\n> messages. You can't unlink before you commit, or someone may rebuild\n> using the old information. (A backend that's already logged into the\n> PROC array when you send SI inval will find out about the changes via SI\n> inval. One that is not yet logged in must be prevented from reading the\n> now-obsolete pg_internal.init file. The startup sequence logs into PROC\n> before trying to read pg_internal.init, so that part is done in the\n> right order.) So we need a flag that will cause the unlink to happen at\n> the right time in post-commit cleanup.\n\nThis is correct as far as it goes, but it only prevents one possible\nfailure scenario, namely a new backend reading an obsolete\npg_internal.init file and then not seeing any SI update messages to\nforce it to update the obsolete relcache entries.\n\nIt occurs to me that there's a second failure scenario, which is that\npg_internal.init is already gone and there is someone busy constructing\na new init file (using now-obsolete information) at the time you commit\nwhatever you've been doing to the catalogs. Your attempt to unlink\npg_internal.init will not help, since it doesn't exist anyway (the other\nguy is writing a temp file name, not pg_internal.init). After the other\nguy finishes creating pg_internal.init, he'll see your SI updates and\nupdate his own cache entries --- but now pg_internal.init is out there\nwith bad info, and subsequently-started backends will take it as gospel.\n\nTo make this all work reliably, I think we need the following mechanism:\n\n1. A backend preparing a new pg_internal.init must follow this\nprocedure:\n\ta. Write the file using a temp file name, same as now.\n\tb. 
Grab an LWLock (a new lock that will be used only to protect\n\t pg_internal.init creation).\n\tc. Check to see if any SI messages have come in and not been\n\t processed; if so, give up and delete the temp file. The\n\t next backend to start will have to try again.\n\td. Rename the temp file into place.\n\te. Release the LWLock.\n\n2. The committer of catalog changes that affect system relcache entries\nmust attempt to unlink pg_internal.init *both* before and after sending\nhis post-commit SI update messages. He must grab the LWLock mentioned\nabove while doing the second of these unlinks.\n\nThe lock is needed to prevent the case where a committer sends SI\nmessages and then (fails to) unlink pg_internal.init between steps c\nand d of someone else's pg_internal.init preparation process.\n\nComments? Can anyone see any remaining race conditions?\n\nBTW, rather than launching this in the present ad-hoc places, I'm\ninclined to set things up so that inval.c sets the flag for post-commit \npg_internal.init unlink anytime it decides to queue a SI relcache inval\nmessage for a system relation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Feb 2002 14:56:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Unlinking pg_internal.init, take 2" } ]
[ { "msg_contents": "Hi,\n\nwe've got a message from Poul about problem with our contrib modules\ntsearch and I dont' know yet if this could be really tsearch problem,\nit's time to go to the bed :-) Could be this problem related to the\npartial index he used ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Mon, 04 Feb 2002 23:37:49 +0100\nFrom: Poul L. Christiansen <poulc@cs.auc.dk>\nTo: teodor@stack.net, oleg@sai.msu.su\nSubject: contrib/tsearch in postgresql 7.2\n\nHi\n\nI'm testing your tsearch module for postgresql 7.2 (just downloaded the\nfinal release a few hours ago) and I get this error:\n\nERROR: MemoryContextAlloc: invalid request size 0\n\nwhen I create an index on an txtidx column.\n\nSQL:\ncreate index article_articletextftinon12k88 on article using\ngist(indexarticletext) where paperid=1000 and\nchar_length(rawarticletext) <12880;\n\nAnd I have located the reason for this error: If the field I populated\nthe txtidx column from is greater than 12850 it get the error. When the\nsize is less than 12850, it works fine.\n\nI'm using PostgreSQL on Mandrake 8.1, with Linux kernel 2.4.17\n\nThe error also appears on PostgreSQL 7.2beta4 on Linux 2.2\n\nIs there really a 12850 byte size limitation ? If there is, it should be\ndocumented ;)\n\nPoul L. 
Christiansen\n\n", "msg_date": "Tue, 5 Feb 2002 01:57:29 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "contrib/tsearch in postgresql 7.2 (fwd)" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> we've got a message from Poul about problem with our contrib modules\n> tsearch and I dont' know yet if this could be really tsearch problem,\n> it's time to go to the bed :-) Could be this problem related to the\n> partial index he used ?\n\nSeems unlikely. But in any case, you'd be more likely to get help\nby giving a complete script for reproducing the problem. I'm not\nfeeling like reverse-engineering Poul's table schema and then guessing\nat his data too...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Feb 2002 18:14:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/tsearch in postgresql 7.2 (fwd) " }, { "msg_contents": "On Mon, 4 Feb 2002, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > we've got a message from Poul about problem with our contrib modules\n> > tsearch and I dont' know yet if this could be really tsearch problem,\n> > it's time to go to the bed :-) Could be this problem related to the\n> > partial index he used ?\n>\n> Seems unlikely. But in any case, you'd be more likely to get help\n> by giving a complete script for reproducing the problem. 
I'm not\n> feeling like reverse-engineering Poul's table schema and then guessing\n> at his data too...\n>\n> \t\t\tregards, tom lane\n\nYes, the problem is persistent without a partial index, he said.\nI have already asked him for a test suite.\n\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 5 Feb 2002 02:18:36 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: contrib/tsearch in postgresql 7.2 (fwd) " }, { "msg_contents": "We found the problem and will submit a patch for 7.2.1.\nWe were confused by 'index toasting' and thought the index was really toasted,\nbut actually it's only compressed. So, we have to roll back our\nchanges for intarray and tsearch :-( We were happy to make the gist index\nnon-lossy, but that's not the case for long data. Probably it'd be\npossible to define at compile time whether the index will be lossy.\n\nTom, is index compression a temporary solution? Do you intend\nto implement real toasting?\n\n\tOleg\n\nOn Mon, 4 Feb 2002, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > we've got a message from Poul about problem with our contrib modules\n> > tsearch and I dont' know yet if this could be really tsearch problem,\n> > it's time to go to the bed :-) Could be this problem related to the\n> > partial index he used ?\n>\n> Seems unlikely. But in any case, you'd be more likely to get help\n> by giving a complete script for reproducing the problem. 
I'm not\n> feeling like reverse-engineering Poul's table schema and then guessing\n> at his data too...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 5 Feb 2002 13:56:39 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: contrib/tsearch in postgresql 7.2 (fwd) " } ]
[ { "msg_contents": "I re-wrote RServ.pm to C, and wrote a replication daemon. It works, but it\nworks like the whole rserv project. I don't like it.\n\nOK, what the hell do we need to do to get PostgreSQL replicating?\n", "msg_date": "Mon, 04 Feb 2002 19:10:32 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Replication" }, { "msg_contents": "On Mon, 4 Feb 2002, mlw wrote:\n\nI've developed a replacement for Rserv and we are planning on releasing \nit as open source (i.e. as a contrib module). \n\nLike Rserv it's trigger based, but it's much more flexible.\nThe key advantages it has over Rserv are:\n-Support for multiple slaves\n-It preserves transactions while doing the mirroring, i.e. if rows A and B are \noriginally added in the same transaction they will be mirrored in the same \ntransaction.\n\nWe have plans on adding filtering based on data/selective mirroring as \nwell (i.e. only rows with COUNTRY='Canada' go to \nslave A, and rows with COUNTRY='China' go to slave B).\nBut I'm not sure when I'll get to that.\n\nSupport for conflict resolution (if edits are allowed on the slaves) \nwould be nice.\n\nI hope to be able to send a tarball with the source to the pgpatches list \nwithin the next few days.\n\nWe've been using the system operationally for a number of months and have\nbeen happy with it.\n\n> I re-wrote RServ.pm to C, and wrote a replication daemon. It works, but it\n> works like the whole rserv project. I don't like it. \n> OK, what the hell do we need to do to get PostgreSQL replicating?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \nSteven Singer ssinger@navtechinc.com\nAircraft Performance Systems Phone: 519-747-1170 ext 282\nNavtech Systems Support Inc. 
AFTN: CYYZXNSX SITA: YYZNSCR\nWaterloo, Ontario ARINC: YKFNSCR\n\n", "msg_date": "Tue, 5 Feb 2002 00:52:43 +0000 (GMT)", "msg_from": "Steven <ssinger@navtechinc.com>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": ">\n> OK, what the hell do we need to do to get PostgreSQL replicating?\n\nI hope you understand that replication, done right, is a massive\nproject. I know that Darren and myself (and the rest of the pg-repl\nfolks) have been waiting till 7.2 went gold till we did any more work. I\nthink we hope to have master / slave replication working for 7.3 and then\ntarget multimaster for 7.4. At least that's the hope.\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Mon, 4 Feb 2002 19:57:34 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Steven wrote:\n> \n> On Mon, 4 Feb 2002, mlw wrote:\n> \n> I've developed a replacement for Rserv and we are planning on releasing\n> it as open source(ie as a contrib module).\n> \n> Like Rserv its trigger based but its much more flexible.\n> The key adventages it has over Rserv is that it has\n> -Support for multiple slaves\n> -It Perserves transactions while doing the mirroring. Ie If rows A,B are\n> originally added in the same transaction they will be mirrored in the same\n> transaction.\n\nI did a similar thing. I took the rserv trigger \"as is,\" but rewrote the\nreplication support code. What I eventually did was write a \"snapshot daemon\"\nwhich created snapshot files. Then a \"slave daemon\" which would check the last\nsnapshot applied and apply all the snapshots, in order, as needed. 
One would\nrun one of these daemons per slave server.\n", "msg_date": "Mon, 04 Feb 2002 20:47:51 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Replication" }, { "msg_contents": "bpalmer wrote:\n> \n> >\n> > OK, what the hell do we need to do to get PostgreSQL replicating?\n> \n> I hope you understand that replication, done right, is a massive\n> project. I know that Darren any myself (and the rest of the pg-repl\n> folks) have been waiting till 7.2 went gold till we did anymore work. I\n> think we hope to have master / slave replicatin working for 7.3 and then\n> target multimaster for 7.4. At least that's the hope.\n\nI do know how hard replication is. I also understand how important it is.\n\nIf you guys have a project going, and need developers, I am more than willing.\n", "msg_date": "Mon, 04 Feb 2002 20:49:36 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Replication" }, { "msg_contents": "\nDBMirror doesn't use snapshots; instead it records a log of transactions \nthat are committed to the database in a pair of tables. \nIn the case of an INSERT this is the row that is being added.\nIn the case of a delete, the primary key of the row being deleted.\n\nAnd in the case of an UPDATE, the primary key before the update along with \nall of the data the row should have after an update.\n\nThen for each slave database a Perl script walks through the transactions \nthat are pending for that host and reconstructs SQL to send the row edits \nto that host. A record of the fact that transaction Y has been sent to \nhost X is also kept.\n\nWhen transaction X has been sent to all of the hosts that are in the \nsystem it is then deleted from the Pending tables.\n\nI suspect that all of the information I'm storing in the Pending tables is \nalso being stored by Postgres in its log but I haven't investigated how \nthe information could be extracted (or how long it is kept for). 
That \nwould reduce the extra storage overhead that the replication system \nimposes.\n\nAs I remember (it's been a while since I've looked at it), RServ uses OIDs \nin its tables to point to the data that needs to be replicated. We tried \na similar approach but found difficulties with doing partial updates.\n\n\n\n\n\n\nOn Mon, 4 Feb 2002, mlw wrote:\n\n> I did a similar thing. I took the rserv trigger \"as is,\" but rewrote the\n> replication support code. What I eventually did was write a \"snapshot daemon\"\n> which created snapshot files. Then a \"slave daemon\" which would check the last\n> snapshot applied and apply all the snapshots, in order, as needed. One would\n> run one of these daemons per slave server.\n\n\n\n\n \n\n-- \nSteven Singer ssinger@navtechinc.com\nAircraft Performance Systems Phone: 519-747-1170 ext 282\nNavtech Systems Support Inc. AFTN: CYYZXNSX SITA: YYZNSCR\nWaterloo, Ontario ARINC: YKFNSCR\n\n", "msg_date": "Tue, 5 Feb 2002 02:27:35 +0000 (GMT)", "msg_from": "Steven <ssinger@navtechinc.com>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "On Mon, 4 Feb 2002, mlw wrote:\n\n> I re-wrote RServ.pm to C, and wrote a replication daemon. It works, but it\n> works like the whole rserv project. I don't like it.\n> \n> OK, what the hell do we need to do to get PostgreSQL replicating?\n\nThe trigger model is not a very sophisticated one. I think I have a better\n-- though more complicated -- one. This model would be able to handle\nmultiple masters and master->slave.\n\nFirst of all, all machines in the cluster would have to be aware of all the\nmachines in the cluster. This would have to be stored in a new system\ntable.\n\nThe FE/BE protocol would need to be modified to accept parsed node trees\ngenerated by pg_analyze_and_rewrite(). These could then be dispatched by \nthe executing server, inside of pg_exec_query_string, to all other servers\nin the cluster (excluding itself). 
Naturally, this dispatch would need to\nbe non-blocking.\n\npg_exec_query_string() would need to check the node tags to make sure\nselects and perhaps some commands are not dispatched.\n\nBefore the executing server runs finish_xact_command(), it would check\nthat the query was successfully executed on all machines, otherwise\nabort. Such a system would need a few configuration options: whether or\nnot you abort on failed replication to slaves, the ability to replicate\nonly certain tables, etc.\n\nNaturally, this would slow down writes to the system (possibly a lot\ndepending on the performance difference between the executing machine and\nthe least powerful machine in the cluster), but most usages of postgresql\nare read intensive, not write.\n\nAny reason this model would not work?\n\nGavin\n\n", "msg_date": "Thu, 7 Feb 2002 18:27:44 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Gavin Sherry wrote:\n> Naturally, this would slow down writes to the system (possibly a lot\n> depending on the performance difference between the executing machine and\n> the least powerful machine in the cluster), but most usages of postgresql\n> are read intensive, not write.\n> \n> Any reason this model would not work?\n\nWhat, then, is the purpose of replication to multiple masters?\n\nI can think of only two reasons why you want replication. (1) Redundancy, make\nsure that if one server dies, then another server has the same data and is used\nseamlessly. (2) Increase performance over one system.\n\nIn reason (1) I submit that a server load balancer which sits on top of\nPostgreSQL, and executes writes on both servers while distributing reads, would\nbe best. This is a HUGE project. The load balancer must know EXACTLY how the\nsystem is configured, which includes all functions and everything. \n\nIn reason (2) your system would fail to provide the scalability that would be\nneeded. 
If writes take a long time, but reads are fine, what is the difference\nfrom the trigger-based replicator?\n\nI have in the back of my mind, an idea of patching into the WAL stuff, and\nusing that mechanism to push changes out to the slaves.\n\nWhere one machine is still the master, but no trigger stuff, just a WAL patch.\nPerhaps some shared memory paradigm to manage WAL visibility? I'm not sure\nexactly, the idea hasn't completely formed yet.\n", "msg_date": "Thu, 07 Feb 2002 07:52:23 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Replication" }, { "msg_contents": "\n\nWhat you describe sounds like a form of a two-phase commit protocol.\n\nIf the command worked on two of the replicated databases but failed on a \nthird then the executing server would have to be able to undo the command\non the replicated databases as well as itself.\n\nThe problems with two-phase commit type approaches to replication are \n1) Speed, as you mentioned. Write speed isn't a concern for some \napplications but it is very important in others.\n\nand \n2) All of the databases must be able to communicate with each other at \nall times in order for any edits to work. If the servers are \nconnected over some sort of WAN that periodically has short outages this \nis a problem. Also if you're using replication because you want to be able \nto take down one of the databases for short periods of time without \nbringing down the others you're in trouble.\n\n\nbtw: I posted the alternative to Rserv that I mentioned the other day to \nthe pg-patches mailing list. If anyone is interested you should be able \nto grab it off the archives.\n\nOn Thu, 7 Feb 2002, Gavin Sherry wrote:\n\n> \n> First of all, all machines in the cluster would have to be aware all the\n> machines in the cluster. 
This would have to be stored in a new system\n> table.\n> \n> The FE/BE protocol would need to be modified to accepted parsed node trees\n> generated by pg_analyze_and_rewrite(). These could then be dispatched by \n> the executing server, inside of pg_exec_query_string, to all other servers\n> in the cluster (excluding itself). Naturally, this dispatch would need to\n> be non-blocking.\n> \n> pg_exec_query_string() would need to check that nodetags to make sure\n> selects and perhaps some commands are not dispatched.\n> \n> Before the executing server runs finish_xact_command(), it would check\n> that the query was successfully executed on all machines otherwise\n> abort. Such a system would need a few configuration options: whether or\n> not you abort on failed replication to slaves, the ability to replicate\n> only certain tables, etc.\n> \n> Naturally, this would slow down writes to the system (possibly a lot\n> depending on the performance difference between the executing machine and\n> the least powerful machine in the cluster), but most usages of postgresql\n> are read intensive, not write.\n> \n> Any reason this model would not work?\n> \n> Gavin\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \nSteven Singer ssinger@navtechinc.com\nAircraft Performance Systems Phone: 519-747-1170 ext 282\nNavtech Systems Support Inc. AFTN: CYYZXNSX SITA: YYZNSCR\nWaterloo, Ontario ARINC: YKFNSCR\n\n", "msg_date": "Thu, 7 Feb 2002 17:48:51 +0000 (GMT)", "msg_from": "Steven Singer <ssinger@navtechinc.com>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "I'm not that familiar with the whole replication issues in PostgreSQL,\nhowever, I would be partial to replication that was based upon the\nplayback of the (a?) journal file. 
(I believe that the WAL is a\njournal file.)\n\nBy being based upon a journal file, it would be possible to accomplish\ntwo significant items. First, it would be possible to \"restore\" a\ndatabase to an exact state just before a failure. Most commercial\ndatabases provide the ability to do this. Banks, etc. log the journal\nfiles directly to tape to provide a complete transaction history such\nthat they can rebuild their database from any given snapshot. (Note\nthat the journal file needs to be \"editable\" as a failure may be\n\"delete from x\" with a missing where clause.)\n\nThis leads directly into the second advantage, the ability to have a\nreplicated database operating anywhere, over any connection on any\nserver. Speed of writes would not be a factor. In essence, as long\nas the replicated database had a snapshot of the database and then was\nprovided with all journal files since the snapshot, it would be\npossible to build a current database. If the replicant got behind in\nthe processing, it would catch up when things slowed down.\n\nIn my opinion, the first advantage is in many ways most important.\nReplication becomes simply the restoration of the database in realtime\non a second server. The \"replication\" task becomes the definition of\na protocol for distributing the journal file. At least one major\ndatabase vendor does replication (shadowing) in exactly this manner.\n\nMaybe I'm all wet and the journal file and journal playback already\nexist. If so, IMHO, basing replication off of this would be the\nright direction.\n\n\nOn Thu, 07 Feb 2002 07:52:23 EST, mlw wrote:\n> \n> I have in the back of my mind, an idea of patching into the WAL stuff, and\n> using that mechanism to push changes out to the slaves.\n> \n> Where one machine is still the master, but no trigger stuff, just a WAL patch.\n> Perhaps some shared memory paradigm to manage WAL visibility? 
I'm not sure\n> exactly, the idea hasn't completely formed yet.\n> \n\n\n", "msg_date": "Thu, 07 Feb 2002 17:35:43 -0500", "msg_from": "F Harvell <fharvell@fts.net>", "msg_from_op": false, "msg_subject": "Re: Replication " }, { "msg_contents": "\n >\n > The problems with two stage commit type approches to replication are\n\nIMHO the biggest problem with two-phase commit is that it doesn't scale.\nThe more servers\nyou add to the replica the slower it goes. Also there's the potential\nfor deadlocks across\nserver boundaries.\n\n >\n > 2) All of the databases must be able to communicate with each other at\n > all times in order for any edits to work. If the servers are\n > connected over some sort of WAN that periodically has short outages this\n > is a problem. Also if your using replication because you want to be \nable\n > to take down one of the databases for short periods of time without\n > bringing down the others your in trouble.\n\nAll true for the two-phase commit protocol. To have multi-master\nreplication, you must have all\nsystems communicating, but you can use a multicast group communication\nsystem instead of\n2PC. Using total order messaging, you can ensure all changes are\ndelivered to all servers in the\nreplica in the same order. This group communication system also allows\nfailures to be detected\nwhile other servers in the replica continue processing.\n\nA few of us are working with this theory, and trying to integrate with\n7.2. There is a working\nmodel for 6.4, but it's very limited (inserts, updates, and deletes). We\nare currently hosted at\n\nhttp://gborg.postgresql.org/project/pgreplication/projdisplay.php\nBut the site has been down the last 2 days. I've contacted the web\nmaster, but haven't seen\nany results yet. 
If anyone knows what's going on with gborg, I'd\nappreciate a status.\n\nDarren\n\n", "msg_date": "Fri, 08 Feb 2002 00:29:22 -0500", "msg_from": "Darren Johnson <darren.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Darren,\nGiven that different replication strategies will probably be developed \nfor PG, do you envisage DBAs being able to select the type of replication \nfor their installation? I.e. Replication being selectable rather like \nstorage structures?\n\nWould be a killer bit of flexibility, given how enormous the impact of \nreplication will be on corporate adoption of PG.\n\nBrad \n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 2/8/02, 5:29:22 AM, Darren Johnson <darren.johnson@cox.net> wrote \nregarding Re: [HACKERS] Replication:\n\n\n> >\n> > The problems with two stage commit type approches to replication are\n\n> IMHO the biggest problem with two phased commit is it doesn't scale.\n> The more servers\n> you add to the replica the slower it goes. Also there's the potential\n> for dead locks across\n> server boundaries.\n\n> >\n> > 2) All of the databases must be able to communicate with each other at\n> > all times in order for any edits to work. If the servers are\n> > connected over some sort of WAN that periodically has short outages this\n> > is a problem. Also if your using replication because you want to be\n> able\n> > to take down one of the databases for short periods of time without\n> > bringing down the others your in trouble.\n\n> All true for two phased commit protocol. To have multi master\n> replication, you must have all\n> systems communicating, but you can use a multicast group communication\n> system instead of\n> 2PC. Using total order messaging, you can ensure all changes are\n> delivered to all servers in the\n> replica in the same order. 
This group communication system also allows\n> failures to be detected\n> while other servers in the replica continue processing.\n\n> A few of us are working with this theory, and trying to integrate with\n> 7.2. There is a working\n> model for 6.4, but its very limited. (insert, update, and deletes) We\n> are currently hosted at\n\n> http://gborg.postgresql.org/project/pgreplication/projdisplay.php\n> But the site has been down the last 2 days. I've contacted the web\n> master, but haven't seen\n> any results yet. If any one knows what going on with gborg, I'd\n> appreciate a status.\n\n> Darren\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Fri, 08 Feb 2002 11:09:36 GMT", "msg_from": "Bradley Kieser <brad@kieser.net>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "> \n> Given that different replication strategies will probably be developed \n> for PG, do you envisage DBAs to be able to select the type of replication \n> for their installation? I.e. Replication being selectable rther like \n> storage structures?\n\nI can't speak for other replication solutions, but we are using the \n--with-replication or\n-r parameter when starting postmaster. Some day I hope there will be \nparameters for\nmaster/slave partial/full and sync/async, but it will be some time \nbefore we cross those\nbridges.\n\nDarren\n\n\n\n", "msg_date": "Fri, 08 Feb 2002 11:23:13 -0500", "msg_from": "Darren Johnson <darren.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "I've been looking into database replication theory lately and have found\nsome interesting papers discussing various approaches. 
(Here's\none paper that struck me as being very helpful,\nhttp://citeseer.nj.nec.com/460405.html ) So far I favour an\neager replication system which is predicated on a read local/write all\navailable. The system should not depend on two phase commit or primary\ncopy algorithms. The former leads to the whole system being as quick as\nthe slowest machine. In addition, 2 phase commit involves 2n messages for\neach transaction which does not scale well at all. This idea will also\nhave to take into account a crashed node which did not ack a transaction.\nThe primary copy algorithms I've seen suffer from a single point of\nfailure and potential bottlenecks at the primary node.\n\nInstead I like the master to master or peer to peer algorithm as discussed\nin the above paper. This approach accounts for network partitions, nodes\nleaving and joining a cluster and the ability to commit a transaction once\nthe communication module has determined the total order of the said\ntransaction, i.e. no need for waiting for acks. This scales well and\nresearch has shown it to increase the number of transactions/second a\ndatabase cluster can handle over a single node.\n\nPostgres-R is another interesting approach which I think should be taken\nseriously. Anyone interested can read a paper on this at\nhttp://citeseer.nj.nec.com/330257.html\n\nAnyways, my two cents\n\nRandall Jonasz\nSoftware Engineer\nClick2net Inc.\n\n\nOn Thu, 7 Feb 2002, mlw wrote:\n\n> Gavin Sherry wrote:\n> > Naturally, this would slow down writes to the system (possibly a lot\n> > depending on the performance difference between the executing machine and\n> > the least powerful machine in the cluster), but most usages of postgresql\n> > are read intensive, not write.\n> >\n> > Any reason this model would not work?\n>\n> What, then is the purpose of replication to multiple masters?\n>\n> I can think of only two reasons why you want replication. 
(1) Redundancy, make\n> sure that if one server dies, then another server has the same data and is used\n> seamlessly. (2) Increase performance over one system.\n>\n> In reason (1) I submit that a server load balance which sits on top of\n> PostgreSQL, and executes writes on both servers while distributing reads would\n> be best. This is a HUGE project. The load balancer must know EXACTLY how the\n> system is configured, which includes all functions and everything.\n>\n> In reason (2) your system would fail to provide the scalability that would be\n> needed. If writes take a long time, but reads are fine, what is the difference\n> between the trigger based replicator?\n>\n> I have in the back of my mind, an idea of patching into the WAL stuff, and\n> using that mechanism to push changes out to the slaves.\n>\n> Where one machine is still the master, but no trigger stuff, just a WAL patch.\n> Perhaps some shared memory paradigm to manage WAL visibility? I'm not sure\n> exactly, the idea hasn't completely formed yet.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n\n", "msg_date": "Fri, 8 Feb 2002 14:34:34 -0500 (EST)", "msg_from": "Randall Jonasz <rjonasz@trueimpact.com>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "I've not looked at the first paper, but I wil.\n\n> Postgres-R is another interesting approach which I think should be taken\n> seriously. Anyone interested can read a paper on this at\n> http://citeseer.nj.nec.com/330257.html\n\nI would point you to the info on gborg, but it seems to be down at the\nmoment.\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. 
palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Fri, 8 Feb 2002 15:12:00 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "\n> I've been looking into database replication theory lately and have found\n> some interesting papers discussing various approaches. (Here's\n> one paper that struck me as being very helpful,\n> http://citeseer.nj.nec.com/460405.html )\n\n\nHere is another one from that same group, that addresses the WAN issues.\n\n> http://www.cnds.jhu.edu/pub/papers/cnds-2002-1.pdf\n\n\nenjoy,\n\nDarren\n\n\n\n", "msg_date": "Fri, 08 Feb 2002 16:11:43 -0500", "msg_from": "Darren Johnson <darren.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "> > I have in the back of my mind, an idea of patching into the WAL stuff, and\n> > using that mechanism to push changes out to the slaves.\n> >\n> > Where one machine is still the master, but no trigger stuff, just a WAL patch.\n> > Perhaps some shared memory paradigm to manage WAL visibility? I'm not sure\n> > exactly, the idea hasn't completely formed yet.\n> >\n\nFWIW, Sybase Replication Server does just such a thing. \n\nThey have a secondary log marker (prevents the log from truncating past \nthe oldest unreplicated transaction). A thread within the system called \nthe \"rep agent\" (but it used to be a separate process called the LTM) reads \nthe log and forwards it to the rep server; once the rep server has the \nwhole transaction and it is written to a stable device (aka synced to \ndisk), the rep server responds to the LTM telling him it's OK to move the \nlog marker forward.\n\nAnyway, once the replication server proper has the transaction it uses a \npublish/subscribe methodology to see who wants to get the update.\n\nBidirectional replication is done by making two one-way replications. 
The \nwhole thing is table based; it marks the tables as replicated or not in \nthe database to save the trip to the repserver on unreplicated tables.\n\nPlus you can take parts of a database (replicate all rows where the \ncountry is \"us\" to this server and all the rows with \"uk\" to that server). \nOr, the opposite: you can roll up smaller regional databases to bigger ones; \nit's very flexible.\n\n\nCheers,\n\nBrian\n\n", "msg_date": "Fri, 8 Feb 2002 19:17:00 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "\nThread added to TODO.detail/replication.\n\nmlw wrote:\n> I re-wrote RServ.pm to C, and wrote a replication daemon. It works, but it\n> works like the whole rserv project. I don't like it.\n> \n> OK, what the hell do we need to do to get PostgreSQL replicating?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Feb 2002 00:28:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication" } ]
[ { "msg_contents": "Please add a documentation section for BINARY cursors for each and every\ndata type.\n\nI am not asking for the internal format of a double or any other data\ntype.\n\nRather, for the non-obvious types (e.g. Numeric, time-with-timezone,\netc.) what exactly is returned by PQgetvalue()?\n\nLacking this information, it is extremely frustrating to try to use\nbinary cursors.\n\nIt could be a very simple table:\n\ndata type | returns\n----------+------------------------------------------------\nfloat     | pointer to native float\n----------+------------------------------------------------\nint       | pointer to native int\n----------+------------------------------------------------\nnumeric   | pointer to struct <foo> as defined in <module>\n----------+------------------------------------------------\ntimestamp | pointer to struct <bar> as defined in <module>\n----------+------------------------------------------------\n\netc.\n\nFor sure, I am not the only one who is having trouble with this\nstuff.[1]\nI see some macros that might be useful in the header files. Of course,\nthese macros are also completely undocumented.\n\nFor instance, when would I use DatumGetNumeric() versus\nDatumGetNumericCopy()? It seems like the second performs an allocation\n(as a wild guess).\n\nFor a binary cursor, somehow, I need to know exactly what data type\npointer is returned in a call to PQgetvalue(). I seem to be a poor\nguesser.\n\n\n[1] A post from a PostgreSQL site:\n\"David McCombs <davidmc@newcottage.com>\n2001-12-14 13:57:44-06 \nLost one day struggling with the BINARY CURSOR and gave up: \n- After reverse engineering postgres source - \n(there is NO documentation on this!): \nFor all types it's pretty clear what the BE gives back, \nBUT with the Timestamp value (double) it really wasn't clear \nwhat the value means. \n\nPostgres really should provide FE functions to turn the Timestamp into a\ntm struct. 
\n\nIt should also provide an FE function to unpack the packed values of the\nNumeric and Decimal (synonomous) types. \n\nI have decided to give in and use the character return data, only to\nconvert it to the native type. \n\nThe major issue for me with this is that I KNOW that precision is going\nto be lost on the floating point and double values. I have already seen\nthis in the return string representation of the data. \n\nAlso there is the performance issue of formatting to strings, converting\nto a native type, then re-formatting again for a web page (or XML\nstream.) \n\nDavid McCombs 12/14/2001\"\n\n", "msg_date": "Mon, 4 Feb 2002 16:46:40 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Request for documentation" } ]
[ { "msg_contents": "Suggestion:\nBecause the Ecpg project uses a global sqlca, it can be accessed by only\none thread at a time.\nIf (instead) we had a user supplied sqlca, it could be used by multiple\nthreads.\nWhy not have the user allocate the sqlca or pass it in as a parameter\n(maybe they want an auto sqlca).\nIn any case: a single, global sqlca is a very bad thing. (IMO-YMMV).\n\nQuestion:\nWhy no sqlda structure? Every other embedded SQL I have used has the\nsqlda {and it is darn useful}.\n", "msg_date": "Mon, 4 Feb 2002 16:51:38 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Ecpg and reentrancy" }, { "msg_contents": "On Mon, Feb 04, 2002 at 04:51:38PM -0800, Dann Corbit wrote:\n> Because the Ecpg project uses a global sqlca, it can be accessed by only\n> one thread at a time.\n\nThat's correct.\n\n> If (instead) we had a user supplied sqlca, it could be used by multiple\n> threads.\n\nHow do you want to supply it? \n\n> Why not have the user allocate the sqlca or pass it in as a parameter\n> (maybe they want an auto sqlca).\n\nThat's not exactly the way other RDBMS handle it. Thus compatibility might\nbecome a problem. \n\n> In any case: a single, global sqlca is a very bad thing. (IMO-YMMV).\n\nOkay, let's talk about a better way.\n\n> Question:\n> Why no sqlda structure? Every other embedded SQL I have used has the\n> sqlda {and it is darn useful}.\n\nNoone volunteered to implement it. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 5 Feb 2002 15:33:04 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Ecpg and reentrancy" } ]
[ { "msg_contents": "Are there any plans to merge the sources from the experimental threaded\nserver and the forked server so that a compile switch could choose the\nmodel?\n", "msg_date": "Mon, 4 Feb 2002 18:23:10 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Threaded PosgreSQL server" }, { "msg_contents": "\nIf someone wanted to submit appropriate patches for the v7.3 development\ntree, that merge cleanly, I can't see why this wouldn't be a good thing\n...\n\n\nOn Mon, 4 Feb 2002, Dann Corbit wrote:\n\n> Are there any plans to merge the sources from the experimental threaded\n> server and the forked server so that a compile switch could choose the\n> model?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Mon, 4 Feb 2002 23:21:34 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nI would love to see this happen but they are already quite different and\ndrifting further apart every day. I am trying to integrate parts of the real\nPostgreSQL into threaded postgres as time permits.\n\nI think threaded postgres could serve as a vehicle for testing the\nrelative value of using threads, but trying to merge patches would be a\nmajor task. I found an interesting marketing white paper that covers\nPostgreSQL, Illustra, Informix, DSA ( using threads ), and Datablade\nextensions. If\nnothing else, it shows that the PostgreSQL extension model can be used in a\nthreaded environment. \n\nwww.databaseassociates.com/pdf/infobj.pdf \n\nMyron Scott\nmkscott@sacadia.com\n\n\nOn Mon, 4 Feb 2002, Marc G. Fournier wrote:\n\n> \n> If someone wanted to submit appropriate patches for the v7.3 development\n> tree, that merge cleanly, I can't see why this wouldn't be a good thing\n> ...\n> \n> \n> On Mon, 4 Feb 2002, Dann Corbit wrote:\n> \n> > Are there any plans to merge the sources from the experimental threaded\n> > server and the forked server so that a compile switch could choose the\n> > model?\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n> \n\n", "msg_date": "Mon, 4 Feb 2002 19:52:44 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nI would have to contend that the two will never be merged into one \nsource base. If the threaded server is done correctly, then many of the \ninternal structures and logic will be radically different. I have to \ncommend Mr. Scott for continuing on with this work when it was pretty \nobvious from previous discussions that this would not be \"well received\".\n\n\nOn Mon, 4 Feb 2002 mkscott@sacadia.com wrote:\n\n> \n> \n> I would love to see this happen but they are already quite different and\n> drifting further apart every day. I am trying to integrate parts of the real\n> PostgreSQL into threaded postgres as time permits.\n> \n> I think threaded postgres could serve as a vehicle for testing the\n> relative value of using threads, but trying to merge patches would be a\n> major task. I found an interesting marketing white paper that covers\n> PostgreSQL, Illustra, Informix, DSA ( using threads ), and Datablade\n> extensions. If\n> nothing else, it shows that the PostgreSQL extension model can be used in a\n> threaded environment. \n> \n> www.databaseassociates.com/pdf/infobj.pdf \n> \n> Myron Scott\n> mkscott@sacadia.com\n> \n> \n> On Mon, 4 Feb 2002, Marc G. Fournier wrote:\n> \n> > \n> > If someone wanted to submit appropriate patches for the v7.3 development\n> > tree, that merge cleanly, I can't see why this wouldn't be a good thing\n> > ...\n> > \n> > \n> > On Mon, 4 Feb 2002, Dann Corbit wrote:\n> > \n> > > Are there any plans to merge the sources from the experimental threaded\n> > > server and the forked server so that a compile switch could choose the\n> > > model?\n> > >\n\n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Mon, 4 Feb 2002 23:25:03 -0600 (CST)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> If someone wanted to submit appropriate patches for the v7.3 development\n> tree, that merge cleanly, I can't see why this wouldn't be a good thing\n> ...\n\nI would resist it. I do not think we need the portability and\nreliability headaches that would come with it. 
Furthermore,\nan #ifdef'd implementation would be the worst of all possible\nworlds, as it would do major damage to readability of the code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Feb 2002 10:48:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server " }, { "msg_contents": "Dann Corbit wrote:\n> \n> Are there any plans to merge the sources from the experimental threaded\n> server and the forked server so that a compile switch could choose the\n> model?\n\nJust a question, in order to enlighten my thought. Does the current experimental\nthreaded server disable the multi-process model? Or does it *add* the functionality\nas a compile switch? (This would be the other way round from the one you pointed\nout.)\n\nI think it is important in order to evaluate resistance to going multithreaded.\n\nIf they disabled the original method, I agree with Tom. If they *merged* both\nflawlessly, I would try to consider it for the current tree.\n\nAny comments?\n\nRegards,\nHaroldo.\n", "msg_date": "Tue, 05 Feb 2002 15:25:13 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Tue, 5 Feb 2002, Haroldo Stenger wrote:\n\n> Dann Corbit wrote:\n> >\n> > Are there any plans to merge the sources from the experimental threaded\n> > server and the forked server so that a compile switch could choose the\n> > model?\n>\n> Just a question, in order to enlighten my thought. Does the current experimental\n> threaded server disable the multi-process model? Or does it *add* the functionality\n> as a compile switch? (This would be the other way round from the one you pointed\n> out.)\n>\n> I think it is important in order to evaluate resistance to going multithreaded.\n>\n> If they disabled the original method, I agree with Tom. If they *merged* both\n> flawlessly, I would try to consider it for the current tree.\n>\n> Any comments?\n\nThat's kinda what I was hoping ... is it something that could be\nseamlessly integrated to have minimal impact on the code itself ... even\nif there was some way of having a 'thread.c' vs 'non-thread.c' that could\nbe link'd in, with wrapper functions?\n\nThen again, has anyone looked at the apache project? Apache2 has several\n\"process models\" ... prefork being one (like ours), or a 'worker', which\nis a prefork/threaded model where you can have n child processes, with m\n'threads' inside of each ... not sure if something like that could be\nretrofit'd into what we have, but ... ?\n\n", "msg_date": "Tue, 5 Feb 2002 15:36:41 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\n> \n> On Tue, 5 Feb 2002, Haroldo Stenger wrote:\n> \n> > Just a question, in order to enlighten my thought. Does the current experimental\n> > threaded server disable the multi-process model? Or does it *add* the functionality\n> > as a compile switch? (This would be the other way round from the one you pointed\n> > out.)\n> >\n\nCurrently, exper. threaded postgres can have multiple processes using\nmultiple threads with the same shared memory. There is no forking\ninvolved in the process though. Shared memory, mutexes, and conditional\nlocks go global or private to the process based on a run-time flag.\n\n\n> \n> That's kinda what I was hoping ... is it something that could be\n> seamlessly integrated to have minimal impact on the code itself ... even\n> if there was some way of having a 'thread.c' vs 'non-thread.c' that could\n> be link'd in, with wrapper functions?\n> \n\nThe first basic problem is that global variables are scattered throughout\nthe source as well as some static stack variables. Hunting these down and\nfinding a home for them is, in and of itself, a major task. For example,\nflex\nproduces code that is not thread safe, you have to modify that too. The\ncurrent work around in exper. threaded postgres is not pretty, one\n\"environment\" structure that holds all the normal postgres globals in\nthread local storage. This makes compile time choices impractical I\nthink.\n\n\nCheers,\n\nMyron\nmkscott@sacadia.com\n\n\n", "msg_date": "Tue, 5 Feb 2002 22:06:52 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Tuesday 5 February 2002 20:36, Marc G. Fournier wrote:\n> Apache2 has several \"process models\" ... prefork being one (like ours), or \na 'worker', which is a prefork/threaded model where you can have n child \nprocesses, with m 'threads' inside of each ... not sure if something like \nthat could be retrofit'd into what we have, but ... ?\n\nWhy not try to link Cygwin statically?\n\nBest regards,\nJean-Michel POURE\n", "msg_date": "Wed, 6 Feb 2002 08:17:44 +0100", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Tue, 5 Feb 2002 mkscott@sacadia.com wrote:\n\n> The first basic problem is that global variables are scattered\n> throughout the source as well as some static stack variables. Hunting\n> these down and finding a home for them is, in and of itself, a major\n> task. For example, flex produces code that is not thread safe, you have\n> to modify that too. The current work around in exper. threaded\n> postgres is not pretty, one \"environment\" structure that holds all the\n> normal postgres globals in thread local storage. This makes compile\n> time choices impractical I\n> think.\n\nOkay, but this has been discussed in the past concerning threading ... the\nfirst work that would have to be done was 'cleaning the code' so that\nit was thread-safe ...\n\nBasically, if we were to look at moving *towards* a fork/thread model in\nthe future, what can we learn and incorporate from the work already being\ndone? How much of the work in the threaded server is cleaning up the code\nto be thread-safe, that would benefit the base code itself and start us\ndown that path?\n\nRight now, from everything I've heard, making the code thread-safe is one\nbig onerous task ... but if we were to start incorporating changes from\nthe 'thread work' that is being done now, into the base server, and ppl\nstart thinking thread-safe when they are coding new stuff, over time, this\ntask becomes smaller ...\n\n\n", "msg_date": "Wed, 6 Feb 2002 11:20:12 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Tue, Feb 05, 2002 at 03:36:41PM -0400, Marc G. Fournier wrote:\n> Then again, has anyone looked at the apache project? Apache2 has several\n> \"process models\" ... prefork being one (like ours), or a 'worker', which\n> is a prefork/threaded model where you can have n child processes, with m\n> 'threads' inside of each ... not sure if something like that could be\n> retrofit'd into what we have, but ... ?\n\nWe could even use the nice Apache Portable Runtime, which is a\nplatform-independent layer over threading/networking/shm/etc (there's a\nsummary here: http://apr.apache.org/docs/apr/modules.html).\nThis might improve PostgreSQL on non-UNIX platforms, namely Win32.\n\nHowever, I think using threads is only a good idea if it gets us a\nsubstantial performance increase. From what I've seen, that isn't the\ncase; and even if the time to create a connection is a bottleneck, there\nare other, more conservative ways of improving it (e.g. 
pre-forking,\npersistent backends, and IIRC some work Tom Lane was doing to reduce\nbackend startup time).\n\nAnd given the complexity and reduced reliability that threads bring, I\nthink the only advantage would be buzzword-compliance -- which isn't a\npriority, personally.\n\nCheers,\n\nNeil\n\n", "msg_date": "Wed, 6 Feb 2002 14:33:26 -0500", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n\n> However, I think using threads is only a good idea if it gets us a\n> substantial performance increase. From what I've seen, that isn't the\n> case; and even if the time to create a connection is a bottleneck, there\n> are other, more conservative ways of improving it (e.g. pre-forking,\n> persistent backends, and IIRC some work Tom Lane was doing to reduce\n> backend startup time).\n\nThe one place where it could be a clear win would be in splitting\nsingle very large queries over multiple CPUs. This would probably\nrequire an even larger redesign of the whole system than moving to a\nquery-per-thread rather than per-process model. I think \"real\"\nmulti-master replication and clustering is a better goal in the short\nterm...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "06 Feb 2002 15:24:04 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "Doug McNaught wrote:\n> The one place where it could be a clear win would be in splitting\n> single very large queries over multiple CPUs. This would probably\n> require an even larger redesign of the whole system than moving to a\n> query-per-thread rather than per-process model. 
I think \"real\"\n> multi-master replication and clustering is a better goal in the short\n> term...\n\nAgreed.\n\nThough, starting to think & code thread safe would be nice too. \n\nRegards,\nHaroldo.\n", "msg_date": "Wed, 06 Feb 2002 17:59:32 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nOn Wed, 6 Feb 2002, Marc G. Fournier wrote:\n\n> Right now, from everything I've heard, making the code thread-safe is one\n> big onerous task ... but if we were to start incorporating changes from\n> the 'thread work' that is being done now, into the base server, and ppl\n> start thinking thread-safe when they are coding new stuff, over time, this\n> task becomes smaller ...\n> \n\nI agree, once the move is made to thread-safe code it becomes much easier to\nmaintain thread-safe code. I also very much like the idea of multiple\nthread/process models that could be chosen from. I think the question has\nalways been the\ninitial cost vs. benefit. The group has not seen much to be gained for\nthe amount of initial work involved. After working with the code, I too\nfelt it wasn't worth it. \n\nAfter revisiting the threaded code after a long break I now see some real\nbenefits to threading. For example, I was able to incorporate Tom Lane's\nlazy_vacuum code to do relation clean up automatically when a threshold of\npage writes occurred. I was also able to use the freespace information to\nbe shared among threads in the process without touching shared mem. As a\nresult, a pgbench run with 20 clients and over 1,000,000\ntransactions maintained a more or less constant tps with manual\nvacuum commands and far less heap expansion. You can do this with\nprocesses (planned for 7.3 I think) but I\nthink it was much easier with threads. Other things may open up with\nthreads as well like Java stored procedures. Anyway, now I think it is\nworth it.\n\n\nMyron\nmkscott@sacadia.com\n\n", "msg_date": "Wed, 6 Feb 2002 13:00:01 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "Haroldo Stenger writes:\n\n> Though, starting to think & code thread safe would be nice too.\n\nThe thing about thread-safeness is that it's only actually useful when\nyou're using threads. Otherwise it wastes everybody's time -- the\nprogrammer's, the computer's, and the user's.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 6 Feb 2002 17:03:20 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Wed, 6 Feb 2002 mkscott@sacadia.com wrote:\n\n> After revisiting the threaded code after a long break I now see some\n> real benefits to threading. For example, I was able to incorporate Tom\n> Lane's lazy_vacuum code to do relation clean up automatically when a\n> threshold of page writes occurred. I was also able to use the freespace\n> information to be shared among threads in the process without touching\n> shared mem. As a result, a pgbench run with 20 clients and over\n> 1,000,000 transactions maintained a more or less constant tps with manual\n> vacuum commands and far less heap expansion. You can do this with\n> processes (planned for 7.3 I think) but I think it was much easier with\n> threads. Other things may open up with threads as well like Java stored\n> procedures. Anyway, now I think it is worth it.\n\nAre there code clean-ups that have gone into the thread'd code that could\nbe incorporated into the existing code base that would start us down that\npath? For instance, based on my limited understanding of threaded servers, I\nbelieve that 'global variables' are generally considered \"A Real Bad\nThing\" ... in one of your emails, you mentioned:\n\n\"The first basic problem is that global variables are scattered throughout\nthe source as well as some static stack variables. Hunting these down and\nfinding a home for them is, in and of itself, a major task. For example,\nflex produces code that is not thread safe, you have to modify that too.\nThe current work around in exper. threaded postgres is not pretty, one\n\"environment\" structure that holds all the normal postgres globals in\nthread local storage. This makes compile time choices impractical I\nthink.\"\n\nNow, what is a 'clean' solution to this? Making sure that all variables\nare passed through to various functions, maybe through a struct construct?\nSo, can we start there and work our way through the code? Start simple\n... take one of the global(s), put it into the struct and take it out of\nglobal space and make sure that it's passed appropriately through all the\nrequired functions ... add in the next one, and do another trace?\n\nSomeone, or a group of ppl, with thread knowledge needs to move this\nforward ... once the clean up begins, even without any thread code thrown\nin, it shouldn't be too difficult to keep it clean to go to 'the next\nstep', no?\n\n\n\n", "msg_date": "Wed, 6 Feb 2002 18:03:29 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Wed, 6 Feb 2002, Peter Eisentraut wrote:\n\n> Haroldo Stenger writes:\n>\n> > Though, starting to think & code thread safe would be nice too.\n>\n> The thing about thread-safeness is that it's only actually useful when\n> you're using threads. Otherwise it wastes everybody's time -- the\n> programmer's, the computer's, and the user's.\n\nThe thing is, there are several areas where using threads would be a\nbenefit, from what I've read on this list over the years ... as time goes\non, less and less of the OSs in use don't have threads, so we have to\nstart *somewhere* to work towards that sort of hybrid system ...\n\n\n\n", "msg_date": "Wed, 6 Feb 2002 18:05:35 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Haroldo Stenger writes:\n> \n> > Though, starting to think & code thread safe would be nice too.\n> \n> The thing about thread-safeness is that it's only actually useful when\n> you're using threads. Otherwise it wastes everybody's time -- the\n> programmer's, the computer's, and the user's.\n\nYes I see. The scenario under which I see doing it to be useful, is thinking in\nadding multi-threading for PG v 7.5 say, and preparing the road. But maybe it's\na worthless effort. Many developers are pointing it out. Let's forget about threads\nfor now.\n\nBy the way, my original question about how integrated the multi-threading fork\nreached, remained unanswered. I will assume it went threading, dropping forever\nthe original behaviour, so deciding me towards not considering threading a\nviable option (for now).\n\nRegards,\nHaroldo.\n", "msg_date": "Wed, 06 Feb 2002 19:24:20 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> The thing is, there are several areas where using threads would be a\n> benefit, from what I've read on this list over the years ... 
as time goes\n> on, less and less of the OSs in use dont' have threads, so we have to\n> start *somewhere* to work towards that sort of hybrid system ...\n\nYes.\n\nBut, maybe things like full-fledged replication, savepoints/nested transactions,\nout-of-transaction-scope cursors, and others must have priority over this; and\nthat mutating PG thread safe, will slow down a 7.3 release a lot, something not\nwanted by many here.\n\nLet's make a pro cons list of thread related aspectcs here. We saw a lot of\ncons. Write some pros explicitely. We're not in a hurry anyway.\n\nRegards,\nHaroldo,\n", "msg_date": "Wed, 06 Feb 2002 19:31:34 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nOn Wed, 6 Feb 2002, Haroldo Stenger wrote:\n\n> \n> By the way, my original question about how integrated the multi-threading fork\n> reached, remained unanswered. I will assume it went threading, dropping forever\n> the original behaviour, so deciding me towards not considering threading a\n> viable option (for now).\n\nYes, you can use postmaster and fork for a connection...or at least you\ncould prior to some recent changes. I haven't tested it that way for\nawhile but it should work.\n\n\nMyron\nmkscott@sacadia.com\n\n\n", "msg_date": "Wed, 6 Feb 2002 14:59:14 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "mkscott@sacadia.com wrote:\n> \n> On Wed, 6 Feb 2002, Haroldo Stenger wrote:\n> \n> >\n> > By the way, my original question about how integrated the multi-threading fork\n> > reached, remained unanswered. I will assume it went threading, dropping forever\n> > the original behaviour, so deciding me towards not considering threading a\n> > viable option (for now).\n> \n> Yes, you can use postmaster and fork for a connection...or at least you\n> could prior to some recent changes. 
I haven't tested it that way for\n> awhile but it should work.\n\nI find it very interesting. So you are telling us you were successfull in\nkeeping both functionalities? So why don't you tell us what of an effort was it\nto convert the code to thread-safe? Just to compose a community view of the\nissue, and make a rational decision...\n\nRegards,\nHaroldo.\n", "msg_date": "Wed, 06 Feb 2002 20:39:37 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Wed, 6 Feb 2002, Haroldo Stenger wrote:\n\n> \"Marc G. Fournier\" wrote:\n> > The thing is, there are several areas where using threads would be a\n> > benefit, from what I've read on this list over the years ... as time goes\n> > on, less and less of the OSs in use dont' have threads, so we have to\n> > start *somewhere* to work towards that sort of hybrid system ...\n>\n> Yes.\n>\n> But, maybe things like full-fledged replication, savepoints/nested\n> transactions, out-of-transaction-scope cursors, and others must have\n> priority over this; and\n\nIf this are priorities for some, we do welcome patches from them to make\nit happen ... it is an open source project ... I am trying to encourage\none person how has obviously spent a good deal of time on the whole\nthreaded issue to start working at using his experience with PgSQL and\nThreading to see what, if anything, can be done to try and keep his work\nand ours from diverging too far ...\n\n> that mutating PG thread safe, will slow down a 7.3 release a lot,\n> something not wanted by many here.\n\nDepends on how it is handled ...\n\n", "msg_date": "Wed, 6 Feb 2002 22:03:48 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\n\"Marc G. Fournier\" wrote:\n> \n> On Wed, 6 Feb 2002, Haroldo Stenger wrote:\n> \n> > \"Marc G. 
Fournier\" wrote:\n> > > The thing is, there are several areas where using threads would be a\n> > > benefit, from what I've read on this list over the years ... as time goes\n> > > on, less and less of the OSs in use dont' have threads, so we have to\n> > > start *somewhere* to work towards that sort of hybrid system ...\n> >\n> > Yes.\n> >\n> > But, maybe things like full-fledged replication, savepoints/nested\n> > transactions, out-of-transaction-scope cursors, and others must have\n> > priority over this; and\n> \n> If this are priorities for some, we do welcome patches from them to make\n> it happen ... it is an open source project ... I am trying to encourage\n> one person how has obviously spent a good deal of time on the whole\n> threaded issue to start working at using his experience with PgSQL and\n> Threading to see what, if anything, can be done to try and keep his work\n> and ours from diverging too far ...\n\nYes, that was my very original thinking. We shouldn't waste programmers or code.\nBut we're trying to make an idea of cost/benefit/risk. Let's go on with this\ndiscussion, basing it on the pros outlined by the threaded fork knowledge\nholders, right? Maybe they are tired, maybe they spent too much effort, and\ndon't want to do it again. Should that be the case, at least let us obtain\ninformation from the *developent process* of their work in order to measure the\nimpact on current source tree with current programming force.\n\n> \n> > that mutating PG thread safe, will slow down a 7.3 release a lot,\n> > something not wanted by many here.\n> \n> Depends on how it is handled ...\n\nHow do you see it not slowing down, when key developers said their view is that\nmultithreading will pose a major obstacle? 
Are you envisioning any special\napproach not already talked about?\n\nRegards,\nHaroldo.\n", "msg_date": "Wed, 06 Feb 2002 23:21:06 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "> Let's make a pro cons list of thread related aspectcs here. We saw a lot of\n> cons. Write some pros explicitely. We're not in a hurry anyway.\n\nI think in addition to pros/cons, an important question is:\nHow has threading influenced other DBMS's? I know MySQL uses threading, at \nleast in the development version; how much has it helped? Is the utility of a \ndatabase based partly on the presence of threading? Take Oracle, MsSQL, and \nothers; which have threading and which seem to gain from threading?\n\nI don't follow the other DB's as closely, so I don't know the answers.\n\nI suspect that looking at other databases will give us a clue about the \nmagnitude of the pros, rather than just the areas of influence.\n\nRegards,\n\tJeff\n", "msg_date": "Wed, 6 Feb 2002 18:25:06 -0800", "msg_from": "Jeff Davis <list-pgsql-hackers@dynworks.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nOn Wed, 6 Feb 2002, Marc G. Fournier wrote:\n\n> Are there code clean-ups that have gone into the thread'd code that could\n> be incorporated into the existing code base that would start us down that\n> path? \n\nI don't think existing code is much help. So much has changed since 7.0.2\nthat the current threaded code is propbably only good for investigating\nthe benefits of threading and maybe some porting techniques.\n\n> For instance, based my limited understanding of threaded servers, I\n> believe that 'global variables' are generally considered \"A Real Bad\n> Thing\" ... 
in one of your email's, you mentioned:\n> \n> \"The first basic problem is that global variables are scattered throughout\n> the source ...\"\n> \n> Now, what is a 'clean' solution to this? \n\nThe current threaded postgres is messy because I just packed all the\nglobal variables, including those produced be flex, into a 5K structure.\nEverytime threaded code needed a \"global\", it called a function to\nretrieve a pointer from thread local storage. When I profiled the code I\nsaw way too many calls to grab the environment structure and I modified\nsome hotspots to pass the structure down the call chain. Ideally, I think\nthat the \"environment\" structure could be optimized for size and passed\ndown the call chain to reduce the number of times thread local storage is\naccessed. This is also bad because when anyone working on a segment of\ncode needs a global, they need to add it to the \"environment\" structure.\nI don't think this would be a good situation for code maintainers.\n\n> \n> Someone, or a group of ppl, with thread knowledge needs to start this\n> forward ... once the clean up begins, even without any thread code thrown\n> in, it shouldn't be too difficult to keep it clean to go to 'the next\n> step', no?\n> \n\nI came up with a process to find global variables in the code that became\nsomewhat effective and could be applied to the current code. Someone else\nmight have a better way of ding this though.\n\n\nMyron\nmkscott@sacadia.com\n\n", "msg_date": "Wed, 6 Feb 2002 18:51:34 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Wed, 6 Feb 2002, Haroldo Stenger wrote:\n\n> > Depends on how it is handled ...\n>\n> How do you see it not slowing down, when key developers said their view\n> is that multithreading will pose a major obstacle? Are you envisioning\n> any special approach not already talked about?\n\nRead my previous emails? 
To move *any part of PgSQL* to a threaded model\n(even one where each connection is still forked, but parts of each\nconnection are threaded), the mess of global variables needs to be cleaned\nup ... that will be one of the \"major obstacles\" ... if someone with a\nknowledge of making code thread-safe were to submit patches (even very\nlarge ones) that start to clean this up, it could be broken down into more\nmanageable chunks ...\n\nThe second major obstacle that has been identified is cross-platform\ncomapability ... as I mentioned already, and another has also, Apache2 has\ntheir APR code that might help us reduce that obstacle to a more\nmanageable level, since, I believe, the Apache license wouldn't restrict\nus to being able to use/distribute the code ... this is definitely\nsomething that we'd have to look into to make sure though ...\n\nThe point is that nobody is even implying that this is a \"for v7.3\"\nproject ... there have been several projects that have been initiated over\nthe years that have straddled releases, and we have alot of very good\ndevelopers, and testers, that will make sure that any changes are \"for the\ngood\" ...\n\n\n", "msg_date": "Wed, 6 Feb 2002 23:27:55 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Wed, 6 Feb 2002 mkscott@sacadia.com wrote:\n\n> I came up with a process to find global variables in the code that\n> became somewhat effective and could be applied to the current code.\n> Someone else might have a better way of ding this though.\n\nIs this something that could be added to the distribution similar to some\nof the other development tools? Is it a shell script?\n\n\n", "msg_date": "Wed, 6 Feb 2002 23:30:18 -0400 (AST)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Wed, 6 Feb 2002, Haroldo Stenger wrote:\n\n> > > that mutating PG thread safe, will slow down a 7.3 release a lot,\n> > > something not wanted by many here.\n> > \n> > Depends on how it is handled ...\n> \n> How do you see it not slowing down, when key developers said their view is that\n> multithreading will pose a major obstacle? Are you envisioning any special\n> approach not already talked about?\n\nExcuse my butting in, but it large part we are talking about changing \nthings like:\n\nif (PqSomeStaticOrGlobalVariable) { ... }\n\nto \n\nif (MyPort->PqSomeVariable) { ... }\n\nconverting to thread safety should not, at least for this kind of low \nhanging fruit, have any negative performance impact. And from my vantage \npoint it takes out a whole lot of \"where did that come from and who set it \nwhen?\" kinda questions when reading the code. Of course I'm just getting \nmy feet wet so feel free to correct my first impressions.\n\nBrian\n\n", "msg_date": "Wed, 6 Feb 2002 22:32:51 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Wed, 6 Feb 2002, Brian Bruns wrote:\n\n> On Wed, 6 Feb 2002, Haroldo Stenger wrote:\n>\n> > > > that mutating PG thread safe, will slow down a 7.3 release a lot,\n> > > > something not wanted by many here.\n> > >\n> > > Depends on how it is handled ...\n> >\n> > How do you see it not slowing down, when key developers said their view is that\n> > multithreading will pose a major obstacle? Are you envisioning any special\n> > approach not already talked about?\n>\n> Excuse my butting in, but it large part we are talking about changing\n> things like:\n>\n> if (PqSomeStaticOrGlobalVariable) { ... }\n>\n> to\n>\n> if (MyPort->PqSomeVariable) { ... 
}\n>\n> converting to thread safety should not, at least for this kind of low\n> hanging fruit, have any negative performance impact. And from my vantage\n> point it takes out a whole lot of \"where did that come from and who set it\n> when?\" kinda questions when reading the code. Of course I'm just getting\n> my feet wet so feel free to correct my first impressions.\n\nThis is one way that it could be accomplished ... I think one of the more\nproper ways would be to convert the Global variables to proper function\ncalls ... a combination of the two would most likely be optimal ...\n\n\n", "msg_date": "Wed, 6 Feb 2002 23:39:27 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nBrian Bruns wrote:\n> \n> On Wed, 6 Feb 2002, Haroldo Stenger wrote:\n> \n> > > > that mutating PG thread safe, will slow down a 7.3 release a lot,\n> > > > something not wanted by many here.\n> > >\n> > > Depends on how it is handled ...\n> >\n> > How do you see it not slowing down, when key developers said their view is that\n> > multithreading will pose a major obstacle? Are you envisioning any special\n> > approach not already talked about?\n> \n> Excuse my butting in, but in large part we are talking about changing\n> things like:\n> \n> if (PqSomeStaticOrGlobalVariable) { ... }\n> \n> to\n> \n> if (MyPort->PqSomeVariable) { ... }\n> \n> converting to thread safety should not, at least for this kind of low\n> hanging fruit, have any negative performance impact. And from my vantage\n> point it takes out a whole lot of \"where did that come from and who set it\n> when?\" kinda questions when reading the code. Of course I'm just getting\n> my feet wet so feel free to correct my first impressions.\n\nJust that when I said \"will slow down a 7.3 release a lot\", I was referring to\n*the date of the release*, not its inherent performance, the code to be\nmulti-threaded or not. 
It was a software engineering sort of consideration. \n\nRegards,\nHaroldo.\n", "msg_date": "Thu, 07 Feb 2002 02:00:11 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nOn Wed, 6 Feb 2002, Marc G. Fournier wrote:\n\n> \n> Is this something that could be added to the distribution similar to some\n> of the other development tools? Is it a shell script?\n> \n\nNo, but I suppose it could and should be; I just used a combination of the\ncommands nm and grep to find all the global symbols in the object files of\neach subsection, then went through the code and determined if they needed\nto be moved. \n\nMyron\n\n\n", "msg_date": "Wed, 6 Feb 2002 21:30:07 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nOn Wed, 6 Feb 2002, Jeff Davis wrote:\n\n> I think in addition to pros/cons, an important question is:\n> How has threading influenced other DBMS's? I know MySQL uses threading, at \n> least in the development version; how much has it helped? Is the utility of a \n\nI think threads were or are a big deal for Informix (now IBM) Dynamic Server. \nWith a combination of multiple processes and threads it is able to spread a\nquery among multiple processors and recruit more resources for complex\nqueries.\n\n\nMyron\n\n", "msg_date": "Wed, 6 Feb 2002 21:39:45 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "Here I'll respectfully compile the opinions that I found of impact over a\ndecision:\n\nRevisited key developer opinion 1:\n\nTom Lane wrote:\n> > If someone wanted to submit appropriate patches for the v7.3 development\n> > tree, that merge cleanly, I can't see why this wouldn't be a good thing\n> \n> I would resist it. 
I do not think we need the portability and\n> reliability headaches that would come with it. Furthermore,\n> an #ifdef'd implementation would be the worst of all possible\n> worlds, as it would do major damage to readability of the code.\n\nRevisited key developer opinion 2:\n\nPeter Eisentraut wrote:\n> > Though, starting to think & code thread safe would be nice too.\n> \n> The thing about thread-safeness is that it's only actually useful when\n> you're using threads. Otherwise it wastes everybody's time -- the\n> programmer's, the computer's, and the user's.\n\n\nSo at least for Tom Lane and Peter E., threads are hard to implement. For Tom,\nwe would enter a world of portability and reliability headaches. For Peter,\nunless we *want* threads, we don't have to start *now* coding thread safe.\nPlease correct me if I'm wrong.\n\nZeugswetter Andreas SB SD wrote:\n> > If someone wanted to submit appropriate patches for the v7.3 development\n> > tree, that merge cleanly, I can't see why this wouldn't be a good thing\n> \n> I thought that the one thread instead of one process per client model\n> would only be an advantage for the \"native Windows port\" ?\n> \n> Imho a useful threaded model on unix would involve a separation of threads\n> and clients. ( 1 CPU thread per physical CPU, several IO threads)\n> But that would involve a complete redesign.\n\nFor Andreas, for a threaded PG to be useful under a Unix environment, a complete\nPG redesign would be needed.\n\n\n\"Marc G. Fournier\" wrote:\n> \n> On Wed, 6 Feb 2002, Haroldo Stenger wrote:\n> \n> > > Depends on how it is handled ...\n> >\n> > How do you see it not slowing down, when key developers said their view\n> > is that multithreading will pose a major obstacle? Are you envisioning\n> > any special approach not already talked about?\n> \n> Read my previous emails? 
To move *any part of PgSQL* to a threaded model\n> (even one where each connection is still forked, but parts of each\n> connection are threaded), the mess of global variables needs to be cleaned\n> up ... that will be one of the \"major obstacles\" ... if someone with a\n> knowledge of making code thread-safe were to submit patches (even very\n> large ones) that start to clean this up, it could be broken down into more\n> manageable chunks ...\n> \n\nYes, I too liked the idea of multiple processes, running multiple threads each,\ndistributed under some wise criteria.\n\nI wonder whether cleaning up the mess of global variables seems inconvenient from\nPeter's or Tom's point of view. Standard wisdom says globals should be avoided.\nIn current PG's case, they should be reworked in one way or another.\n\n> The second major obstacle that has been identified is cross-platform\n> compatibility ... as I mentioned already, and another has also, Apache2 has\n> their APR code that might help us reduce that obstacle to a more\n> manageable level, since, I believe, the Apache license wouldn't restrict\n> us to being able to use/distribute the code ... this is definitely\n> something that we'd have to look into to make sure though ...\n\nI agree with cross-pollination among open source projects. BTW, this practice\nshould be encouraged, and not called \"stealing\", not even as a joke, as I've\nseen it called for example for the TCP/IP Linux stack code (99% sure this was\nthe one module), which came from the *BSD projects, in its very first version.\nAlso mentioning that BSD -> GPL was possible, but not the other way round; I\ndon't mean to start a war or anything, just exposing facts.\n\n> The point is that nobody is even implying that this is a \"for v7.3\"\n> project ... 
there have been several projects that have been initiated over\n> the years that have straddled releases, and we have a lot of very good\n> developers, and testers, that will make sure that any changes are \"for the\n> good\" ...\n\nYes, I agree. If starting to think & code thread safe *now* proves *not* to be a\nwaste of everybody's time, that's the path to follow. This very point is the one\nunder technical examination, right? \n\nRegards,\nHaroldo.\n", "msg_date": "Thu, 07 Feb 2002 02:43:42 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Wed, 2002-02-06 at 23:00, mkscott@sacadia.com wrote:\n> \n> \n> On Wed, 6 Feb 2002, Marc G. Fournier wrote:\n> \n> > Right now, from everything I've heard, making the code thread-safe is one\n> > big onerous task ... but if we were to start incorporating changes from\n> > the 'thread work' that is being done now, into the base server, and ppl\n> > start thinking thread-safe when they are coding new stuff, over time, this\n> > task becomes smaller ...\n> > \n> \n> I agree, once the move is made to thread-safe it becomes much easier to\n> maintain thread-safe code. I also very much like the idea of multiple\n> thread/process models that could be chosen from. I think the question has\n> always been the\n> initial cost vs. benefit. The group has not seen much to be gained for\n> the amount of initial work involved. After working with the code, I too\n> felt it wasn't worth it. \n> \n> After revisiting the threaded code after a long break I now see some real\n> benefits to threading. For example, I was able to incorporate Tom Lane's\n> lazy_vacuum code to do relation cleanup automatically when a threshold of\n> page writes occurred. 
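The page-write threshold just described might look, in outline, like this. This is a hypothetical sketch; the names and the threshold value are invented, not taken from the actual patch:

```c
#include <assert.h>

/* Hypothetical sketch: fire a lazy vacuum once enough dirty pages
 * have been written.  Threshold and names are invented. */
#define PAGE_WRITE_THRESHOLD 1000

static unsigned page_writes_since_vacuum = 0;
static unsigned vacuums_run = 0;

/* Stand-in for handing the relation over to the lazy-vacuum code. */
static void
lazy_vacuum_relation(void)
{
    vacuums_run++;
}

/* Imagined hook, called by the buffer manager after each page write. */
static void
count_page_write(void)
{
    if (++page_writes_since_vacuum >= PAGE_WRITE_THRESHOLD)
    {
        lazy_vacuum_relation();
        page_writes_since_vacuum = 0;
    }
}
```

In a threaded process such a counter could live in ordinary process memory visible to every connection thread, without needing a slot in the shared-memory segment.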
\n\nCould you please explain why it was easier to do with your threaded\nversion than with the standard version?\n\n> I was also able to use the freespace information to\n> be shared among threads in the process without touching shared mem. As a\n> result, a pgbench run with 20 clients and over 1,000,000\n> transactions maintained a more or less constant tps with manual\n> vacuum commands and far less heap expansion.\n\nDo you mean that \"it ran at more or less the same speed as when running\nconcurrent manual VACUUMs\"?\n\nBtw, have you tried comparing pgbench runs on the threaded model vs. the\nforked model? IIRC your code can run both ways.\n\n> You can do this with processes (planned for 7.3 I think) but I\n> think it was much easier with threads. Other things may open up with\n> threads as well like Java stored procedures. 
Anyway, now I think it is\n> > worth it.\n\nIn my experience any code cleanup will eventually pay off (if the\nproject lives long enough :)\n\n---------\nHannu\n\n\n\n\n", "msg_date": "07 Feb 2002 12:03:56 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, Feb 07, 2002 at 12:03:56PM +0200, Hannu Krosing wrote:\n\n> Btw, have you tried comparing pgbench runs on threaded model vs forked\n> model? IIRC your code can run both ways.\n\n It depends on the OS. For example, fork and thread creation are very\n similar on Linux. Maybe there can be some speed difference between locking\n and access to shared memory?\n\n IMHO in the threaded version there is a problem with backend crashes\n (user's bugs in PL, etc.).\n\n> > You can do this with processes (planned for 7.3 I think) but I\n> > think it was much easier with threads. Other things may open up with\n> > threads as well like Java stored procedures. 
but I\nwould not allow that to happen, nor would any of the *core* developers ...\nwhat I am, and have been, advocating is starting down the 'thread-safe'\npath ... as has actually been discussed before, there are sections of\nPostgreSQL that could make use of threading without the whole system\n*being* threaded ... stuff that, right now, are done sequentially that\ncould be done in parralel if threading was available ...\n\n", "msg_date": "Thu, 7 Feb 2002 08:30:04 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 7 Feb 2002, Haroldo Stenger wrote:\n\n> Here I'll respectfully compile the opinions that I found of impact over a\n> dicision:\n>\n> Revisited key developer opinion 1:\n>\n> Tom Lane wrote:\n> > > If someone wanted to submit appropriate patches for the v7.3 development\n> > > tree, that merge cleanly, I can't see why this wouldn't be a good thing\n> >\n> > I would resist it. I do not think we need the portability and\n> > reliability headaches that would come with it. Furthermore,\n> > an #ifdef'd implementation would be the worst of all possible\n> > worlds, as it would do major damage to readability of the code.\n\nPut this into context ... I had suggested someone submit'ng #ifdef'd code\nthat could implement threaded, not that someone submit'd code to clean up\na mess that nobody *really* wants to clean up due to time and lack of\nvisibility/glory *grin*\n\n\n> > Revisited key developer opinion 2:\n>\n> Peter Eisentraut wrote:\n> > > Though, starting to think & code thread safe would be nice too.\n> >\n> > The thing about thread-safeness is that it's only actually useful when\n> > you're using threads. 
Otherwise it wastes everybody's time -- the\n> > programmer's, the computer's, and the user's.\n>\n>\n> So at least for Tom Lane and Peter E., threads are hard to implement.\n> For Tom, we would enter a world of portability and reliability\n> headaches. For Peter, unless we *want* threads, we don't have to start\n> *now* coding thread safe. Please correct me if I'm wrong.\n\nyes and no ... Tom is/was looking at it from an 'implement it for all the\nsystems we currently support' point of view, without looking at (and Tom,\nfeel free to correct me if I'm wrong) what has been implemented outside of\nour project to simplify the portability and reliability issues associated\nwith supporting both a fork and fork/thread model ... with the work that\nthe Apache group has done in this regard, and the fact that their license\nis not restrictive, both issues may (or may not) be moot, but someone has\nto investigate that ...\n\nIn Peter's case ... I'm sorry, but I was always taught in programming that\n\"global variables should be avoided at all costs\" ... right now, all I'm\nadvocating *right now* is making our variables thread safe, which, from my\nunderstanding, means getting rid of the global variables ... not sure how\nthat affects the users themselves, but, from a programmers standpoint, the\n'time' is what the person cleaning the code has to put into it ... once\nits cleaned up, any new code or changes should just automatically be\n\"global variables aren't permitted\"\n\nBoth Tom and Peter have better/more important things on their plates then\nto go through the code and clean up the global variables ...\n\nEventually, I would like to see, where possible, threaded code put in so\nthat each connection is *still* forked, but parts of the connection that\ncould deal with more parralel processing making use of threads to speed it\nup ...\n\n> I wonder if cleaning up the mess of global variables, seems not\n> convenient from Peter's or Tom's point of view. 
Standard wisdom says\n> globals should be avoided. In current PG's case, they should be reworked\n> in a way or another.\n\nCorrect, and that is what I am currently advocating ... if we get that\ncleaned up, so that 'threaded' is possible, nothing stops the next step\nbeing someone submit'ng a simple patch that uses threading to 'read from\ndisk while processing what has been read in, as it is being read in' ...\nthe point is, until we clean out the *time consuming, but relatively easy*\nanti-thread issues we have, even if that is over several releases, nothing\nelse is going to happen cause \"its too big of a job\" ... what I would like\nto see is someone submitting large patches that clean the global\nvariables, one global at a time ... I say large, because I would imagine\nthat pretty much any global is going to hit a *large* number of files to\nremove it, and add it back in as an arg to functions ...\n\nI can't see anyone convincingly argue against such patches, since, IMHO,\nglobal variables are a remenent of when we took over the code from\nBerkeley, I can't see any of the core developers actually *approving* of\nthem being there except the work involved in removing them ... :)\n\n\n", "msg_date": "Thu, 7 Feb 2002 09:52:42 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "Haroldo Stenger wrote:\n<snip>\n> \n> I agree with cross-polinization among open source projects. 
BTW, this practice\n> should be encouraged, and not called \"stealing\", not even as a joke, as I've\n> seen it called for example for the TCP/IP Linux stack code (99% sure this was\n> the one module), which came from the *BSD projects, in its very first version.\n> Also mentioning that BSD -> GPL was possible, but not the other way round; I\n> don't mean to start a war or anything, just exposing facts.\n> \n> > The point is that nobody is even implying that this is a \"for v7.3\"\n> > project ... there have been several projects that have been initiated over\n> > the years that have straddled releases, and we have alot of very good\n> > developers, and testers, that will make sure that any changes are \"for the\n> > good\" ...\n> \n> Yes, I agree. If starting to think & code thread safe *now* proves *not* to be a\n> waste of everybody's time, that's the path to follow. This very point is the one\n> under technical examination, right?\n\nSo, with this thought in mind of \"starting to think & code thread safe\",\nwe should start putting together a set of reference guidlines,\nespecially drawing on the experience of people whom have good, solid\nexperience with threaded, multi-process, cross-platform coding. It\nshould take into account the people who are reading it, may not be as\nexperienced in this um... specialised area of coding too.\n\nWe've identified \"global variables\" needing to be done in a better and\nmore consistent way.\n\nSo, what else do coders need to do when \"thinking and coding thread\nsafe\", that we can make into a guidline for forthcoming PostgreSQL\ncoding?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Regards,\n> Haroldo.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 08 Feb 2002 01:31:00 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\n> Again, if we go at it as 'threaded for v7.3', then most probably ... but I\n> would not allow that to happen, nor would any of the *core* developers ...\n> what I am, and have been, advocating is starting down the 'thread-safe'\n> path ... as has actually been discussed before, there are sections of\n> PostgreSQL that could make use of threading without the whole system\n> *being* threaded ... stuff that, right now, are done sequentially that\n> could be done in parralel if threading was available ...\n\n\nHow about doing what Marc suggests and start moving toward reentrant\nfunctions in postgres. \n\nThis could be done by creating a global private\nmemory area that is accessed much like shared memory is now with a hash\ntable setting aside memory for various code subsections. We could put\nall the global variables there with little impact on current functionality\nand, if done right, speed. I think I have a good idea as to where most of\nthe \"difficult\" globals are and could start working on moving them once\nthe global memory area was set up. We can worry about threads vs.\nprocesses later.\n\n\ncomments?\n\nMyron\n\n\n\n", "msg_date": "Thu, 7 Feb 2002 08:13:02 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nOn Thu, 7 Feb 2002, mlw wrote:\n\n> \n> Going from a \"process model\" to a \"threaded model\" is a HUGE\n> undertaking. In the process model, all data is assumed to be private,\n> and shared data must be explicitly shared. 
In a threaded model all data\n> is implicitly shared and private data must be explicitly made private.\n> Do not under estimate what this means or how hard it is to convert one\n> to the other.\n\nAgreed.\n\n> \n> Also:\n> \n> Think of file handles. In a threaded version of postgreSQL, all\n> connections will be competing for file handles. I think the limit in\n> Linux is 1024.\n> \n\nYes, but because the current file manager is built with three layers of\nabsraction OS FD --> Postgres Vfd --> Postgres Storage Manager it is\npossible to manage and configure this very nicely. For threaded postgres,\neach thread has its own storage manager which share Vfd's to sharing max.\nThis prevents too many threads from trying to seek on the same OS FD. The\nVfd's manage OS FD resources.\n\n> All threads will be competing for memory mapping. As systems get more\n> and more RAM, on the x86 and other 32 bit machines, process space is\n> limited to 2 to 3 gig. If you have 8 gig in your system, PostgreSQL\n> won't be able to use it.\n> \n\nYou should be able to set up several processes in shared memory for the\ndb. 5 processes * 256 client threads per process = 1280 clients or\nsomething like that. \n\n> As I have said before, multithreading queries within a connection\n> process would be pretty cool, on a low load server, \n\nI think this would be possible now if I knew how to spin out subqueries\nfrom the query tree.\n\n\nMyron\nmkscott@sacadia.com\n\n", "msg_date": "Thu, 7 Feb 2002 09:10:19 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "Justin Clift wrote:\n> \n> Haroldo Stenger wrote:\n> <snip>\n> >\n> > I agree with cross-polinization among open source projects. 
BTW, this practice\n> > should be encouraged, and not called \"stealing\", not even as a joke, as I've\n> > seen it called for example for the TCP/IP Linux stack code (99% sure this was\n> > the one module), which came from the *BSD projects, in its very first version.\n> > Also mentioning that BSD -> GPL was possible, but not the other way round; I\n> > don't mean to start a war or anything, just exposing facts.\n> >\n> > > The point is that nobody is even implying that this is a \"for v7.3\"\n> > > project ... there have been several projects that have been initiated over\n> > > the years that have straddled releases, and we have alot of very good\n> > > developers, and testers, that will make sure that any changes are \"for the\n> > > good\" ...\n> >\n> > Yes, I agree. If starting to think & code thread safe *now* proves *not* to be a\n> > waste of everybody's time, that's the path to follow. This very point is the one\n> > under technical examination, right?\n> \n> So, with this thought in mind of \"starting to think & code thread safe\",\n> we should start putting together a set of reference guidlines,\n> especially drawing on the experience of people whom have good, solid\n> experience with threaded, multi-process, cross-platform coding. It\n> should take into account the people who are reading it, may not be as\n> experienced in this um... specialised area of coding too.\n> \n> We've identified \"global variables\" needing to be done in a better and\n> more consistent way.\n> \n> So, what else do coders need to do when \"thinking and coding thread\n> safe\", that we can make into a guidline for forthcoming PostgreSQL\n> coding?\n\nGoing from a \"process model\" to a \"threaded model\" is a HUGE\nundertaking. In the process model, all data is assumed to be private,\nand shared data must be explicitly shared. 
In a threaded model all data\nis implicitly shared and private data must be explicitly made private.\nDo not under estimate what this means or how hard it is to convert one\nto the other.\n\nAlso:\n\nThink of file handles. In a threaded version of postgreSQL, all\nconnections will be competing for file handles. I think the limit in\nLinux is 1024.\n\nAll threads will be competing for memory mapping. As systems get more\nand more RAM, on the x86 and other 32 bit machines, process space is\nlimited to 2 to 3 gig. If you have 8 gig in your system, PostgreSQL\nwon't be able to use it.\n\nAs I have said before, multithreading queries within a connection\nprocess would be pretty cool, on a low load server, this could make a\nbig performance increase, but it may be easier to create a couple I/O\nthreads per connection process and devise some queuing mechanism for\ndisk reads/write. In essence provide an asynchronous I/O system. This\nwould give you the some of the performance of multithreading a query,\nwhile not requiring a complete thread-safe implementation.\n\nI think threading connections is a VERY bad idea. I am dubious that the\namount of work will result in a decent return on investment.\n", "msg_date": "Thu, 07 Feb 2002 12:13:03 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "...\n> As I have said before, multithreading queries within a connection\n> process would be pretty cool, on a low load server, this could make a\n> big performance increase, but it may be easier to create a couple I/O\n> threads per connection process and devise some queuing mechanism for\n> disk reads/write. In essence provide an asynchronous I/O system. 
This\n> would give you the some of the performance of multithreading a query,\n> while not requiring a complete thread-safe implementation.\n\nThe other use case would be a high load server with only one or a few\nconnections (big queries, few clients); see below.\n\n> I think threading connections is a VERY bad idea. I am dubious that the\n> amount of work will result in a decent return on investment.\n\nAgreed. A subset area which *might* be a benefit for the use case above\nis to allow threading of subqueries, which might happen after the\noptimizer section of code. That is a (pretty big) fraction of the code,\nnot all of it, and it would still continue the benefits of the\nprocess-per-client model while allowing a client to spread across\nmultiple processors.\n\nThe other area which could be exploited with restructuring to allow\npost-optimizer threading is distributed databases, where each of those\nsubqueries could be rerouted to another server.\n\nA first cut would be to allow read-only distributed databases; that\nmight demote the nomenclature for this to federated databases, but it is\nstill an interesting capability.\n\n - Thomas\n", "msg_date": "Thu, 07 Feb 2002 17:22:56 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 2002-02-07 at 19:13, mlw wrote:\n> Justin Clift wrote:\n> \n> Also:\n> \n> Think of file handles. In a threaded version of postgreSQL, all\n> connections will be competing for file handles. I think the limit in\n> Linux is 1024.\n\n From what I've seen we are more likely to hit the per-system file handle\nlimit when all separate forks open the same files over and over again,\nso as the number of processes grows we will be worse off than usin the\nsame file handles for all connections in threaded mode. \n\n> I think threading connections is a VERY bad idea. 
I am dubious that the\n> amount of work will result in a decent return on investment.\n \nThis whole thread started with a notion that this has already been done\nonce and the idea was to investigate what could be brought over to main\nforked-only (the threaded version could be forked at the same time)\ncodebase.\n\n----------------\nHannu\n\n\n", "msg_date": "07 Feb 2002 20:40:53 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 2002-02-07 at 12:49, Karel Zak wrote:\n\n> IMHO in thread version is problem with backend crash (user's bugs in \n> PL .etc).\n\nThe current behaviour for crashing one backend is also \"terminate all\nbackends as something bad may have happened to shared memory\".\n\n----------\nHannu\n\n\n", "msg_date": "07 Feb 2002 20:45:23 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 7 Feb 2002, mlw wrote:\n\n> As I have said before, multithreading queries within a connection\n> process would be pretty cool, on a low load server, this could make a\n> big performance increase, but it may be easier to create a couple I/O\n> threads per connection process and devise some queuing mechanism for\n> disk reads/write. In essence provide an asynchronous I/O system. This\n> would give you the some of the performance of multithreading a query,\n> while not requiring a complete thread-safe implementation.\n>\n> I think threading connections is a VERY bad idea. I am dubious that the\n> amount of work will result in a decent return on investment.\n\nI don't believe anyone (or, at least I hope not) is advocating threading\nconnections ... with systems getting more and more CPUs, and more and more\nRAM, what I'm advocating is looking at taking pieces from within the\nconnection itself and threading those, to improve performance ... 
from\nwhat I can tell with Apache2 itself, there is no \"thread only\" model that\nthey are advocating ... the closest is their 'worker' where you can have\nmultiple connections threaded in multiple processes, so, in theory, you\ncould limit to a large number of threads and a very low number of\nprocesses ...\n\n", "msg_date": "Thu, 7 Feb 2002 14:54:47 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\n\nOn Thu, 7 Feb 2002, Marc G. Fournier wrote:\n\n> \n> I don't believe anyone (or, at least I hope not) is advocating threading\n> connections ... with systems getting more and more CPUs, and more and more\n> RAM, what I'm advocating is looking at taking pieces from within the\n> connection itself and threading those, to improve performance ... from\n> what I can tell with Apache2 itself, there is no \"thread only\" model that\n> they are advocating ... the closest is their 'worker' where you can have\n> multiple connections threaded in multiple processes, so, in theory, you\n> could limit to a large number of threads and a very low number of\n> processes ...\n\nMaking postgres functions thread-safe increases the\nflexibility of the codebase. Whether threading connections, sub-queries,\nincreasing processor utilization, or some other unforeseen optimization,\nhaving reentrant and thread-safe code leaves the door open for new ideas.\nYes, writing reentrant code can be restrictive and a little more complex,\nbut not much; the big work is the upfront cost of porting. 
I have done it\nonce and gained a great deal on projects that I am working on.\n\nMyron\nmkscott@sacadia.com\n\n", "msg_date": "Thu, 7 Feb 2002 11:25:08 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 7 Feb 2002, mlw wrote:\n> \n> Going from a \"process model\" to a \"threaded model\" is a HUGE\n> undertaking. In the process model, all data is assumed to be private,\n> and shared data must be explicitly shared. In a threaded model all data\n> is implicitly shared and private data must be explicitly made private.\n> Do not underestimate what this means or how hard it is to convert one\n> to the other.\n> \n\nI agree with the first and last sentence ... the rest of the paragraph is \n... well we argued this before - look in the archives.\n\n> Also:\n> \n> Think of file handles. In a threaded version of postgreSQL, all\n> connections will be competing for file handles. I think the limit in\n> Linux is 1024.\n\nDepends on how it is done.\n\n> All threads will be competing for memory mapping. As systems get more\n> and more RAM, on the x86 and other 32 bit machines, process space is\n> limited to 2 to 3 gig. If you have 8 gig in your system, PostgreSQL\n> won't be able to use it.\n\nDepends on how it is done.\n\n> I think threading connections is a VERY bad idea. I am dubious that the\n> amount of work will result in a decent return on investment.\n\nDepends on how it is done. We should be careful not to assume that threading \npostgresql instantly equates to threading connections. That is only *ONE* \npossible type of threading architecture one could choose. Making broad \ngeneralized statements doesn't accomplish anything in this debate ... \ninstead be more focused with your comments so one can make heads or tails \nout of them.\n\n-- \n//========================================================\\\\\n|| D. 
Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Thu, 7 Feb 2002 14:16:54 -0600 (CST)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\"D. Hageman\" wrote:\n> \n> On Thu, 7 Feb 2002, mlw wrote:\n> >\n> > Going from a \"process model\" to a \"threaded model\" is a HUGE\n> > undertaking. In the process model, all data is assumed to be private,\n> > and shared data must be explicitly shared. In a threaded model all data\n> > is implicitly shared and private data must be explicitly made private.\n> > Do not underestimate what this means or how hard it is to convert one\n> > to the other.\n> >\n> \n> I agree with the first and last sentence ... the rest of the paragraph is\n> ... well we argued this before - look in the archives.\n\nyes, I know.\n> \n> > Also:\n> >\n> > Think of file handles. In a threaded version of postgreSQL, all\n> > connections will be competing for file handles. I think the limit in\n> > Linux is 1024.\n> \n> Depends on how it is done.\n\nHow does it depend? If you have one process with multiple threads, you\nwill bump up against the process limit of file handles.\n\n> \n> > All threads will be competing for memory mapping. As systems get more\n> > and more RAM, on the x86 and other 32 bit machines, process space is\n> > limited to 2 to 3 gig. If you have 8 gig in your system, PostgreSQL\n> > won't be able to use it.\n> \n> Depends on how it is done.\n\nAgain, how does it depend? If you have one process, there is a limit to\nthe amount of memory it can access. 3gig (2gig on older Windows) of\nprocess space is a classic limitation of x86 operating systems.\n\n> \n> > I think threading connections is a VERY bad idea. I am dubious that the\n> > amount of work will result in a decent return on investment.\n> \n> Depends on how it is done. 
We should be careful not to assume that threading\n> postgresql instantly equates to threading connections. That is only *ONE*\n> possible type of threading architecture one could choose. Making broad\n> generalized statements doesn't accomplish anything in this debate ...\n> instead be more focused with your comments so one can make heads or tails\n> out of them.\n\nThere are, AFAIK, two reasons to thread PostgreSQL:\n\n(1) Run the multiple connections in their own thread with the assumption\nthat this is more efficient for [n] reasons.\n(2) Run a single query across multiple threads, thus parallelizing the\nquery engine.\n\nThere is a mutant of this as well: (1a) You could have multiple\nprocesses each with [n] connection threads.\n\nAs far as PostgreSQL is concerned, I am dubious that (1) or (1a) will\nprovide any real benefit for the amount of work required to accomplish\nit. Work on \"pre-forking\" would be FAR more productive.\n\nThe idea of parallelizing queries could be very worthwhile. However,\nthat being said, creating a set of I/O threads that get blocks from disk\ndevices asynchronously, may be enough with a very limited amount of work.\n\nI guess all I am saying is that a person's time is really the only\nlimited resource. Tom, Bruce, Marc, Peter and everyone else have a\nlimited amount of time. If I could influence how those guys spend their\ntime, I would hope they spent time working on improving the\nfunctionality of PostgreSQL, not the tedium of making it thread safe.\n\n\n\n\n\n\n> \n> --\n> //========================================================\\\\\n> || D. 
Hageman <dhageman@dracken.com> ||\n> \\\\========================================================//\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Thu, 07 Feb 2002 16:39:03 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 7 Feb 2002, mlw wrote:\n\n<SNIP a bunch of crap that will hopefully be implicitly explained \nand understood by the comments below>\n\n> There are, AFAIK, two reasons to thread PostgreSQL:\n> \n> (1) Run the multiple connections in their own thread with the assumption\n> that this is more efficient for [n] reasons.\n> (2) Run a single query across multiple threads, thus parallelizing the\n> query engine.\n\n(3) Parallelize housekeeping (for example vacuums) of the database. I \nthink they are going to call this processes or something slated for the \nnext version? \n\n(4) Replication\n\n(5) Referential Integrity cleanups\n\n(6) EXOTIC FEATURES: crossdb\n\nOh yeah ... and we might be able to drop the whole startup time section \nfrom the TODO list. It all depends on how one wants to implement the \nthreads into postgresql. Then again ... maybe a task of this endeavor \nwould be more appropriately forked off and proceeded on as a separate \nproject (as it kinda has already been done).\n\n> I guess all I am saying is that a person's time is really the only\n> limited resource. Tom, Bruce, Marc, Peter and everyone else have a\n> limited amount of time. 
If I could influence how those guys spend their\n> time, I would hope they spent time working on improving the\n> functionality of PostgreSQL, not the tedium of making it thread safe.\n\nThe people that do the biggest amount of coding should definitely code \nwhat they feel is the best to work on - NO one is arguing that. If a few \nof them want to assist in this endeavor then they should do that as well. \nMost importantly - we shouldn't belittle the efforts of those that do see \nthe vision of how this could be beneficial in the long run. My point is \nthat I see more people wasting time complaining than it would take to make \nup a list of coding practices to follow for future work that will make the \npostgresql code base better. (Come on ... the first thing a programmer is \ntaught is that global variables are BAD).\n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Thu, 7 Feb 2002 16:10:36 -0600 (CST)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\"D. Hageman\" <dhageman@dracken.com> writes:\n\n> (3) Parallelize housekeeping (for example vacuums) of the database. I \n> think they are going to call this processes or something slated for the \n> next version? \n> \n> (4) Replication\n> \n> (5) Referential Integrity cleanups\n> \n> (6) EXOTIC FEATURES: crossdb\n\nI fail to see how threads are required for any of these. They could\njust as well be done with a separate process(es) in the current model.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "07 Feb 2002 17:30:27 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\"D. 
Hageman\" <dhageman@dracken.com> writes:\n> (Come on ... the first thing a programmer is \n> taught is that global variables are BAD).\n\nReality check time: I don't believe there are very many\ngratuitously-static variables in the backend. Most of the ones I can\nthink of offhand are associated with data structures that are actually\nglobal, or at least would be of interest to more than one thread.\n(For example, the catcache/relcache data structures are referenced from\nstatic variables. You would very likely want these caches to be shared\nacross as many threads as possible. The data structures associated with\nconfiguration variables would need to be shared by all threads executing\non behalf of a particular client connection. Etc.) So the hard part of\nmaking the code \"thread safe\" is figuring out what we want to do with\npotentially-sharable data structures: can they be shared, if so across\nwhat scope, and what sort of locking penalty will we pay for sharing\nthem?\n\nMaybe I'm missing something, but I don't think that a \"coding practices\"\ndocument will do much of anything to improve our threading situation.\nIt might be worth having on other grounds, but not that one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 17:41:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server " }, { "msg_contents": "On 7 Feb 2002, Doug McNaught wrote:\n\n> \"D. Hageman\" <dhageman@dracken.com> writes:\n> \n> > (3) Parallelize house keeping (for example vacuums) of the database. I \n> > think they are going to call this processes or something slated for the \n> > next version? \n> > \n> > (4) Replication\n> > \n> > (5) Referential Integritity cleanups\n> > \n> > (6) EXOTIC FEATURES: crossdb\n> \n> I fail to see how threads are required for any of these. 
They could\n> just as well be done with a separate process(es) in the current model.\n> \n\nOh, I didn't realize the conversation was about what threads was \n\"required\" for completing. My mistake ... *cough* *cough*\n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Thu, 7 Feb 2002 17:31:03 -0600 (CST)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 7 Feb 2002, Tom Lane wrote:\n\n> \n> Maybe I'm missing something, but I don't think that a \"coding practices\"\n> document will do much of anything to improve our threading situation.\n> It might be worth having on other grounds, but not that one.\n> \n\nYou aren't missing anything. A document of coding practices with points \non using thread-safe functions etc. isn't going to revolutionize anything. \nHowever, it has the potential of being the best way to begin and soften \nthe cries of the luddites (which is the biggest problem at the momment).\n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Thu, 7 Feb 2002 17:36:52 -0600 (CST)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server " }, { "msg_contents": "On Thu, 7 Feb 2002 mkscott@sacadia.com wrote:\n\n> Making postgres functions thread-safe increases the flexibility of the\n> codebase. Whether threading connections, sub-queries, increasing\n> processor utilization, or some other unforseen optimization, having\n> reentrant and thread-safe code leaves the door open for new ideas. 
Yes,\n> writing reentrant code can be restrictive and a little more complex,\n> but not much, the big work is the upfront cost of porting. I have done\n> it once and gained a great deal on projects that I am working\n> on.\n\nWould you be willing to take what you've learnt and work with the current CVS\ntree towards making her thread-safe? Even small steps regularly taken\nbring us closer to being able to use even *some* threading in the backend\n...\n\n", "msg_date": "Thu, 7 Feb 2002 20:44:31 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 7 Feb 2002, mlw wrote:\n\n> How does it depend? If you have one process with multiple threads, you\n> will bump up against the process limit of file handles.\n\nSo? Use an OS that doesn't impose such limits, or lets you increase them?\n\n> Again, how does it depend? If you have one process, there is a limit to\n> the amount of memory it can access. 3gig (2gig on older Windows) of\n> process space is a classic limitation of x86 operating systems.\n\nBut, we aren't talking about one *big* process with many threads ... we\nare talking several processes that make use of threads to speed up various\nprocesses ... kinda like programming in C for 99% of a project, but going\nto assembly for stuff that could use that little bit of a boost ...\n\n> I guess all I am saying is that a person's time is really the only\n> limited resource. Tom, Bruce, Marc, Peter and everyone else have a\n> limited amount of time. If I could influence how those guys spend their\n> time, I would hope they spent time working on improving the\n> functionality of PostgreSQL, not the tedium of making it thread safe.\n\nExcept that, as several ppl have pointed out, that 'tedium' could result\nin functionality that we really don't have right now ... 
right now, with a\n\"non-threaded, single process per connection\", you really aren't making\n*as efficient of use* of a multi-CPU environment ... how many queries\nspend a good deal of time sitting in an I/O wait state because it has to\nwait until all the data is read from the drive before it can start\nprocessing? Going to a large database/application, on a Quad+ server,\nwhere you don't have *a lot* of queries happening, but those that do are\n*very* large ... that large query is currently stuck on the one CPU while\nthe other 3+ CPUs are sitting idle ... etc, etc ... there is functionality\nthat 'working around' in a non-threaded environment would be more tedious\nthan doing the code clean up itself, and, most likely, not near as\nefficient as it could be ...\n\nThe first step has to be taken *sometime*, and best to encourage it while\nwe have ppl around that have the *knowledge* to take it ... god, I can\nremember when doing the code cleanups to get configure integrated into our\nbuild process (there was a time where configure didn't exist) was a\ntedious process, but how many ppl out there could imagine us without it?\n\n", "msg_date": "Thu, 7 Feb 2002 20:54:06 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "At 04:39 PM 07-02-2002 -0500, mlw wrote:\n>\n>There are, AFAIK, two reasons to thread PostgreSQL:\n>\n>(1) Run the multiple connections in their own thread with the assumption\n>that this is more efficient for [n] reasons.\n>(2) Run a single query across multiple threads, thus parallelizing the\n>query engine.\n>\n>There is a mutant of this as well: (1a) You could have multiple\n>processes each with [n] connection threads.\n>\n>As far as PostgreSQL is concerned, I am dubious that (1) or (1a) will\n>provide any real benefit for the amount of work required to accomplish\n>it. 
Work on \"pre-forking\" would be FAR more productive.\n>\n>The idea of parallelizing queries could be very worthwhile. However,\n>that being said, creating a set of I/O threads that get blocks from disk\n>devices asynchronously, may be enough with a very limited amount of work.\n\n2) seems to be the only good argument for threads so far. 1) may only be\ntrue on certain O/Ses.\n\nThat said, are those large single queries typically CPU bound or IO bound\nor neither?\n\nIf they are IO bound then given my limited understanding it is not easy to\nsee how spreading the query over additional CPUs is going to help.\n\nI suggest that work on clustering postgresql may result in a more scalable\ngeneral solution than threaded postgresql. Looks to be more difficult, but\nthe benefits seem more tangible.\n\nCheerio,\nLink.\n\n", "msg_date": "Fri, 08 Feb 2002 11:35:31 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\n\nOn Thu, 7 Feb 2002, Marc G. Fournier wrote:\n\n> \n> Would you be willing to take what you've learnt and work with the current CVS\n> tree towards making her thread-safe? Even small steps regularly taken\n> bring us closer to being able to use even *some* threading in the backend\n> ...\n> \n\nI can definitely take a stab at it. Maybe I can make a test case with\nsome globals that are accessed often and submit some patches to see what\npeople think. 
Can I send them to you?\n\nMaybe we should assign someone (or a team) to be the 'thread strike force'.\nTheir job is to (at their leisure) tidy up various parts of the source code\nin such a way that they should not affect other parts. This should be done\nduring the release cycle, so there is plenty of time to test their changes.\n\nThen, once the whole source tree has had its stylistic improvements, it\nwould become easier to switch to a threaded/mpm model...\n\nChris\n\n", "msg_date": "Fri, 8 Feb 2002 16:09:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Thu, 7 Feb 2002 mkscott@sacadia.com wrote:\n\n>\n>\n>\n> On Thu, 7 Feb 2002, Marc G. Fournier wrote:\n>\n> >\n> > Would you be willing to take what you've learnt and work with the current CVS\n> > tree towards making her thread-safe? Even small steps regularly taken\n> > bring us closer to being able to use even *some* threading in the backend\n> > ...\n> >\n>\n> I can definitely take a stab at it. Maybe I can make a test case with\n> some globals that are accessed often and submit some patches to see what\n> people think. Can I send them to you?\n\nSend them through to pgsql-patches@postgresql.org ... since we are right\nat the start of the development cycle for v7.3, things should be a lot\neasier ... pretty much expect to send them in, have them reviewed and\ncommented upon by various developers as to how this should be done this\nway, and that shouldn't be done this way and have to re-submit ... :)\n\n", "msg_date": "Fri, 8 Feb 2002 09:06:40 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "On Fri, 8 Feb 2002, Christopher Kings-Lynne wrote:\n\n> > I can definitely take a stab at it. 
Maybe I can make a test case with\n> > some globals that are accessed often submit some patches to see what\n> > people think. Can I send them to you?\n>\n> Maybe we should assign someone (or a team) to be the 'thread strike force'.\n> Their job is to (at their leisure) tidy up various parts of the source code\n> in such a way that they should not affect other parts. This should be done\n> during the release cycle, so there is plenty of time to test their changes.\n>\n> Then, once the whole source tree has had its stylistic improvements, it\n> would become easier to switch to a threaded/mpm model...\n\nWoo hoo, he caught up with the thread *grin* *poke*\n\nYes, this is exactly what we've been discussing, while some have been\ntrying to tangent off onto side threads ...\n\n", "msg_date": "Fri, 8 Feb 2002 09:07:54 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n<snip>\n> \n> Woo hoo, he caught up with the thread *grin* *poke*\n> \n> Yes, this is exactly what we've been discussing, while some have been\n> trying to tangent off onto side threads ...\n\nI feel this would benefit from some kind of PostgreSQL specific guide\nfor new coders to follow. Doesn't have to be overdone, but it should at\nleast give people an idea of what stuff to keep in mind when coding.\n\n???\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 09 Feb 2002 00:37:19 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "<mkscott@sacadia.com> writes:\n> I can definitely take a stab aat it. 
Maybe I can make a test case with\n> some globals that are accessed often and submit some patches to see what\n> people think. Can I send them to you?\n\nI have a sneaking feeling that what you are going to come up with is a\nmulti-megabyte patch to convert CurrentMemoryContext into a non-global,\nwhich will require changing the parameter list of damn near every\nroutine in the backend.\n\nPersonally I will vote for rejecting such a patch, as it will uglify the\ncode (and break nearly all existing user-written extension functions)\nfar more than is justified by what it accomplishes: exactly zero, in\nterms of near-term usefulness.\n\nI think what's more interesting to discuss at this stage is the\nconsiderations I alluded to before: what are we going to do with the\ncaches and other potentially-sharable datastructures? Without a\ncredible design for those issues, there is no point in sweating the\nsmall-but-annoying stuff.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 11:17:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server " }, { "msg_contents": "On Fri, Feb 08, 2002 at 11:17:51AM -0500, Tom Lane wrote:\n> <mkscott@sacadia.com> writes:\n> > I can definitely take a stab at it. Maybe I can make a test case with\n> > some globals that are accessed often and submit some patches to see what\n> > people think. Can I send them to you?\n> \n> I have a sneaking feeling that what you are going to come up with is a\n> multi-megabyte patch to convert CurrentMemoryContext into a non-global,\n> which will require changing the parameter list of damn near every\n> routine in the backend.\n\n Sorry, I have not watched this discussion too carefully, but since I see that\n you are talking about PostgreSQL memory management and threads I\n have a note.\n\n Dan Horak and I worked for one year on the Mape project (http://mape.jcu.cz) and \n we have already ported the postgres memory management into a threaded daemon. 
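For illustration, a minimal sketch of the thread-specific-context idea (a hypothetical example, not the actual Mape code; the names ThreadEnv and GetEnv are invented here). Each thread that calls GetEnv() transparently gets its own environment via pthread_key_create, so code that reads a "global" context pointer keeps its call signatures unchanged:

```c
/* Hypothetical sketch (not the actual Mape code): each thread keeps its own
 * environment in thread-specific storage, so a "global" like
 * CurrentMemoryContext can be fetched through GetEnv() without changing the
 * signatures of the routines that use it. */
#include <pthread.h>
#include <stdlib.h>

typedef struct ThreadEnv {
    void *CurrentMemoryContext;   /* stand-in for the real context pointer */
} ThreadEnv;

static pthread_key_t env_key;
static pthread_once_t env_once = PTHREAD_ONCE_INIT;

static void make_env_key(void)
{
    pthread_key_create(&env_key, free);  /* destructor frees per-thread env */
}

/* Transparently returns the calling thread's private environment. */
static ThreadEnv *GetEnv(void)
{
    ThreadEnv *env;

    pthread_once(&env_once, make_env_key);
    env = pthread_getspecific(env_key);
    if (env == NULL)
    {
        env = calloc(1, sizeof(ThreadEnv));
        pthread_setspecific(env_key, env);
    }
    return env;
}

/* Demo helper: each worker "switches" to its own context and reads it back. */
static void *worker(void *arg)
{
    GetEnv()->CurrentMemoryContext = arg;
    return GetEnv()->CurrentMemoryContext;
}

int envs_are_thread_private(void)
{
    pthread_t a, b;
    void *ra, *rb;
    int ctx_a, ctx_b;

    pthread_create(&a, NULL, worker, &ctx_a);
    pthread_create(&b, NULL, worker, &ctx_b);
    pthread_join(a, &ra);
    pthread_join(b, &rb);
    return ra == (void *) &ctx_a && rb == (void *) &ctx_b;
}
```

The design point is that per-thread state is reached through an accessor instead of being passed as a parameter, which is exactly what keeps existing routines untouched.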
\n It works very well and it's a transparent solution -- you don't have to rewrite \n routines that use MemoryContextSwitchTo or palloc() and other stuff, \n because everything is based on thread-specific contexts (see the man page for \n pthread_key_create). With this solution you don't have to change too many\n things in the current code.\n \n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 8 Feb 2002 18:15:28 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server" }, { "msg_contents": "\n\nOn Fri, 8 Feb 2002, Tom Lane wrote:\n\n> I have a sneaking feeling that what you are going to come up with is a\n> multi-megabyte patch to convert CurrentMemoryContext into a non-global,\n> which will require changing the parameter list of damn near every\n> routine in the backend.\n\nWhile working with 7.0.2, I changed the call signature on only about 10\nfunctions. In the MemoryContext example,\nMemoryContextSwitchTo(<Any>MemoryContext) turned into \nMemoryContextSwitchTo(GetEnv()-><Any>MemoryContext). You may be able \nto do this with a #define. While profiling the\ncode, this actually had very little impact on CPU resources. There were\nsome hotspots where it made more sense to pass the global environment to\nthe function but the list is small.\n\n> \n> Personally I will vote for rejecting such a patch, as it will uglify the\n> code (and break nearly all existing user-written extension functions)\n> far more than is justified by what it accomplishes: exactly zero, in\n> terms of near-term usefulness.\n\nI don't think that user functions need be broken. 
As long as they use\npalloc, a recompile may be all that is needed.\n\n> \n> I think what's more interesting to discuss at this stage is the\n> considerations I alluded to before: what are we going to do with the\n> caches and other potentially-sharable datastructures? Without a\n> credible design for those issues, there is no point in sweating the\n> small-but-annoying stuff.\n\nAs far as caches go, I punted on sharing. Controlling access to the cache\nhash tables looked like a lot of work and I thought the contention for this\nresource would be high. So I had each thread build separate cache\nstructures. The one difference was I had the original cache build occur\nfrom memory rather than the file pg_internal.init. So when the first\nthread for a particular db is built, the cache structures are built in\nsystem memory and copied into the appropriate MemoryContext. Each\nsubsequent cache for the db is copied from main memory at thread build.\n\nOne place where sharing worked great was the file manager. I modified\nmd.c to share Vfd's. I made the maximum number of threads that could share\none Vfd configurable so that the number of Vfds created and the contention\nto those Vfd's could be balanced.\n\nIt seems obvious to me that we need to thread slowly and softly into this\narea so I promise I will not spend a ton of time mangling the whole CVS\ntree; that, most definitely, would be a waste of everybody's time. 
I think I can\nfind an example area that will be a small patch and submit it for review.\nHopefully this can get the ball rolling.\n\n\nMyron\nmkscott@sacadia.com\n\n", "msg_date": "Fri, 8 Feb 2002 09:36:00 -0800 (PST)", "msg_from": "<mkscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server " }, { "msg_contents": "<mkscott@sacadia.com> writes:\n> On Fri, 8 Feb 2002, Tom Lane wrote:\n>> I have a sneaking feeling that what you are going to come up with is a\n>> multi-megabyte patch to convert CurrentMemoryContext into a non-global,\n>> which will require changing the parameter list of damn near every\n>> routine in the backend.\n\n> While working with 7.0.2, I changed the call signature on only about 10\n> functions. In the MemoryContext example,\n> MemorycontextSwitchTo(<Any>MemoryContext) turned into \n> MemoryContextSwitchTo(GetEnv()-><Any>MemoryContext). You may be able \n> to do this with a #define.\n\nOh, I see. Okay, if we can hide the messiness inside #define's then it\nmight not be as bad as I was expecting. That'd also allow the overhead\nto be compiled away when we didn't need/want thread support, which'd be\neven nicer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 13:45:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded PosgreSQL server " } ]
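To illustrate the macro approach the thread above converges on (a hypothetical sketch, not actual backend code; the ThreadEnv name and the int-valued context stand-in are invented for brevity): in a threaded build the familiar global spelling expands to a per-thread lookup, while a non-threaded build compiles it down to a plain global, so the overhead disappears exactly as described.

```c
/* Hypothetical sketch: hide the per-thread environment behind a #define so
 * existing call sites keep the global-looking spelling.  An int stands in
 * for the real MemoryContext pointer purely for brevity. */

typedef struct ThreadEnv {
    int CurrentMemoryContext_;
} ThreadEnv;

#ifdef MULTITHREADED
extern ThreadEnv *GetEnv(void);   /* would do a thread-specific lookup */
#define CurrentMemoryContext (GetEnv()->CurrentMemoryContext_)
#else
static ThreadEnv single_env;      /* plain global: zero threading overhead */
#define CurrentMemoryContext (single_env.CurrentMemoryContext_)
#endif

/* Code written against the macro is identical in both builds. */
int MemoryContextSwitchTo(int new_context)
{
    int old = CurrentMemoryContext;
    CurrentMemoryContext = new_context;
    return old;
}
```

The attraction of this shape is that only the one definition site changes between builds; every caller, including extension code using the usual names, recompiles unmodified.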
[ { "msg_contents": "> From: Marc G. Fournier [mailto:scrappy@hub.org]\n> \n> \n> \n> ... can a few of you go take a peak and let me know if anything is\n> wrong/missing?\n> \n> \nI sent the following to pgsql-cygwin but so far had no response. I have\nsuccesfully used v7.2b5 so suspect something on my system has changed. I can\nconnect to 7.2 fine with PGAdmin. I'm probably just in a flap about nothing,\nbut thought I'd comment just in case.\nCheers,\n- Stuart\n\nHi,\n\tI've recently compiled postgresql 7.2 and seem to be having some\nproblems with the psql. (win98 se on a P3 900, cygipc 1.11)\n$ uname -a\nCYGWIN_98-4.10 BX3551TC 1.3.9(0.51/3/2) 2002-01-21 12:48 i686 unknown\nClient:\n$ psql template1\nSegmentation fault (core dumped)\nServer:\nDEBUG: pq_recvbuf: unexpected EOF on client connection\nDEBUG: incomplete startup packet\n\nSorry for lack of detail but I'm busy :( and was wondering if this is a\nrelease problem or just my system.\n- Stuart\n", "msg_date": "Tue, 5 Feb 2002 10:05:42 -0000 ", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "Re: v7.2 rolled last night ..." 
}, { "msg_contents": "my cygwin:\n./configure --enable-locale --enable-recode --with-maxbackends=128 --enable-\nodbc --without-tk --disable-rpath\nmake\nmake install\nPostgres is in directory /usr/local/pgsql/\n\n$ /usr/local/pgsql/bin/initdb --debug -D/var/pgsql7.23\nRunning with debug mode on.\n\ninitdb variables:\n PGDATA=/var/pgsql7.23\n datadir=/usr/local/pgsql/share\n PGPATH=/usr/local/pgsql/bin\n MULTIBYTE=\n MULTIBYTEID=0\n POSTGRES_SUPERUSERNAME=Administrator\n POSTGRES_BKI=/usr/local/pgsql/share/postgres.bki\n POSTGRES_DESCR=/usr/local/pgsql/share/postgres.description\n POSTGRESQL_CONF_SAMPLE=/usr/local/pgsql/share/postgresql.conf.sample\n PG_HBA_SAMPLE=/usr/local/pgsql/share/pg_hba.conf.sample\n PG_IDENT_SAMPLE=/usr/local/pgsql/share/pg_ident.conf.sample\nThe files belonging to this database system will be owned by user\n\"Administrator\n\".\nThis user must also own the server process.\n\ncreating directory /var/pgsql7.23... ok\ncreating directory /var/pgsql7.23/base... ok\ncreating directory /var/pgsql7.23/global... ok\ncreating directory /var/pgsql7.23/pg_xlog... ok\ncreating directory /var/pgsql7.23/pg_clog... ok\ncreating template1 database in /var/pgsql7.23/base/1...\n\nand then nothing hapens. Processor was 100% CPU all the night on process\npostgres, but this is all.\nDirectory /var/pgsql7.23/base/1 exists, but nothing is inside.\n Milan Roubal\n roubm9am@barbora.ms.mff.cuni.cz\n\n\n\n----- Original Message -----\nFrom: \"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>\nTo: \"'Marc G. Fournier'\" <scrappy@hub.org>; <pgsql-hackers@postgresql.org>\nSent: Tuesday, February 05, 2002 11:05 AM\nSubject: Re: [HACKERS] v7.2 rolled last night ...\n\n\n> > From: Marc G. Fournier [mailto:scrappy@hub.org]\n> >\n> >\n> >\n> > ... can a few of you go take a peak and let me know if anything is\n> > wrong/missing?\n> >\n> >\n> I sent the following to pgsql-cygwin but so far had no response. 
I have\n> succesfully used v7.2b5 so suspect something on my system has changed. I\ncan\n> connect to 7.2 fine with PGAdmin. I'm probably just in a flap about\nnothing,\n> but thought I'd comment just in case.\n> Cheers,\n> - Stuart\n>\n> Hi,\n> I've recently compiled postgresql 7.2 and seem to be having some\n> problems with the psql. (win98 se on a P3 900, cygipc 1.11)\n> $ uname -a\n> CYGWIN_98-4.10 BX3551TC 1.3.9(0.51/3/2) 2002-01-21 12:48 i686 unknown\n> Client:\n> $ psql template1\n> Segmentation fault (core dumped)\n> Server:\n> DEBUG: pq_recvbuf: unexpected EOF on client connection\n> DEBUG: incomplete startup packet\n>\n> Sorry for lack of detail but I'm busy :( and was wondering if this is a\n> release problem or just my system.\n> - Stuart\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Tue, 5 Feb 2002 11:44:32 +0100", "msg_from": "\"Milan Roubal\" <roubm9am@barbora.ms.mff.cuni.cz>", "msg_from_op": false, "msg_subject": "Re: v7.2 rolled last night ..." } ]
[ { "msg_contents": "\n> If someone wanted to submit appropriate patches for the v7.3 development\n> tree, that merge cleanly, I can't see why this wouldn't be a good thing\n> ...\n\nI thought that the one thread instead of one process per client model\nwould only be an advantage for the \"native Windows port\" ?\n\nImho a useful threaded model on unix would involve a separation of threads\nand clients. ( 1 CPU thread per physical CPU, several IO threads)\nBut that would involve a complete redesign.\n\nAndreas\n\n> > Are there any plans to merge the sources from the experimental threaded\n> > server and the forked server so that a compile switch could choose the\n> > model?\n", "msg_date": "Tue, 5 Feb 2002 11:45:57 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Threaded PostgreSQL server" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Henshall, Stuart - WCP \n> [mailto:SHenshall@westcountrypublications.co.uk] \n> Sent: 05 February 2002 10:06\n> To: 'Marc G. Fournier'; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] v7.2 rolled last night ...\n> \n> \n> > From: Marc G. Fournier [mailto:scrappy@hub.org]\n> > \n> > \n> > \n> > ... can a few of you go take a peak and let me know if anything is \n> > wrong/missing?\n> > \n> > \n> I sent the following to pgsql-cygwin but so far had no \n> response. I have succesfully used v7.2b5 so suspect something \n> on my system has changed. I can connect to 7.2 fine with \n> PGAdmin. I'm probably just in a flap about nothing, but \n> thought I'd comment just in case. Cheers,\n> - Stuart\n\nStuart, I'll take a look on XP as soon as I can (bit hectic here right now).\nI agree though, it did work before...\n\nRegards, Dave\n", "msg_date": "Tue, 5 Feb 2002 10:49:12 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: v7.2 rolled last night ..." } ]
[ { "msg_contents": "Sounds like you're not running the ipc-daemon.\n\nRegards, Dave.\n\n> -----Original Message-----\n> From: Milan Roubal [mailto:roubm9am@barbora.ms.mff.cuni.cz] \n> Sent: 05 February 2002 10:45\n> To: Henshall, Stuart - WCP; 'Marc G. Fournier'; \n> pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] v7.2 rolled last night ...\n> \n> \n> my cygwin:\n> ./configure --enable-locale --enable-recode \n> --with-maxbackends=128 --enable- odbc --without-tk \n> --disable-rpath make make install Postgres is in directory \n> /usr/local/pgsql/\n> \n> $ /usr/local/pgsql/bin/initdb --debug -D/var/pgsql7.23\n> Running with debug mode on.\n> \n> initdb variables:\n> PGDATA=/var/pgsql7.23\n> datadir=/usr/local/pgsql/share\n> PGPATH=/usr/local/pgsql/bin\n> MULTIBYTE=\n> MULTIBYTEID=0\n> POSTGRES_SUPERUSERNAME=Administrator\n> POSTGRES_BKI=/usr/local/pgsql/share/postgres.bki\n> POSTGRES_DESCR=/usr/local/pgsql/share/postgres.description\n> POSTGRESQL_CONF_SAMPLE=/usr/local/pgsql/share/postgresql.conf.sample\n> PG_HBA_SAMPLE=/usr/local/pgsql/share/pg_hba.conf.sample\n> PG_IDENT_SAMPLE=/usr/local/pgsql/share/pg_ident.conf.sample\n> The files belonging to this database system will be owned by \n> user \"Administrator \". This user must also own the server process.\n> \n> creating directory /var/pgsql7.23... ok\n> creating directory /var/pgsql7.23/base... ok\n> creating directory /var/pgsql7.23/global... ok\n> creating directory /var/pgsql7.23/pg_xlog... ok\n> creating directory /var/pgsql7.23/pg_clog... ok\n> creating template1 database in /var/pgsql7.23/base/1...\n> \n> and then nothing hapens. Processor was 100% CPU all the night \n> on process postgres, but this is all. Directory \n> /var/pgsql7.23/base/1 exists, but nothing is inside.\n> Milan Roubal\n> roubm9am@barbora.ms.mff.cuni.cz\n> \n> \n> \n> ----- Original Message -----\n> From: \"Henshall, Stuart - WCP\" \n> <SHenshall@westcountrypublications.co.uk>\n> To: \"'Marc G. 
Fournier'\" <scrappy@hub.org>; \n> <pgsql-hackers@postgresql.org>\n> Sent: Tuesday, February 05, 2002 11:05 AM\n> Subject: Re: [HACKERS] v7.2 rolled last night ...\n> \n> \n> > > From: Marc G. Fournier [mailto:scrappy@hub.org]\n> > >\n> > >\n> > >\n> > > ... can a few of you go take a peak and let me know if \n> anything is \n> > > wrong/missing?\n> > >\n> > >\n> > I sent the following to pgsql-cygwin but so far had no response. I \n> > have succesfully used v7.2b5 so suspect something on my system has \n> > changed. I\n> can\n> > connect to 7.2 fine with PGAdmin. I'm probably just in a flap about\n> nothing,\n> > but thought I'd comment just in case.\n> > Cheers,\n> > - Stuart\n> >\n> > Hi,\n> > I've recently compiled postgresql 7.2 and seem to be having some \n> > problems with the psql. (win98 se on a P3 900, cygipc 1.11) \n> $ uname -a\n> > CYGWIN_98-4.10 BX3551TC 1.3.9(0.51/3/2) 2002-01-21 12:48 \n> i686 unknown\n> > Client:\n> > $ psql template1\n> > Segmentation fault (core dumped)\n> > Server:\n> > DEBUG: pq_recvbuf: unexpected EOF on client connection\n> > DEBUG: incomplete startup packet\n> >\n> > Sorry for lack of detail but I'm busy :( and was wondering \n> if this is \n> > a release problem or just my system.\n> > - Stuart\n> >\n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to \n> majordomo@postgresql.org\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> majordomo@postgresql.org)\n> \n", "msg_date": "Tue, 5 Feb 2002 10:52:57 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: v7.2 rolled last night ..." } ]
[ { "msg_contents": "Sorry for the false alarm seem to have got it working after a clean compile.\n- Stuart\n\n> -----Original Message-----\n> From: Dave Page [mailto:dpage@vale-housing.co.uk]\n> Sent: 05 February 2002 10:53\n> To: 'Milan Roubal'; 'Henshall, Stuart - WCP'; 'Marc G. Fournier';\n> 'pgsql-hackers@postgresql.org'\n> Subject: RE: [HACKERS] v7.2 rolled last night ...\n> \n> \n> Sounds like you're not running the ipc-daemon.\n> \n> Regards, Dave.\n> \n> > -----Original Message-----\n> > From: Milan Roubal [mailto:roubm9am@barbora.ms.mff.cuni.cz] \n> > Sent: 05 February 2002 10:45\n> > To: Henshall, Stuart - WCP; 'Marc G. Fournier'; \n> > pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] v7.2 rolled last night ...\n> > \n> > \n> > my cygwin:\n> > ./configure --enable-locale --enable-recode \n> > --with-maxbackends=128 --enable- odbc --without-tk \n> > --disable-rpath make make install Postgres is in directory \n> > /usr/local/pgsql/\n> > \n> > $ /usr/local/pgsql/bin/initdb --debug -D/var/pgsql7.23\n> > Running with debug mode on.\n> > \n> > initdb variables:\n> > PGDATA=/var/pgsql7.23\n> > datadir=/usr/local/pgsql/share\n> > PGPATH=/usr/local/pgsql/bin\n> > MULTIBYTE=\n> > MULTIBYTEID=0\n> > POSTGRES_SUPERUSERNAME=Administrator\n> > POSTGRES_BKI=/usr/local/pgsql/share/postgres.bki\n> > POSTGRES_DESCR=/usr/local/pgsql/share/postgres.description\n> > \n> POSTGRESQL_CONF_SAMPLE=/usr/local/pgsql/share/postgresql.conf.sample\n> > PG_HBA_SAMPLE=/usr/local/pgsql/share/pg_hba.conf.sample\n> > PG_IDENT_SAMPLE=/usr/local/pgsql/share/pg_ident.conf.sample\n> > The files belonging to this database system will be owned by \n> > user \"Administrator \". This user must also own the server process.\n> > \n> > creating directory /var/pgsql7.23... ok\n> > creating directory /var/pgsql7.23/base... ok\n> > creating directory /var/pgsql7.23/global... ok\n> > creating directory /var/pgsql7.23/pg_xlog... ok\n> > creating directory /var/pgsql7.23/pg_clog... 
ok\n> > creating template1 database in /var/pgsql7.23/base/1...\n> > \n> > and then nothing hapens. Processor was 100% CPU all the night \n> > on process postgres, but this is all. Directory \n> > /var/pgsql7.23/base/1 exists, but nothing is inside.\n> > Milan Roubal\n> > roubm9am@barbora.ms.mff.cuni.cz\n> > \n> > \n> > \n> > ----- Original Message -----\n> > From: \"Henshall, Stuart - WCP\" \n> > <SHenshall@westcountrypublications.co.uk>\n> > To: \"'Marc G. Fournier'\" <scrappy@hub.org>; \n> > <pgsql-hackers@postgresql.org>\n> > Sent: Tuesday, February 05, 2002 11:05 AM\n> > Subject: Re: [HACKERS] v7.2 rolled last night ...\n> > \n> > \n> > > > From: Marc G. Fournier [mailto:scrappy@hub.org]\n> > > >\n> > > >\n> > > >\n> > > > ... can a few of you go take a peak and let me know if \n> > anything is \n> > > > wrong/missing?\n> > > >\n> > > >\n> > > I sent the following to pgsql-cygwin but so far had no \n> response. I \n> > > have succesfully used v7.2b5 so suspect something on my \n> system has \n> > > changed. I\n> > can\n> > > connect to 7.2 fine with PGAdmin. I'm probably just in a \n> flap about\n> > nothing,\n> > > but thought I'd comment just in case.\n> > > Cheers,\n> > > - Stuart\n> > >\n> > > Hi,\n> > > I've recently compiled postgresql 7.2 and seem to be having some \n> > > problems with the psql. 
(win98 se on a P3 900, cygipc 1.11) \n> > $ uname -a\n> > > CYGWIN_98-4.10 BX3551TC 1.3.9(0.51/3/2) 2002-01-21 12:48 \n> > i686 unknown\n> > > Client:\n> > > $ psql template1\n> > > Segmentation fault (core dumped)\n> > > Server:\n> > > DEBUG: pq_recvbuf: unexpected EOF on client connection\n> > > DEBUG: incomplete startup packet\n> > >\n> > > Sorry for lack of detail but I'm busy :( and was wondering \n> > if this is \n> > > a release problem or just my system.\n> > > - Stuart\n> > >\n> > > ---------------------------(end of \n> > > broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to \n> > majordomo@postgresql.org\n> > \n> > \n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to \n> > majordomo@postgresql.org)\n> > \n> \n", "msg_date": "Tue, 5 Feb 2002 12:03:48 -0000 ", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "Re: v7.2 rolled last night ..." } ]
[ { "msg_contents": "Yup, looks fine here too - all regression tests pass OK.\n\nRegards, Dave.\n\n> -----Original Message-----\n> From: Henshall, Stuart - WCP \n> [mailto:SHenshall@westcountrypublications.co.uk] \n> Sent: 05 February 2002 12:04\n> To: 'Dave Page'; 'Milan Roubal'; Henshall, Stuart - WCP; \n> 'Marc G. Fournier'; 'pgsql-hackers@postgresql.org'\n> Subject: RE: [HACKERS] v7.2 rolled last night ...\n> \n> \n> Sorry for the false alarm seem to have got it working after a \n> clean compile.\n> - Stuart\n> \n> > -----Original Message-----\n> > From: Dave Page [mailto:dpage@vale-housing.co.uk]\n> > Sent: 05 February 2002 10:53\n> > To: 'Milan Roubal'; 'Henshall, Stuart - WCP'; 'Marc G. Fournier'; \n> > 'pgsql-hackers@postgresql.org'\n> > Subject: RE: [HACKERS] v7.2 rolled last night ...\n> > \n> > \n> > Sounds like you're not running the ipc-daemon.\n> > \n> > Regards, Dave.\n> > \n> > > -----Original Message-----\n> > > From: Milan Roubal [mailto:roubm9am@barbora.ms.mff.cuni.cz]\n> > > Sent: 05 February 2002 10:45\n> > > To: Henshall, Stuart - WCP; 'Marc G. 
Fournier'; \n> > > pgsql-hackers@postgresql.org\n> > > Subject: Re: [HACKERS] v7.2 rolled last night ...\n> > > \n> > > \n> > > my cygwin:\n> > > ./configure --enable-locale --enable-recode\n> > > --with-maxbackends=128 --enable- odbc --without-tk \n> > > --disable-rpath make make install Postgres is in directory \n> > > /usr/local/pgsql/\n> > > \n> > > $ /usr/local/pgsql/bin/initdb --debug -D/var/pgsql7.23 \n> Running with \n> > > debug mode on.\n> > > \n> > > initdb variables:\n> > > PGDATA=/var/pgsql7.23\n> > > datadir=/usr/local/pgsql/share\n> > > PGPATH=/usr/local/pgsql/bin\n> > > MULTIBYTE=\n> > > MULTIBYTEID=0\n> > > POSTGRES_SUPERUSERNAME=Administrator\n> > > POSTGRES_BKI=/usr/local/pgsql/share/postgres.bki\n> > > POSTGRES_DESCR=/usr/local/pgsql/share/postgres.description\n> > > \n> > POSTGRESQL_CONF_SAMPLE=/usr/local/pgsql/share/postgresql.conf.sample\n> > > PG_HBA_SAMPLE=/usr/local/pgsql/share/pg_hba.conf.sample\n> > > PG_IDENT_SAMPLE=/usr/local/pgsql/share/pg_ident.conf.sample\n> > > The files belonging to this database system will be owned by\n> > > user \"Administrator \". This user must also own the server process.\n> > > \n> > > creating directory /var/pgsql7.23... ok\n> > > creating directory /var/pgsql7.23/base... ok\n> > > creating directory /var/pgsql7.23/global... ok\n> > > creating directory /var/pgsql7.23/pg_xlog... ok\n> > > creating directory /var/pgsql7.23/pg_clog... ok\n> > > creating template1 database in /var/pgsql7.23/base/1...\n> > > \n> > > and then nothing hapens. Processor was 100% CPU all the night\n> > > on process postgres, but this is all. Directory \n> > > /var/pgsql7.23/base/1 exists, but nothing is inside.\n> > > Milan Roubal\n> > > roubm9am@barbora.ms.mff.cuni.cz\n> > > \n> > > \n> > > \n> > > ----- Original Message -----\n> > > From: \"Henshall, Stuart - WCP\"\n> > > <SHenshall@westcountrypublications.co.uk>\n> > > To: \"'Marc G. 
Fournier'\" <scrappy@hub.org>; \n> > > <pgsql-hackers@postgresql.org>\n> > > Sent: Tuesday, February 05, 2002 11:05 AM\n> > > Subject: Re: [HACKERS] v7.2 rolled last night ...\n> > > \n> > > \n> > > > > From: Marc G. Fournier [mailto:scrappy@hub.org]\n> > > > >\n> > > > >\n> > > > >\n> > > > > ... can a few of you go take a peak and let me know if\n> > > anything is\n> > > > > wrong/missing?\n> > > > >\n> > > > >\n> > > > I sent the following to pgsql-cygwin but so far had no\n> > response. I\n> > > > have succesfully used v7.2b5 so suspect something on my\n> > system has\n> > > > changed. I\n> > > can\n> > > > connect to 7.2 fine with PGAdmin. I'm probably just in a\n> > flap about\n> > > nothing,\n> > > > but thought I'd comment just in case.\n> > > > Cheers,\n> > > > - Stuart\n> > > >\n> > > > Hi,\n> > > > I've recently compiled postgresql 7.2 and seem to be having some\n> > > > problems with the psql. (win98 se on a P3 900, cygipc 1.11) \n> > > $ uname -a\n> > > > CYGWIN_98-4.10 BX3551TC 1.3.9(0.51/3/2) 2002-01-21 12:48\n> > > i686 unknown\n> > > > Client:\n> > > > $ psql template1\n> > > > Segmentation fault (core dumped)\n> > > > Server:\n> > > > DEBUG: pq_recvbuf: unexpected EOF on client connection\n> > > > DEBUG: incomplete startup packet\n> > > >\n> > > > Sorry for lack of detail but I'm busy :( and was wondering\n> > > if this is\n> > > > a release problem or just my system.\n> > > > - Stuart\n> > > >\n> > > > ---------------------------(end of\n> > > > broadcast)---------------------------\n> > > > TIP 1: subscribe and unsubscribe commands go to \n> > > majordomo@postgresql.org\n> > > \n> > > \n> > > ---------------------------(end of\n> > > broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the \n> unregister command\n> > > (send \"unregister YourEmailAddressHere\" to \n> > > majordomo@postgresql.org)\n> > > \n> > \n> \n", "msg_date": "Tue, 5 Feb 2002 12:15:04 -0000 ", "msg_from": "Dave Page 
<dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: v7.2 rolled last night ..." } ]
[ { "msg_contents": "Hi all,\nstarting with 7.2, now() returns a time with milliseconds. If extracted\nfrom the db and displayed verbatim, it shows up as\n'2002-02-05 10:59:36.717176+02'.\n\nUnfortunately, I have a lot of code that displays the date/time directly\nfrom the db on a web page without any to_char transformation and now\nthat is quite harder to understand. Is there any way to have an implicit\nformatting back that trims the milliseconds on a per-connection\nvariable?\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n", "msg_date": "05 Feb 2002 14:19:26 +0200", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "Display of TIMESTAMP in 7.2" }, { "msg_contents": "\nInstead of using now(), you could use timenow(), which displays the date \nand time without the microseconds. \n\nThere's stuff on this in the manual, section 3.4 of the 7.1 documentation.\n\nJ\n\n\nAlessio Bragadini wrote:\n\n> Hi all,\n> starting with 7.2, now() returns a time with milliseconds. If extracted\n> from the db and displayed verbatim, it shows up as\n> '2002-02-05 10:59:36.717176+02'.\n> \n> Unfortunately, I have a lot of code that displays the date/time directly\n> from the db on a web page without any to_char transformation and now\n> that is quite harder to understand. Is there any way to have an implicit\n> formatting back that trims the milliseconds on a per-connection\n> variable?\n> \n\n", "msg_date": "Tue, 05 Feb 2002 10:37:45 -0500", "msg_from": "J Smith <dark_panda@hushmail.com>", "msg_from_op": false, "msg_subject": "Re: Display of TIMESTAMP in 7.2" }, { "msg_contents": "Alessio Bragadini <alessio@albourne.com> writes:\n> starting with 7.2, now() returns a time with milliseconds. 
If extracted\n> from the db and displayed verbatim, it shows up as\n> '2002-02-05 10:59:36.717176+02'.\n\nUse to_char if you want to control the formatting precisely. Or\nreplace now() with CURRENT_TIMESTAMP(0) (which not only does what\nyou want, but is standards-compliant).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Feb 2002 11:01:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Display of TIMESTAMP in 7.2 " }, { "msg_contents": "On Tue, 2002-02-05 at 18:01, Tom Lane wrote:\n\n> Use to_char if you want to control the formatting precisely. Or\n> replace now() with CURRENT_TIMESTAMP(0) (which not only does what\n> you want, but is standards-compliant).\n\nI did try current_timestamp() but not current_timestamp(0)...\n\nThank you very much.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n", "msg_date": "05 Feb 2002 18:06:57 +0200", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "Re: Display of TIMESTAMP in 7.2" }, { "msg_contents": "Try using CURRENT_TIMESTAMP instead of now() maybe?\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Alessio\n> Bragadini\n> Sent: Tuesday, 5 February 2002 8:19 PM\n> To: PostgreSQL Hackers\n> Subject: [HACKERS] Display of TIMESTAMP in 7.2\n> \n> \n> Hi all,\n> starting with 7.2, now() returns a time with milliseconds. If extracted\n> from the db and displayed verbatim, it shows up as\n> '2002-02-05 10:59:36.717176+02'.\n> \n> Unfortunately, I have a lot of code that displays the date/time directly\n> from the db on a web page without any to_char transformation and now\n> that is quite harder to understand. 
Is there any way to have an implicit\n> formatting back that trims the milliseconds on a per-connection\n> variable?\n> \n> -- \n> Alessio F. Bragadini\t\talessio@albourne.com\n> APL Financial Services\t\thttp://village.albourne.com\n> Nicosia, Cyprus\t\t \tphone: +357-22-755750\n> \n> \"It is more complicated than you think\"\n> \t\t-- The Eighth Networking Truth from RFC 1925\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Wed, 6 Feb 2002 10:03:21 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Display of TIMESTAMP in 7.2" } ]
[ { "msg_contents": "Hi all,\n\nI'm working on an implementation of the DRDA protocol and am planning on \nmodifying postgresql to support DRDA. (DRDA is an Open Group standard \nprotocol for client to database communications, promoted mostly by IBM). \n\nAnyway, as a first step towards this I was hoping to expand the \ndocumentation of the Frontend/Backend Protocol and create some detailed \ndeveloper oriented documentation of the internals of the networking code.\n\nCan someone point me towards any documentation that might currently be \navailable (I'm aware of the Developer's Guide), and any pointers on \ngetting started would be appreciated as well. ;-)\n\nI also noticed on the TODO list someone has put SQL*Net support as a \nnetwork protocol. Is this a serious plan or just a pipedream? Part of \nwhat I'm aiming to do is make the network protocol stuff fairly modular so \nyou could support the current protocol, and DRDA, and presumably SQL*Net \nor TDS (Microsoft/Sybases protocol), etc...\n\nCheers,\n\nBrian\n\n", "msg_date": "Tue, 5 Feb 2002 08:39:43 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": true, "msg_subject": "DRDA, network protocol, and documentation" }, { "msg_contents": "Brian Bruns <camber@ais.org> writes:\n> I also noticed on the TODO list someone has put SQL*Net support as a \n> network protocol. Is this a serious plan or just a pipedream?\n\nPipedream, I'm afraid. 
No one has volunteered to actually do the work,\nand I'm not sure that Oracle provides enough documentation to allow a\ncompatible implementation to be built, anyhow...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Feb 2002 11:03:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation " }, { "msg_contents": "On Tue, 2002-02-05 at 15:39, Brian Bruns wrote:\n> Hi all,\n> \n> I'm working on an implementation of the DRDA protocol and am planning on \n> modifying postgresql to support DRDA. (DRDA is an Open Group standard \n> protoc\n\nHopefully this will bring us PREPARE/EXECUTE support too.\n\nBTW, does DRDA have a notion of LISTEN/NOTIFY ?\n\n> Anyway, as a first step towards this I was hoping to expand the \n> documentation of the Frontend/Backend Protocol and create some detailed \n> developer oriented documentation of the internals of the networking code.\n\nProtocol is quite simple and AFAIK fully documented in Developers Guide.\n\nThe networking code seems to be \"Traffic Cop\" directory in\nsrc/backend/tcop/postgresql.c\n\nYou could click around in lxr I set up for my personal use on my desktop\npc at \nhttp://www.postsql.org/lxr/source/backend/tcop/postgres.c\nand find out where things are defined/used\n\nBut you should probably set up something similar for your own use if you\nwant to do some serious hacking.\n\n> Can someone point me towards any documentation that might currently be \n> available (I'm aware of the Developer's Guide), and any pointers on \n> getting started would be appreciated as well. 
;-)\n\nsocket.h is included only in a few places\n\n[hannu@taru src]$ grep -l socket.h */*/*.c\nbackend/libpq/auth.c\nbackend/libpq/hba.c\nbackend/libpq/pqcomm.c\nbackend/postmaster/pgstat.c\nbackend/postmaster/postmaster.c\nbackend/tcop/postgres.c\ninterfaces/libpq/fe-auth.c\ninterfaces/libpq/fe-connect.c\ninterfaces/libpq/fe-misc.c\ninterfaces/odbc/columninfo.c\ninterfaces/odbc/connection.c\ninterfaces/odbc/drvconn.c\ninterfaces/odbc/socket.c\n\n[hannu@taru src]$ grep -l socket.h */*/*/*.c\nbackend/utils/adt/inet_net_ntop.c\nbackend/utils/adt/inet_net_pton.c\nbackend/utils/adt/network.c\n\nYou can probably ignore interfaces/ for start and backend/utils/adt/ is\nalso most likely not about FE/BE communication\n\nThat leaves backend/tcop/ for main activity, backend/postmaster/ for\nsession start and backend/libpq/ for main library.\n\n> I also noticed on the TODO list someone has put SQL*Net support as a \n> network protocol. Is this a serious plan or just a pipedream? Part of \n> what I'm aiming to do is make the network protocol stuff fairly modular so \n> you could support the current protocol, and DRDA, and presumably SQL*Net \n> or TDS (Microsoft/Sybases protocol), etc...\n\nXML-over-HTTP/1.1 would also be really cool, even more so if server\ncould apply XSLT transforms to results on the fly :)\n\n--------------\nHannu\n\n\n\n", "msg_date": "05 Feb 2002 19:33:36 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "On Tue, 2002-02-05 at 18:03, Tom Lane wrote:\n> Brian Bruns <camber@ais.org> writes:\n> > I also noticed on the TODO list someone has put SQL*Net support as a \n> > network protocol. Is this a serious plan or just a pipedream?\n> \n> Pipedream, I'm afraid. 
No one has volunteered to actually do the work,\n> and I'm not sure that Oracle provides enough documentation to allow a\n> compatible implementation to be built, anyhow...\n\nAlso, from what I've been told, there is no such thing as SQL*Net\nprotocol, there is a whole lot of different protocols that quite often\nrefuse to talk to each other.\n\n------------\nHannu\n\n\n", "msg_date": "05 Feb 2002 19:35:30 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "Marc Lavergne <mlavergn@richlava.com> writes:\n> This would involve far more than just a SQL*Net listener. Given the the \n> syntactic differences between Oracle and PostgreSQL, a whole \n> compatibility layer beyond the listener would be needed to allow for \n> interoperability. The listener piece would be relatively small potatoes \n> compared to building an Oracle<>PostgreSQL syntax translator or a \n> compatibility layer (the number of \"bug\" reports something like this is \n> capable of generating gives me cold sweats).\n\nYup. There are some people fooling with a more-Oracle-compatible parser\n(eg, Oracle-style outer join syntax) but I fear that just scratches the\nsurface of trying to make something that's plug-compatible enough to\nmake Oracle users happy.\n\nStill, having a SQL*Net listener would be a step forward, if anyone\ncared to work on it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Feb 2002 15:15:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation " }, { "msg_contents": "On 5 Feb 2002, Hannu Krosing wrote:\n\n> On Tue, 2002-02-05 at 15:39, Brian Bruns wrote:\n> > Hi all,\n> > \n> > I'm working on an implementation of the DRDA protocol and am planning on \n> > modifying postgresql to support DRDA. 
(DRDA is an Open Group standard \n> > protoc\n> \n> Hopefully this will bring us PREPARE/EXECUTE support too.\n> \n> BTW, does DRDA have a notion of LISTEN/NOTIFY ?\n\nNot that I'm aware of, although the spec is really big so there are some \npieces I haven't fully grokked. There are ongoing standards activities for \nDRDA, but they're apparently closed to TOG members; from what I can tell \nthey are adding lots of object features, IPv6 stuff, unicode, etc...\n\n> Protocol is quite simple and AFAIK fully documented in Developers Guide.\n\nI ran it through Ethereal, and got a pretty good flavor for it.\n\n> \n> The networking code seems to be \"Traffic Cop\" directory in\n> src/backend/tcop/postgresql.c\n\nActually some is in libpq, some is in tcop, some in executor. Seems to be \na bit of a mess if you don't mind me saying so. ;-)\n \nWhat I would propose is a net/ directory that has an interface that is called \nby all the other code, and the net layer would call the appropriate protocol \nstuff. So, all the protocol stuff will be in exactly one place.\n\n> > I also noticed on the TODO list someone has put SQL*Net support as a \n> > network protocol. Is this a serious plan or just a pipedream? Part of \n> > what I'm aiming to do is make the network protocol stuff fairly modular so \n> > you could support the current protocol, and DRDA, and presumably SQL*Net \n> > or TDS (Microsoft/Sybases protocol), etc...\n> \n> XML-over-HTTP/1.1 would also be really cool, even more so if server\n> could apply XSLT transforms to results on the fly :)\n\nThis could be supported as just another protocol ;-)\n\nBrian\n\n", "msg_date": "Tue, 5 Feb 2002 17:22:16 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": true, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "On Tue, 5 Feb 2002, Tom Lane wrote:\n\n> Marc Lavergne <mlavergn@richlava.com> writes:\n> > This would involve far more than just a SQL*Net listener. 
Given the the \n> > syntactic differences between Oracle and PostgreSQL, a whole \n> > compatibility layer beyond the listener would be needed to allow for \n> > interoperability. The listener piece would be relatively small potatoes \n> > compared to building an Oracle<>PostgreSQL syntax translator or a \n> > compatibility layer (the number of \"bug\" reports something like this is \n> > capable of generating gives me cold sweats).\n> \n> Yup. There are some people fooling with a more-Oracle-compatible parser\n> (eg, Oracle-style outer join syntax) but I fear that just scratches the\n> surface of trying to make something that's plug-compatible enough to\n> make Oracle users happy.\n>\n> Still, having a SQL*Net listener would be a step forward, if anyone\n> cared to work on it.\n> \n> \t\t\tregards, tom lane\n\nI have reverse engineered TDS (the MS/Sybase equivalent) so I assume \nsomeone can do the same with SQL*Net/NET8 but that someone won't be me, we \nare a Sybase/DB2 shop.\n\nBrian\n\n", "msg_date": "Tue, 5 Feb 2002 17:25:19 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": true, "msg_subject": "Re: DRDA, network protocol, and documentation " }, { "msg_contents": "On Tue, 5 Feb 2002, Tom Lane wrote:\n\n> Brian Bruns <camber@ais.org> writes:\n> > I also noticed on the TODO list someone has put SQL*Net support as a \n> > network protocol. Is this a serious plan or just a pipedream?\n> \n> Pipedream, I'm afraid. No one has volunteered to actually do the work,\n> and I'm not sure that Oracle provides enough documentation to allow a\n> compatible implementation to be built, anyhow...\n> \n> \t\t\tregards, tom lane\n\nWell, I'm offering to modularize the code for anyone who wants to take on \nthe reverse engineering effort, they'll have an easy time integrating it \ninto postgresql.\n\nBrian\n\nBTW, My master plan is to add DRDA support to not only postgresql, but \nMySQL, interbase, SAP DB, and anything else I can lay hands on. 
The idea \nbeing a single client capable of talking to DB2/UDB, Informix (with \ngateway), MS SQL Server (with SNA gateway) and all the open source DBs. \n;-)\n\n\n\n", "msg_date": "Tue, 5 Feb 2002 17:30:21 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": true, "msg_subject": "Re: DRDA, network protocol, and documentation " }, { "msg_contents": "On Tue, 5 Feb 2002, Brian Bruns wrote:\n\n> I also noticed on the TODO list someone has put SQL*Net support as a \n> network protocol. Is this a serious plan or just a pipedream? Part of \n> what I'm aiming to do is make the network protocol stuff fairly modular so \n> you could support the current protocol, and DRDA, and presumably SQL*Net \n> or TDS (Microsoft/Sybases protocol), etc...\n\nI intend on looking into ways to implement SQL*Net/TNS etc. It's not\npretty but would be remarkably useful. I haven't started looking at it\nyet because PostgreSQL doesn't support all of the Oracle's SQL\nimplementation. Until this happens there's really not much point.\n\nGavin\n\n", "msg_date": "Thu, 7 Feb 2002 16:02:10 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "\nI'd have to agree on that point. Although there is probably an interesting \nsubset of querys that work, but let's face it this is for canned \ncommercial packages, otherwise the code would be ported including the \nnetwork stuff.\n\nFor my purposes (DRDA) the present SQL dialect is just fine since the DRDA \nstandard is really orthogonal to the SQL 9x standards. So, hopefully if I \ndon't get bogged down with other stuff, the infrastructure will be there \nto plug into when the time comes...although it'd be nice to be aware of \nsome of the nuances before hand to accomadate them. \n\nBrian\n\nOn Thu, 7 Feb 2002, Gavin Sherry wrote:\n\n> I intend on looking into ways to implement SQL*Net/TNS etc. 
It's not\n> pretty but would be remarkably useful. I haven't started looking at it\n> yet because PostgreSQL doesn't support all of Oracle's SQL\n> implementation. Until this happens there's really not much point.\n> \n> Gavin\n\n", "msg_date": "Thu, 7 Feb 2002 00:20:50 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": true, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "On Thu, 2002-02-07 at 07:20, Brian Bruns wrote:\n> \n> For my purposes (DRDA) the present SQL dialect is just fine since the DRDA \n> standard is really orthogonal to the SQL 9x standards. So, hopefully if I \n> don't get bogged down with other stuff, the infrastructure will be there \n> to plug into when the time comes...although it'd be nice to be aware of \n> some of the nuances beforehand to accommodate them. \n\nWhat is the relation of DRDA to SQL/CLI (SQL Call Level Interface, part\n3 of the standard) ?\n\n------------\nHannu\n\n\n", "msg_date": "07 Feb 2002 09:13:47 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "On 7 Feb 2002, Hannu Krosing wrote:\n\n> On Thu, 2002-02-07 at 07:20, Brian Bruns wrote:\n> > \n> > For my purposes (DRDA) the present SQL dialect is just fine since the DRDA \n> > standard is really orthogonal to the SQL 9x standards. So, hopefully if I \n> > don't get bogged down with other stuff, the infrastructure will be there \n> > to plug into when the time comes...although it'd be nice to be aware of \n> > some of the nuances beforehand to accommodate them. \n> \n> What is the relation of DRDA to SQL/CLI (SQL Call Level Interface, part\n> 3 of the standard) ?\n\nDRDA, SQL 9x, and SQL/CLI (ODBC) form a complementary set of standards.\n\nSQL 9x obviously specifies the SQL language and constructs. SQL/CLI \naddresses application portability with an API. 
DRDA on the other hand is \na bits-on-the-wire protocol. So one would have a program using the ODBC \nAPI to send DRDA over the network to invoke SQL on the server.\n\nDRDA clients do not necessarily have to be ODBC, indeed there are JDBC \nones. OpenDRDA (my little endeavour) will be an ODBC/SQL CLI driver on \nthe client side, and something of a non-standard interface on the server \n(postgresql) side, if only because there is no standard for server side \ninterfaces. Actually I have the ODBC driver partially working against IBM \nDB2, but it's still, of course, in the beginning stages.\n\nCheers,\n\nBrian\n\n", "msg_date": "Thu, 7 Feb 2002 08:35:16 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": true, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "On Thu, 2002-02-07 at 15:35, Brian Bruns wrote:\n> On 7 Feb 2002, Hannu Krosing wrote:\n> \n> \n> > What is the relation of DRDA to SQL/CLI (SQL Call Level Interface, part\n> > 3 of the standard) ?\n> \n> DRDA, SQL 9x, and SQL/CLI (ODBC) form a complementary set of standards.\n> \n> SQL 9x obviously specifies the SQL language and constructs. SQL/CLI \n> addresses application portability with an API. DRDA on the other hand is \n> a bits-on-the-wire protocol. So one would have a program using the ODBC \n> API to send DRDA over the network to invoke SQL on the server.\n\nBut I guess that you can't fake PREPARE/EXECUTE on client side anymore\nif you want to be DRDA compatible?\n\nOr is DRDA so low-level that it does not care what info it carries ?\n\nDoes DRDA have standard representation of datatypes on wire ? 
\n\nIf so, how will postgres extendable datatypes fit in there ?\n\nI know that postgres's system tables have two sets of type i/o functions\n typinput | regproc | \n typoutput | regproc | \n typreceive | regproc | \n typsend | regproc | \n\nwhich are currently initialised to the same real functions\n\nhannu=# select count(*) from pg_type where typoutput <> typsend or\ntypinput <> typreceive;\n count \n-------\n 0\n(1 row)\n\nI suspect that the typreceive and typsend were planned for some common\nnetwork representation, but such usage has probably gone untested for a\nvery long time.\n\n----------------\nHannu\n\n", "msg_date": "07 Feb 2002 20:57:45 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "On Thu, Feb 07, 2002 at 04:02:10PM +1100, Gavin Sherry wrote:\n> On Tue, 5 Feb 2002, Brian Bruns wrote:\n> \n> > I also noticed on the TODO list someone has put SQL*Net support as a \n> > network protocol. Is this a serious plan or just a pipedream? Part of \n> > what I'm aiming to do is make the network protocol stuff fairly modular so \n> > you could support the current protocol, and DRDA, and presumably SQL*Net \n> > or TDS (Microsoft/Sybases protocol), etc...\n> \n> I intend on looking into ways to implement SQL*Net/TNS etc. It's not\n> pretty but would be remarkably useful. I haven't started looking at it\n> yet because PostgreSQL doesn't support all of Oracle's SQL\n> implementation. Until this happens there's really not much point.\n\nI'd really like to see something that does XML queries, either over\nHTTP (POST/PUT) or BEEP (RFC 3080, 3081). I can start work on\nsomething generic (I've been meaning to start hacking pg anyway...).\nA generic input format (application/vnd.postgresql-query) and a\ngeneric XML schema that handles any query. 
Ideally, you'd have the\nability to designate an XML schema (SELECT [AS XML [SCHEMA 'blah']] ...),\nand an xml schemas table that has a description of the data parts,\nso the output XML is exactly right.\n\nI'm still wrapping my brain around this concept; if anybody else \nis interested in this, let me know.\n\n-- \nDavid Terrell | \"When we said that you needed to cut the\ndbt@meat.net | wires for ultimate security, we didn't\nNebcorp Prime Minister | mean that you should go wireless instead.\"\nhttp://wwn.nebcorp.com/ | - Casper Dik\n", "msg_date": "Thu, 7 Feb 2002 11:15:26 -0800", "msg_from": "David Terrell <dbt@meat.net>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "On 7 Feb 2002, Hannu Krosing wrote:\n\n> But I guess that you can't fake PREPARE/EXECUTE on client side anymore\n> if you want to be DRDA compatible?\n\nDRDA has a facility for preparing and executing, but also for direct \nexecution. So, a server implementation would have to support all of the \nAR Level 1 capabilities to be compatible. The client is a bit freer to \nchoose how to send its SQL. That is, the client has the option to fake a \nprepare/execute but the server must service either method.\n\n> Does DRDA have standard representation of datatypes on wire ? \n\nDRDA has a quite extensive list of datatype representations. The ordering \nof bytes is server dictated (as opposed to TDS where it is client \ndictated, so server does the byte swapping if necessary). 
\n\n> If so, how will postgres extendable datatypes fit in there ?\n> \n> I know that postgres's system tables have two sets of type i/o functions\n> typinput | regproc | \n> typoutput | regproc | \n> typreceive | regproc | \n> typsend | regproc | \n> \n> which are currently initialised to the same real functions\n> \n> hannu=# select count(*) from pg_type where typoutput <> typsend or\n> typinput <> typreceive;\n> count \n> -------\n> 0\n> (1 row)\n\nThe server has the leeway to determine the DRDA representation for its \ndatatypes, and it is the client's responsibility to deal with it.\n\n> I suspect that the typreceive and typsend were planned for some common\n> network representation, but such usage has probably gone untested for a\n> very long time.\n\nGood question. \n\nBrian\n\n", "msg_date": "Thu, 7 Feb 2002 16:14:59 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": true, "msg_subject": "Re: DRDA, network protocol, and documentation" }, { "msg_contents": "[2002-02-07 11:15] David Terrell said:\n\n| I'd really like to see something that does XML queries, either over\n [snip]\n \n| I'm still wrapping my brain around this concept, if anybody else \n| is interested in this let me know.\n\nSearch for Jim Melton's SQLX initiative. The site (www.sqlx.org) \nhas not been available for quite a while, but I do have a copy of\nthe draft document if you'd like it.\n\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Thu, 7 Feb 2002 16:42:41 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: DRDA, network protocol, and documentation" } ]
[ { "msg_contents": "\nTom/Jason,\n\nA couple of weeks ago we exchanged emails regarding excessive database\nstartup times on Cygwin (~15 secs iirc). I've been playing with 7.2 release\ntoday and can happily report that I'm now seeing just a couple of seconds.\n\n:-)\n\nRegards, Dave.\n", "msg_date": "Tue, 5 Feb 2002 13:57:20 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.2 on Cygwin" }, { "msg_contents": "FYI...\n\nOn Tue, Feb 05, 2002 at 01:57:20PM +0000, Dave Page wrote:\n> A couple of weeks ago we exchanged emails regarding excessive database\n> startup times on Cygwin (~15 secs iirc). I've been playing with 7.2 release\n> today and can happily report that I'm now seeing just a couple of seconds.\n> \n> :-)\n\nJason\n", "msg_date": "Mon, 11 Feb 2002 14:48:17 -0500", "msg_from": "Jason Tishler <jason@tishler.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on Cygwin" } ]
[ { "msg_contents": "\n> > If (instead) we had a user supplied sqlca, it could be used by multiple\n> > threads.\n> \n> How do you want to supply it? \n\nInformix does this:\n\n#else /* IFX_THREAD */\nextern long * ifx_sqlcode();\nextern struct sqlca_s * ifx_sqlca();\n#define SQLCODE (*(ifx_sqlcode()))\n#define SQLSTATE ((char *)(ifx_sqlstate()))\n#define sqlca (*(ifx_sqlca()))\n#endif /* IFX_THREAD */\n\nThey use one sqlca per active connection/thread. A connection is basically tied \nto a specific thread. If you want to use the conn in another thread, you need \nto:\nEXEC SQL SET CONNECTION DORMANT; EXEC SQL SET CONNECTION ...\nWhen using the esql processor, you need to specify which thread package you use\n(posix, dce, ...), so they know which function call to use to get the current \nthread id. Of course this costs, so they have a switch to turn it on (-thread).\n\nAndreas\n", "msg_date": "Tue, 5 Feb 2002 16:00:01 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Ecpg and reentrancy" } ]
[ { "msg_contents": "Hi,\n\nplease apply attached patch for 7.2.1\n\nintarray:\n add check size of array for gist__int_ops\ntsearch:\n make signature if index key is greater than TOAST_INDEX_TARGET\n For all operations gist_txtidx_ops amopreqcheck=true,\n i.e. index is lossy.\n\nThis patch is also available from http://www.sai.msu.su/~megera/postgres/gist/\n\nThanks to Poul L. Christiansen for spotting the bug.\n(next time, please submit bug report in beta cycle :-)\n\n\tRegards,\n\t\tOleg", "msg_date": "Tue, 5 Feb 2002 18:40:41 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Please apply patch for 7.2.1" } ]
[ { "msg_contents": "\nFor Immediate Release\t\t\t\tFebruary 5th, 2002\n\n\tAfter almost a full year of development since PostgreSQL v7.1 was\nreleased, the PostgreSQL Global Development Group is proud to announce the\navailability of our latest development milestone ... PostgreSQL v7.2,\nanother step forward for the project.\n\n\tA full list of changes to v7.2 can be found in the HISTORY file,\nincluded with the release, as well as under all ftp mirrors as:\n\n\t\t/pub/README.v7_2\n\n\tHighlights of this release are as follows:\n\n VACUUM\n Vacuuming no longer locks tables, thus allowing normal user\n access during the vacuum. A new \"VACUUM FULL\" command does\n old-style vacuum by locking the table and shrinking the on-disk\n copy of the table.\n\n Transactions\n There is no longer a problem with installations that exceed\n four billion transactions.\n\n OID's\n OID's are now optional. Users can now create tables without\n OID's for cases where OID usage is excessive.\n\n Optimizer\n The system now computes histogram column statistics during\n \"ANALYZE\", allowing much better optimizer choices.\n\n Security\n A new MD5 encryption option allows more secure storage and\n transfer of passwords. A new Unix-domain socket authentication\n option is available on Linux and BSD systems.\n\n Statistics\n Administrators can use the new table access statistics module\n to get fine-grained information about table and index usage.\n\n Internationalization\n Program and library messages can now be displayed in several\n languages.\n\n\t.. with many many more bug fixes, enhancements and performance\nrelated changes ...\n\n\tSource for this release is available on all mirrors under:\n\n\t\t/pub/source/v7.2\n\n\tAs always, any bugs with this release should be reported to\npgsql-bugs@postgresql.org ... and, as with all point releases, this\nrelease requires a complete dump and reload from previous releases, due to\ninternal structure changes ...\n\nMarc G. 
Fournier\nCo-ordinator\nPostgreSQL Global Development Group\n\n\n\n\n", "msg_date": "Tue, 5 Feb 2002 12:01:24 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL v7.2 Final Release" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@postgresql.org> writes:\n\n> For Immediate Release\t\t\t\tFebruary 5th, 2002\n> \n> \tAfter almost a full year of development since PostgreSQL v7.1 was\n> released, the PostgreSQL Global Development Group is proud to announce the\n> availability of our latest development milestone ... PostgreSQL v7.2,\n> another step forward for the project.\n\nRPMs for Red Hat Linux 7.2 can be found at http://people.redhat.com/teg/pg/\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "05 Feb 2002 11:15:17 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Tue, 2002-02-05 at 18:15, Trond Eivind Glomsr�d wrote:\n> \"Marc G. Fournier\" <scrappy@postgresql.org> writes:\n> \n> > For Immediate Release\t\t\t\tFebruary 5th, 2002\n> > \n> > \tAfter almost a full year of development since PostgreSQL v7.1 was\n> > released, the PostgreSQL Global Development Group is proud to announce the\n> > availability of our latest development milestone ... PostgreSQL v7.2,\n> > another step forward for the project.\n> \n> RPMs for Red Hat Linux 7.2 can be found at http://people.redhat.com/teg/pg/\n\nWhy is just plperl included ?\n\nWhat about pl/python and pl/tcl (I hope pl/pgsql is there somewhere) ?\n\n--------------\nHannu\n\n\n", "msg_date": "05 Feb 2002 19:07:54 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n\n> On Tue, 2002-02-05 at 18:15, Trond Eivind Glomsr�d wrote:\n> > \"Marc G. 
Fournier\" <scrappy@postgresql.org> writes:\n> > \n> > > For Immediate Release\t\t\t\tFebruary 5th, 2002\n> > > \n> > > \tAfter almost a full year of development since PostgreSQL v7.1 was\n> > > released, the PostgreSQL Global Development Group is proud to announce the\n> > > availability of our latest development milestone ... PostgreSQL v7.2,\n> > > another step forward for the project.\n> > \n> > RPMs for Red Hat Linux 7.2 can be found at http://people.redhat.com/teg/pg/\n> \n> Why is just plperl included ?\n> \n> What about pl/python\n\nThere is no shared python library. Linking in static libraries in\ndynamic extensions doesn't work on most platforms.\n\n> and pl/tcl (I hope pl/pgsql is there somewhere) ?\n\nThe postgresql-tcl package contains that, FTTB, but tcl is pretty much\ndead anyway...\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "05 Feb 2002 12:11:08 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Tuesday 05 February 2002 12:07 pm, Hannu Krosing wrote:\n> Why is just plperl included ?\n\n> What about pl/python and pl/tcl (I hope pl/pgsql is there somewhere) ?\n\npl/pgsql is in the base server package.\npl/tcl is in the tcl subpackage, although that might not be a good thing.\n\nWhat is required to build pl/python? Last I heard is was halfway \nexperimental?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 5 Feb 2002 12:33:01 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Tuesday 05 February 2002 12:11 pm, Trond Eivind Glomsr�d wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > What about pl/python\n\n> There is no shared python library. 
Linking in static libraries in\n> dynamic extensions doesn't work on most platforms.\n\nThat's what I thought, but wasn't sure.\n\nOh, I'm building NLS-capable RPM's as I write this; expect an upload shortly. \nThe NLS file list mechanism munged the execute permission for the initscript, \nso I had to track that down before release. Hopefully this last build will \nhave the right perms.\n\nI even have the release announcement composed in kmail waiting for a fully \nsuccessful build.....\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 5 Feb 2002 12:37:29 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Tue, 5 Feb 2002, Lamar Owen wrote:\n\n> On Tuesday 05 February 2002 12:11 pm, Trond Eivind Glomsr�d wrote:\n> > Hannu Krosing <hannu@tm.ee> writes:\n> > > What about pl/python\n> \n> > There is no shared python library. Linking in static libraries in\n> > dynamic extensions doesn't work on most platforms.\n> \n> That's what I thought, but wasn't sure.\n\nFWIW, the python rpms in Rawhide have static libraries, but are compiled \nwith -fPIC. Thus, they can actually be used in this way...\n\n> Oh, I'm building NLS-capable RPM's as I write this; expect an upload shortly. \n> The NLS file list mechanism munged the execute permission for the initscript, \n> so I had to track that down before release. Hopefully this last build will \n> have the right perms.\n\nI need to sync up after that, before I do some more fixes to the \ninitscript.\n\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n\n", "msg_date": "Tue, 5 Feb 2002 12:39:28 -0500 (EST)", "msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <teg@redhat.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "* Marc G. 
Fournier <scrappy@postgresql.org> [020205 11:10]:\n> \n> For Immediate Release\t\t\t\tFebruary 5th, 2002\n> \n> \tAfter almost a full year of development since PostgreSQL v7.1 was\n> released, the PostgreSQL Global Development Group is proud to announce the\n> availability of our latest development milestone ... PostgreSQL v7.2,\n> another step forward for the project.\n\nWoo hoo!\n\nCan I start putting changes into the PyGreSQL module or do we want to\ngive it a few days to shake the immediate bugs out?\n\nKudos all around, btw. This looks like a really nice release.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 5 Feb 2002 12:44:54 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Tue, 2002-02-05 at 19:11, Trond Eivind Glomsr�d wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> \n> > On Tue, 2002-02-05 at 18:15, Trond Eivind Glomsr�d wrote:\n> > > \"Marc G. Fournier\" <scrappy@postgresql.org> writes:\n> > > \n> > > > For Immediate Release\t\t\t\tFebruary 5th, 2002\n> > > > \n> > > > \tAfter almost a full year of development since PostgreSQL v7.1 was\n> > > > released, the PostgreSQL Global Development Group is proud to announce the\n> > > > availability of our latest development milestone ... PostgreSQL v7.2,\n> > > > another step forward for the project.\n> > > \n> > > RPMs for Red Hat Linux 7.2 can be found at http://people.redhat.com/teg/pg/\n> > \n> > Why is just plperl included ?\n> > \n> > What about pl/python\n> \n> There is no shared python library. 
Linking in static libraries in\n> dynamic extensions doesn't work on most platforms.\n\nDoes that mean that one can't run pl/python on Redhat 7.2 ??\n\nI was hoping that all the work that went into fixing various flaws in\npl/python during 7.2 development would result in it being available in\nbinary distributions too...\n\n-----------------\nHannu\n\n\n", "msg_date": "05 Feb 2002 19:47:04 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "On 5 Feb 2002, Hannu Krosing wrote:\n\n> On Tue, 2002-02-05 at 19:11, Trond Eivind Glomsr�d wrote:\n> > Hannu Krosing <hannu@tm.ee> writes:\n> > \n> > > On Tue, 2002-02-05 at 18:15, Trond Eivind Glomsr�d wrote:\n> > > > \"Marc G. Fournier\" <scrappy@postgresql.org> writes:\n> > > > \n> > > > > For Immediate Release\t\t\t\tFebruary 5th, 2002\n> > > > > \n> > > > > \tAfter almost a full year of development since PostgreSQL v7.1 was\n> > > > > released, the PostgreSQL Global Development Group is proud to announce the\n> > > > > availability of our latest development milestone ... PostgreSQL v7.2,\n> > > > > another step forward for the project.\n> > > > \n> > > > RPMs for Red Hat Linux 7.2 can be found at http://people.redhat.com/teg/pg/\n> > > \n> > > Why is just plperl included ?\n> > > \n> > > What about pl/python\n> > \n> > There is no shared python library. 
Linking in static libraries in\n> > dynamic extensions doesn't work on most platforms.\n> \n> Does that mean that one can't run pl/python on Redhat 7.2 ??\n\nOn IA32, it will work (with a performance penalty, \"thou shall not use \nstatic libraries in dynamic extensions\"), on other archs (alpha, IA64, \nS/390) it will die.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n\n", "msg_date": "Tue, 5 Feb 2002 12:51:59 -0500 (EST)", "msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <teg@redhat.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Tue, 2002-02-05 at 18:11, Trond Eivind Glomsr�d wrote:\n\n> The postgresql-tcl package contains that, FTTB, but tcl is pretty much\n> dead anyway...\n\nSo all of us who have been using tcl for years and are comfortable with\nit should just forget about it maybe? Tcl the best kept secret of the\ninternet...\n\nCheers\n\nTony Grant\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\nMacromedia UltraDev with PostgreSQL\nhttp://www.animaproductions.com/ultra.html\n\n", "msg_date": "05 Feb 2002 19:20:41 +0100", "msg_from": "tony <tony@animaproductions.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Tue, 2002-02-05 at 19:51, Trond Eivind Glomsr�d wrote:\n> On 5 Feb 2002, Hannu Krosing wrote:\n> \n> > On Tue, 2002-02-05 at 19:11, Trond Eivind Glomsr�d wrote:\n> > > Hannu Krosing <hannu@tm.ee> writes:\n> > > \n> > > > On Tue, 2002-02-05 at 18:15, Trond Eivind Glomsr�d wrote:\n> > > > > \"Marc G. Fournier\" <scrappy@postgresql.org> writes:\n> > > > > \n> > > > > > For Immediate Release\t\t\t\tFebruary 5th, 2002\n> > > > > > \n> > > > > > \tAfter almost a full year of development since PostgreSQL v7.1 was\n> > > > > > released, the PostgreSQL Global Development Group is proud to announce the\n> > > > > > availability of our latest development milestone ... 
PostgreSQL v7.2,\n> > > > > > another step forward for the project.\n> > > > > \n> > > > > RPMs for Red Hat Linux 7.2 can be found at http://people.redhat.com/teg/pg/\n> > > > \n> > > > Why is just plperl included ?\n> > > > \n> > > > What about pl/python\n> > > \n> > > There is no shared python library. Linking in static libraries in\n> > > dynamic extensions doesn't work on most platforms.\n> > \n> > Does that mean that one can't run pl/python on Redhat 7.2 ??\n> \n> On IA32, it will work (with a performance penalty, \"thou shall not use \n> static libraries in dynamic extensions\"),\n\nAny estimate how big the penalty is ? \n\nAlso is it just a load-time penalty or continuous ?\n\n-------------\nHannu\n\n\n", "msg_date": "05 Feb 2002 20:21:37 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "\"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> Can I start putting changes into the PyGreSQL module or do we want to\n> give it a few days to shake the immediate bugs out?\n\nDon't check in any 7.3 development until we split off a CVS branch for\n7.2 maintenance. We'll probably wait at least a week before we do that;\nlonger if it looks like there are lots of problems...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Feb 2002 13:24:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release " }, { "msg_contents": "\n> There is no shared python library. 
Linking in static libraries in\n> dynamic extensions doesn't work on most platforms.\n> \n> > and pl/tcl (I hope pl/pgsql is there somewhere) ?\n> \n> The postgresql-tcl package contains that, FTTB, but tcl is pretty much\n> dead anyway...\n\nPlease refrain from language flames...the pgsql lists are exceptional\nlists that really don't need this pollution.\n\nthanks,\n\n\t--brett\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 5 Feb 2002 11:15:30 -0800", "msg_from": "Brett Schwarz <brett_schwarz@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": ">>>>> \"Trond\" == Trond Eivind Glomsr�d <teg@redhat.com> writes:\n\n>> > There is no shared python library. Linking in static libraries in\n>> > dynamic extensions doesn't work on most platforms.\n>> \n>> Does that mean that one can't run pl/python on Redhat 7.2 ??\n\nTrond> On IA32, it will work (with a performance penalty, \"thou shall not use \nTrond> static libraries in dynamic extensions\"), on other archs (alpha, IA64, \nTrond> S/390) it will die.\n\nOn Darwin, linking a static libperl.a works just fine, although it\ncreates a libperl.dynlib, which I had to symlink to libperl.so to get\nit to load.\n\nWoo hoo. Embedded Perl. On #perl, we were already discussing having\na Pg process put up a web-socket, or proxy through to another database\nwith DBI. OK, we're crazy.\n\n-- \nRandal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095\n<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>\nPerl/Unix/security consulting, Technical writing, Comedy, etc. etc.\nSee PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!\n", "msg_date": "05 Feb 2002 12:11:17 -0800", "msg_from": "merlyn@stonehenge.com (Randal L. 
Schwartz)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "teg@redhat.com (Trond Eivind Glomsr�d) writes:\n> ... but tcl is pretty much dead anyway...\n\nYou have no idea how wrong you are.\n-- \nmatthew rice <matt@starnix.com> starnix inc.\nphone: 905-771-0017 thornhill, ontario, canada\nhttp://www.starnix.com professional linux services & products\n", "msg_date": "05 Feb 2002 18:53:30 -0500", "msg_from": "Matthew Rice <matt@starnix.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "Marc G. Fournier writes:\n\n> \tAfter almost a full year of development since PostgreSQL v7.1 was\n> released, the PostgreSQL Global Development Group is proud to announce the\n> availability of our latest development milestone ... PostgreSQL v7.2,\n> another step forward for the project.\n\nAre you going to put some announcements on web sites such as freshmeat,\nlinuxpr, bsdtoday?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 5 Feb 2002 22:34:10 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "\nworking through those today ...\n\nOn Tue, 5 Feb 2002, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > \tAfter almost a full year of development since PostgreSQL v7.1 was\n> > released, the PostgreSQL Global Development Group is proud to announce the\n> > availability of our latest development milestone ... PostgreSQL v7.2,\n> > another step forward for the project.\n>\n> Are you going to put some announcements on web sites such as freshmeat,\n> linuxpr, bsdtoday?\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n>\n\n", "msg_date": "Wed, 6 Feb 2002 09:07:19 -0400 (AST)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "[after delays....]\nOn Tuesday 05 February 2002 11:01 am, Marc G. Fournier wrote:\n> \tSource for this release is available on all mirrors under:\n\n> \t\t/pub/source/v7.2\n\nRPMs for PostgreSQL 7.2 available as soon as the mirrors propagate in \n/pub/binary/v7.2/RPMS\n\nBIG NOTE:\nDue to RPM's versioning scheme, and my unwillingness to further obfuscate the \nversioning with Epoch or Serial tags, if you have be running the beta or \nrelease candidate RPMs of 7.2 you will need to use the '--oldpackage' switch \nto the rpm command line.\n\nPlease read the README.rpm-dist and CHANGELOG files in the above referenced \ndirectory for more information.\n\nBug reports to either pgsql-bugs or pgsql-ports, please.\n\nRPMs for redhat-6.2 will be available shortly. Note that you may need to \nupdate OS utilities to rebuild from source on Red Hat 6.2. An updated patch \nutility is known to be necessary.\n\nSorry for the delay in getting these posted -- I had them built yesterday \nmorning, but couldn't upload to ftp.postgresql.org dueto some server problem \nthere.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Wed, 6 Feb 2002 09:36:45 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "Brett Schwarz wrote:\n> \n> > There is no shared python library. Linking in static libraries in\n> > dynamic extensions doesn't work on most platforms.\n> >\n> > > and pl/tcl (I hope pl/pgsql is there somewhere) ?\n> >\n> > The postgresql-tcl package contains that, FTTB, but tcl is pretty much\n> > dead anyway...\n> \n> Please refrain from language flames...the pgsql lists are exceptional\n> lists that really don't need this pollution.\n>\n\nYou think language flames are bad? 
Try the perennial GPL vs BSD flame.\nEvery few months or so we have a storm of hundreds of posts proclaiming\nthe merits of GPL or BSD. A language debate would be mild by comparison.\n", "msg_date": "Wed, 06 Feb 2002 15:15:55 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Wednesday 06 February 2002 09:36 am, Lamar Owen wrote:\n> RPMs for PostgreSQL 7.2 available as soon as the mirrors propagate in\n> /pub/binary/v7.2/RPMS\n\n> RPMs for redhat-6.2 will be available shortly. Note that you may need to\n> update OS utilities to rebuild from source on Red Hat 6.2. An updated\n> patch utility is known to be necessary.\n\nThanks to Dr. Rich Shepard, we have Red Hat 6.2 binary RPMs, built for i386, \ni586, and i686 architectures. If there is demand, I will attempt a build for \nSuSE 7.3 on UltraSPARC. I also am looking at Caldera builds (thanks to Larry \nRosenman). Mandrake 8.0 RPMs should be built shortly, thanks to Justin Clift.\n\nThomas Lockhart typically does Mandrake 7.2. For Thomas and other PostgreSQL \ndevelopers who are members of the pgsql group on the dev server, the dirs are \ng+w for the binaries.\n\nExciting times!\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 6 Feb 2002 19:48:13 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "On Wed, 6 Feb 2002, Lamar Owen wrote:\n\n> Thanks to Dr. Rich Shepard, we have Red Hat 6.2 binary RPMs, built for i386, \n> i586, and i686 architectures.\n\n I found that in order to rebuild the 7.2.src.rpm I had to upgrade my\n'patch' utility. The one I had installed was version 2.5-9 (the package\nbuild); the one I built from the .src.rpm is 2.5.4. As soon as I freshened\nthe installation, the command 'rpm --rebuild ...' 
worked just fine.\n\n However, there's a problem with 'gettext' that I didn't resolve.\nApparently RH 6.2's version is too old, but trying to rebuild the\ngettext.src.rpm kept failing. Rather than futz with that, too, I changed the\nspec file so !nls = 0} and I commented out the line requiring gettext.\n\n If you do this, you need to rebuild the packages from the\n/usr/src/redhat/SPECS/ directory with the command, 'rpm -ba [--target=iX86]\n...'. However, the full set of binary packages for i386, i586, and i686 are\non the postgres ftp server. The i686 version freshened our installation from\n7.1.3 to 7.2 flawlessly.\n\nGlad to contribute,\n\nRich\n\nDr. Richard B. Shepard, President\n\n Applied Ecosystem Services, Inc. (TM)\n 2404 SW 22nd Street | Troutdale, OR 97060-1247 | U.S.A.\n + 1 503-667-4517 (voice) | + 1 503-667-8863 (fax) | rshepard@appl-ecosys.com\n http://www.appl-ecosys.com\n\n", "msg_date": "Wed, 6 Feb 2002 17:13:44 -0800 (PST)", "msg_from": "Rich Shepard <rshepard@appl-ecosys.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v7.2 Final Release" }, { "msg_contents": "Thank you!\n\n----- Original Message -----\nFrom: \"Marc G. Fournier\" <scrappy@postgresql.org>\nTo: <pgsql-announce@postgresql.org>\nCc: <pgsql-hackers@postgresql.org>; <pgsql-general@postgresql.org>\nSent: Tuesday, February 05, 2002 11:01 AM\nSubject: PostgreSQL v7.2 Final Release\n\n\n>\n> For Immediate Release February 5th, 2002\n>\n> After almost a full year of development since PostgreSQL v7.1 was\n> released, the PostgreSQL Global Development Group is proud to announce the\n> availability of our latest development milestone ... 
PostgreSQL v7.2,\n> another step forward for the project.\n>\n> A full list of changes to v7.2 can be found in the HISTORY file,\n> included with the release, as well as under all ftp mirrors as:\n>\n> /pub/README.v7_2\n>\n> Highlights of this release are as follows:\n>\n> VACUUM\n> Vacuuming no longer locks tables, thus allowing normal user\n> access during the vacuum. A new \"VACUUM FULL\" command does\n> old-style vacuum by locking the table and shrinking the on-disk\n> copy of the table.\n>\n> Transactions\n> There is no longer a problem with installations that exceed\n> four billion transactions.\n>\n> OID's\n> OID's are now optional. Users can now create tables without\n> OID's for cases where OID usage is excessive.\n>\n> Optimizer\n> The system now computes histogram column statistics during\n> \"ANALYZE\", allowing much better optimizer choices.\n>\n> Security\n> A new MD5 encryption option allows more secure storage and\n> transfer of passwords. A new Unix-domain socket authentication\n> option is available on Linux and BSD systems.\n>\n> Statistics\n> Administrators can use the new table access statistics module\n> to get fine-grained information about table and index usage.\n>\n> Internationalization\n> Program and library messages can now be displayed in several\n> languages.\n>\n> .. with many many more bug fixes, enhancements and performance\n> related changes ...\n>\n> Source for this release is available on all mirrors under:\n>\n> /pub/source/v7.2\n>\n> As always, any bugs with this release should be reported to\n> pgsql-bugs@postgresql.org ... and, as with all point releases, this\n> release requires a complete dump and reload from previous releases, due to\n> internal structure changes ...\n>\n> Marc G. 
Fournier\n> Co-ordinator\n> PostgreSQL Global Development Group\n>\n>\n>\n>\n>\n\n", "msg_date": "Wed, 6 Feb 2002 22:04:22 -0500", "msg_from": "\"Rob Arnold\" <rob@cabrion.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "Quoting \"Marc G. Fournier\" <scrappy@postgresql.org>:\n\n> For Immediate Release\t\t\t\tFebruary 5th, 2002\n> \n> Security\n> A new MD5 encryption option allows more secure storage and\n> transfer of passwords. A new Unix-domain socket authentication\n> option is available on Linux and BSD systems.\n\nFirst question:\n\n Is this backport(-able/-ed) to 7.1.3?\n\nSecond question:\n\n Is 7.2 ready for production?\n", "msg_date": "07 Feb 2002 16:04:23 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "Turbo Fredriksson <turbo@bayour.com> writes:\n\n> Quoting \"Marc G. Fournier\" <scrappy@postgresql.org>:\n> \n> > For Immediate Release\t\t\t\tFebruary 5th, 2002\n> > \n> > Security\n> > A new MD5 encryption option allows more secure storage and\n> > transfer of passwords. A new Unix-domain socket authentication\n> > option is available on Linux and BSD systems.\n> \n> First question:\n> \n> Is this backport(-able/-ed) to 7.1.3?\n\nIf you're referring to the unix-socket authentication, the Debian\npatch for 7.1.X is where it came from--it wasn't in mainline until\n7.2. \n\n> Second question:\n> \n> Is 7.2 ready for production?\n\nThe developers obviously think so. Whether it's true for you, who\nknows? You should probably test it, or wait a few weeks to see if any\nshow-stopper bugs are turned up by the early adopters. ;)\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. 
Jackson, 1863\n", "msg_date": "07 Feb 2002 11:00:18 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": ">>>>> \"Doug\" == Doug McNaught <doug@wireboard.com> writes:\n\n Doug> Turbo Fredriksson <turbo@bayour.com> writes:\n >> Quoting \"Marc G. Fournier\" <scrappy@postgresql.org>:\n >> \n >> > For Immediate Release February 5th, 2002\n >> > \n >> > Security > A new MD5 encryption option allows more secure\n >> storage and > transfer of passwords. A new Unix-domain socket\n >> authentication > option is available on Linux and BSD systems.\n >> \n >> First question:\n >> \n >> Is this backport(-able/-ed) to 7.1.3?\n\n Doug> If you're referring to the unix-socket authentication, the\n Doug> Debian patch for 7.1.X is where it came from--it wasn't in\n Doug> mainline until 7.2.\n\nI was more refering to the on-disk encrypted password. A user (which\nhave root) found the password in two minutes with grep...\n\n >> Is 7.2 ready for production?\n\n Doug> The developers obviously think so.\n\nGood enough. I'll download it and try it out then. Thanx.\n\nAlbanian killed subway explosion president Treasury Ft. Meade Iran\nWorld Trade Center BATF Panama ammunition nitrate CIA smuggle\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "08 Feb 2002 10:16:51 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL v7.2 Final Release" }, { "msg_contents": "The following seems to be a bug in 7.2 (and in 7.1.2) I'm pretty sure\nit worked before, certainly it's something I do a lot (but postgresql\nisn't the only database I use).\n\nThe bug concerns a NOT IN on a list generated by a select. 
If you\nhave two tables thus:\n\n\n create table t1 (id integer, name varchar(20), t2_id integer);\n insert into t1 (id, name, t2_id) values (1, 'nic', 2);\n insert into t1 (id, name, t2_id) values (2, 'jim', NULL);\n\n create table t2 (id integer, name varchar(20));\n insert into t1 (id, name, t2_id) values (1, 'ferrier');\n insert into t1 (id, name, t2_id) values (2, 'broadbent');\n\nAnd now do this query:\n\n select * from t2 where id not in (select t2_id from t1);\n\nthen I get a NULL response (ie: no rows returned).\n\nWhat I SHOULD get is the row from t2 with id == 2;\n\n\nNic Ferrier\n\n", "msg_date": "01 Apr 2002 15:55:32 +0000", "msg_from": "Nic Ferrier <nferrier@tapsellferrier.co.uk>", "msg_from_op": false, "msg_subject": "NOT IN queries" }, { "msg_contents": "Nic Ferrier <nferrier@tapsellferrier.co.uk> writes:\n\n> create table t1 (id integer, name varchar(20), t2_id integer);\n> insert into t1 (id, name, t2_id) values (1, 'nic', 2);\n> insert into t1 (id, name, t2_id) values (2, 'jim', NULL);\n> \n> create table t2 (id integer, name varchar(20));\n> insert into t1 (id, name, t2_id) values (1, 'ferrier');\n> insert into t1 (id, name, t2_id) values (2, 'broadbent');\n> \n> And now do this query:\n> \n> select * from t2 where id not in (select t2_id from t1);\n> \n> then I get a NULL response (ie: no rows returned).\n\nWell, you never inserted any rows into t2, so that makes sense.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n", "msg_date": "01 Apr 2002 11:14:52 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: NOT IN queries" }, { "msg_contents": "Nic Ferrier <nferrier@tapsellferrier.co.uk> writes:\n> create table t1 (id integer, name varchar(20), t2_id integer);\n> insert into t1 (id, name, t2_id) values (1, 'nic', 2);\n> insert into t1 (id, name, 
t2_id) values (2, 'jim', NULL);\n\n> create table t2 (id integer, name varchar(20));\n> insert into t1 (id, name, t2_id) values (1, 'ferrier');\n> insert into t1 (id, name, t2_id) values (2, 'broadbent');\n\n> And now do this query:\n\n> select * from t2 where id not in (select t2_id from t1);\n\n> then I get a NULL response (ie: no rows returned).\n\n> What I SHOULD get is the row from t2 with id == 2;\n\nNo, you should not; the system's response is correct per spec.\n\nFor the t2 row with id=2, the WHERE clause is clearly FALSE\n(2 is in select t2_id from t1). For the t2 row with id=1,\nthe WHERE clause yields UNKNOWN because of the NULL in t1,\nand WHERE treats UNKNOWN as FALSE. This has been discussed\nbefore on the lists, and it's quite clear that the result is\ncorrect according to SQL's 3-valued boolean logic.\n\nThere are a number of ways you could deal with this. If you\nsimply want to ignore the NULLs in t1 then you could do either\n\nselect * from t2 where id not in (select distinct t2_id from t1);\nselect * from t2 where (id in (select t2_id from t1)) is not false;\n\nThe first of these will probably be faster if there aren't many\ndistinct t2_id values.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Apr 2002 11:21:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: NOT IN queries " }, { "msg_contents": "On 1 Apr 2002, Nic Ferrier wrote:\n\n> The following seems to be a bug in 7.2 (and in 7.1.2) I'm pretty sure\n> it worked before, certainly it's something I do a lot (but postgresql\n> isn't the only database I use).\n>\n> The bug concerns a NOT IN on a list generated by a select. 
If you\n> have two tables thus:\n>\n>\n> create table t1 (id integer, name varchar(20), t2_id integer);\n> insert into t1 (id, name, t2_id) values (1, 'nic', 2);\n> insert into t1 (id, name, t2_id) values (2, 'jim', NULL);\n>\n> create table t2 (id integer, name varchar(20));\n> insert into t1 (id, name, t2_id) values (1, 'ferrier');\n> insert into t1 (id, name, t2_id) values (2, 'broadbent');\n>\n> And now do this query:\n>\n> select * from t2 where id not in (select t2_id from t1);\n>\n> then I get a NULL response (ie: no rows returned).\n>\n> What I SHOULD get is the row from t2 with id == 2;\n\nAssuming that some of those inserts were supposed to be in t2, you're\nmisunderstanding how NULLs work. Because there's a NULL in the output\nof the subselect, NOT IN is never going to return rows and this is\ncorrect.\n\nThe transformations by the spec start out:\nRVC NOT IN IPV => NOT (RVC IN IPV) => NOT (RVC =ANY IPV)\n\nThe result of RVC =ANY IPV is derived from the application of\n= to each row in IPV. If = is true for at least one row RT\nof IPV then RVC =ANY IPV is true. If IPV is empty or if =\nis false for each row RT of IPV then RVC =ANY IPV is false.\nIf neither of those cases hold, it's unknown. Since\nanything = NULL returns unknown, not false, the last case\nis the one that holds. You then NOT the unknown and get\nunknown back. 
Where clauses don't return rows where the\ncondition is unknown, so you won't get any rows back.\n\n\n", "msg_date": "Mon, 1 Apr 2002 08:42:14 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: NOT IN queries" }, { "msg_contents": "Whoops!\n\nThanks Tom (and everyone else who replied)\n\nAnd apologies for the dumb mistake with the inserts (I could try and\npretend it was a deliberate mistake but since I made an even bigger\nerror I won't do that).\n\n\nNic\n\n\n", "msg_date": "01 Apr 2002 19:46:55 +0000", "msg_from": "Nic Ferrier <nferrier@tapsellferrier.co.uk>", "msg_from_op": false, "msg_subject": "Re: NOT IN queries" } ]
[ { "msg_contents": "-----Original Message-----\nFrom: Marc G. Fournier [mailto:scrappy@hub.org]\nSent: Tuesday, February 05, 2002 11:37 AM\nTo: Haroldo Stenger\nCc: Dann Corbit; Tom Lane; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Threaded PosgreSQL server\n[snip]\n> That's kinda what I was hoping ... is it something that could be\n> seamlessly integrated to have minimal impact on the code itself ... even\n> if there was some way of having a 'thread.c' vs 'non-thread.c' that could\n> be link'd in, with wrapper functions?\n\n> Then again, has anyone looked at the apache project? Apache2 has several\n> \"process models\" ... prefork being one (like ours), or a 'worker', which\n> is a prefork/threaded model where you can have n child processes, with m\n> 'threads' inside of each ... not sure if something like that could be\n> retrofit'd into what we have, but ... ?\n\nIt could be done, but it might be an effort. As an example, the ACE\nproject:\nhttp://www.cs.wustl.edu/~schmidt/ACE.html\nhas a number of easily selected threading models. It is also portable to an\nenormous number of platforms (including all flavors of UNIX). However, it\nis C++ rather than C, and so that particular transition would probably be\npretty traumatic if someone tried to use ACE as a toolset. But at least it\ndoes demonstrate that such a thing is feasible. As a \"for instance\" you can\nlook at the Jaws web server (which is both open source and very much faster\nthan the Apache server). It can easily be built with many different threading\nmodels.\n\n", "msg_date": "Tue, 5 Feb 2002 11:47:26 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Threaded PosgreSQL server" } ]
[ { "msg_contents": "Hi All,\n\nIs there a make target in postgres so that you can install the client\nlibraries, psql and the pg_* libraries WITHOUT installing the postgres\ndatabase?\n\nie. For accessing a remote postgres database?\n\nChris\n\n", "msg_date": "Wed, 6 Feb 2002 09:56:42 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "installing client without server" } ]
[ { "msg_contents": "If you can improve the ipc implementation under cygwin that would be \ngreat. Perhaps you could look at the source for the current \nimplementation \n(http://www.neuro.gatech.edu/users/cwilson/cygutils/V1.1/cygipc/), and \ncome up with a patch that would improve the performance/implmentation.\n\n--Barry\n\n\nmlw wrote:\n> Oleg Bartunov wrote:\n> \n>>we have to support earlier windows which have no shared memory support.\n>>\n> \n> I'm pretty sure it could a configuration option then. Using a file is just\n> plain stupid. \"Every\" version of Windows has supported shared memory in one\n> form or another.\n> \n> What version are you wishing to support which the cygwin guys claim does not\n> have shared memory?\n> \n> I'm prety sure I can write a shared memory interface for Windows which should\n> work across all \"supported\" versions.\n> \n> (BTW I have been a Windows developer since version 1.03.)\n> \n> \n> \n> \n>>On Tue, 26 Feb 2002, mlw wrote:\n>>\n>>\n>>>Oleg Bartunov wrote:\n>>>\n>>>>Having frustrated with performance on Windows box I'm wondering if it's\n>>>>possible to get postgresql optimized for working without shared memory,\n>>>>say in single-task mode. It looks like it's shared memory emulation on disk\n>>>>(by cygipc daemon) is responsible for performance degradation.\n>>>>In our project we have to use Windows for desktop application and it's\n>>>>single task, so we don't need shared memory. In principle, it's possible\n>>>>to hack cygipc, so it wouldn't emulate shared memory and address calls\n>>>>to normal memory, but I'm wondering if it's possible from postgres side.\n>>>>\n>>>> Regards,\n>>>>\n>>>How does cygwin do shared memory on Windows? Windows support real shared\n>>>memory, surely the cygwin guys are using \"real\" shared memory. Are you sure\n>>>that this is the problem? 
If so, might we not be able to use a few\n>>>macros/#ifdefs to do it right?\n>>>\n>>>I'll be up for that if you guys don't mind some \"ifdef/endif\" stuff here and\n>>>there.\n>>>\n>>>\n>> Regards,\n>> Oleg\n>>_____________________________________________________________\n>>Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>>Sternberg Astronomical Institute, Moscow University (Russia)\n>>Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n>>phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n\n", "msg_date": "Tue, 05 Feb 2002 18:30:45 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Oleg Bartunov wrote:\n> \n> we have to support earlier windows which have no shared memory support.\n\nI'm pretty sure it could be a configuration option then. Using a file is just\nplain stupid. \"Every\" version of Windows has supported shared memory in one\nform or another.\n\nWhat version are you wishing to support which the cygwin guys claim does not\nhave shared memory?\n\nI'm pretty sure I can write a shared memory interface for Windows which should\nwork across all \"supported\" versions.\n\n(BTW I have been a Windows developer since version 1.03.)\n\n\n\n> \n> On Tue, 26 Feb 2002, mlw wrote:\n> \n> > Oleg Bartunov wrote:\n> > >\n> > > Having frustrated with performance on Windows box I'm wondering if it's\n> > > possible to get postgresql optimized for working without shared memory,\n> > > say in single-task mode. It looks like it's shared memory emulation on disk\n> > > (by cygipc daemon) is responsible for performance degradation.\n> > > In our project we have to use Windows for desktop application and it's\n> > > single task, so we don't need shared memory. 
In principle, it's possible\n> > > to hack cygipc, so it wouldn't emulate shared memory and address calls\n> > > to normal memory, but I'm wondering if it's possible from postgres side.\n> > >\n> > > Regards,\n> >\n> > How does cygwin do shared memory on Windows? Windows support real shared\n> > memory, surely the cygwin guys are using \"real\" shared memory. Are you sure\n> > that this is the problem? If so, might we not be able to use a few\n> > macros/#ifdefs to do it right?\n> >\n> > I'll be up for that if you guys don't mind some \"ifdef/endif\" stuff here and\n> > there.\n> >\n> \n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Wed, 27 Feb 2002 07:16:28 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "On Wed, 27 Feb 2002, mlw wrote:\n\n> Oleg Bartunov wrote:\n> >\n> > we have to support earlier windows which have no shared memory support.\n>\n> I'm pretty sure it could a configuration option then. Using a file is just\n> plain stupid. \"Every\" version of Windows has supported shared memory in one\n> form or another.\n>\n> What version are you wishing to support which the cygwin guys claim does not\n> have shared memory?\n\nWe've tried windows 98 and would like to have support for ALL versions\nbeginning from W95.\n\n>\n> I'm prety sure I can write a shared memory interface for Windows which should\n> work across all \"supported\" versions.\n>\n> (BTW I have been a Windows developer since version 1.03.)\n>\n\ngreat ! We have had success to run our OpenFTS under Windows but need\npostgresql runs a bit faster. 
It'd be great if you write such interface,\nso postgresql could run under older windows without shared memory.\nbtw, BeoS seems is supported in similar way. I think people would appreciate\nthis. There are still many old hardware with Windows95 installed which\ncould be used for desktop application based on postgresql.\n\n>\n>\n> >\n> > On Tue, 26 Feb 2002, mlw wrote:\n> >\n> > > Oleg Bartunov wrote:\n> > > >\n> > > > Having frustrated with performance on Windows box I'm wondering if it's\n> > > > possible to get postgresql optimized for working without shared memory,\n> > > > say in single-task mode. It looks like it's shared memory emulation on disk\n> > > > (by cygipc daemon) is responsible for performance degradation.\n> > > > In our project we have to use Windows for desktop application and it's\n> > > > single task, so we don't need shared memory. In principle, it's possible\n> > > > to hack cygipc, so it wouldn't emulate shared memory and address calls\n> > > > to normal memory, but I'm wondering if it's possible from postgres side.\n> > > >\n> > > > Regards,\n> > >\n> > > How does cygwin do shared memory on Windows? Windows support real shared\n> > > memory, surely the cygwin guys are using \"real\" shared memory. Are you sure\n> > > that this is the problem? 
If so, might we not be able to use a few\n> > > macros/#ifdefs to do it right?\n> > >\n> > > I'll be up for that if you guys don't mind some \"ifdef/endif\" stuff here and\n> > > there.\n> > >\n> >\n> > Regards,\n> > Oleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Wed, 27 Feb 2002 16:48:27 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Windows does not really have shared memory support. This has been a \nbeef with the Win32 API for a long time now. Because it has been a long \ntime complaint, it was finally added in Win2000 and later. Likewise, \nI'd like to point out that things like semaphores, shared memory, pipes, etc, \nand other entities commonly used for concurrent programming strategies \nare slower in XP. So, because shared memory really isn't well \nsupported, they elected to have what is, in essence, memory mapped \nfiles. Multiple processes then map the same file and read/write to it \nas needed, more or less as you would shared memory. Unless you plan on \nonly targeting Win 2000 and XP, it sounds like a waste of time.\n\nAs for your shared memory interface, I suspect you'll end up with \npretty much what the cygwin guys have. 
Your implementation may prove to \nbe slightly lighter as they may have had some other issues to address but \nI doubt yours would prove to be much of a difference from a performance \nperspective as it's still going to be pretty much the same thing under \nthe covers for most of the Win32 platforms. Also, expect to be taking a \nperformance hit with XP as is so I can't say targeting that platform \nmakes much sense, IMHO.\n\n\nHere's an excerpt from MSDN:\n\nCreateSharedMemory\nThe CreateSharedMemory function creates a section of memory that is \nshared by client processes and the security package.\n\nPVOID NTAPI CreateSharedMemory (\n ULONG MaxSize,\n ULONG InitialSize\n);\nParameters\n[in] MaxSize\nSpecifies the maximum size of the shared memory.\n[in] InitialSize Specifies the initial size of the shared memory.\nReturn Values\nThe function returns a pointer to the block of shared memory, or NULL if \nthe block was not reserved.\n\nRemarks\nCreating a shared section for each client is not advisable because it is \na very expensive operation and may exhaust system resources.\n\nThe package's clients can write to shared memory which makes it \nsusceptible to attack. Data in the shared segment should not be trusted.\n\nThe pointer returned by the CreateSharedMemory function is required by \nthe AllocateSharedMemory, DeleteSharedMemory, and FreeSharedMemory \nfunctions.\n\nUse the DeleteSharedMemory function to release memory reserved by the\nCreateSharedMemory function.\n\nPointers to these functions are available in the \nLSA_SECPKG_FUNCTION_TABLE structure received by the SpInitialize function.\n\nRequirements\n Windows NT/2000 or later: Requires Windows 2000 or later.\n Header: Declared in Ntsecpkg.h.\n\n\nHere's an excerpt from Python's Win32 API:\n\nMemory Mapped Files\n\nAccess to Win32 Memory Mapped files. Memory mapped files allow efficient \nsharing of data between separate processes. 
It allows a block of \n(possibly shared) memory to behave like a file.\n\nThis was just thrown in to show that memory mapped files are the \nstandard for shared memory on Win32 systems. Likewise, I'd like to \npoint out that I've not seen anything published that would suggest that \nshared memory versus memory mapped files via the Win32 API should yield \na higher performance.\n\n\nGreg\n\n\n\n\nmlw wrote:\n> Oleg Bartunov wrote:\n> \n>>we have to support earlier windows which have no shared memory support.\n>>\n> \n> I'm pretty sure it could a configuration option then. Using a file is just\n> plain stupid. \"Every\" version of Windows has supported shared memory in one\n> form or another.\n> \n> What version are you wishing to support which the cygwin guys claim does not\n> have shared memory?\n> \n> I'm prety sure I can write a shared memory interface for Windows which should\n> work across all \"supported\" versions.\n> \n> (BTW I have been a Windows developer since version 1.03.)\n> \n> \n> \n> \n>>On Tue, 26 Feb 2002, mlw wrote:\n>>\n>>\n>>>Oleg Bartunov wrote:\n>>>\n>>>>Having frustrated with performance on Windows box I'm wondering if it's\n>>>>possible to get postgresql optimized for working without shared memory,\n>>>>say in single-task mode. It looks like it's shared memory emulation on disk\n>>>>(by cygipc daemon) is responsible for performance degradation.\n>>>>In our project we have to use Windows for desktop application and it's\n>>>>single task, so we don't need shared memory. In principle, it's possible\n>>>>to hack cygipc, so it wouldn't emulate shared memory and address calls\n>>>>to normal memory, but I'm wondering if it's possible from postgres side.\n>>>>\n>>>> Regards,\n>>>>\n>>>How does cygwin do shared memory on Windows? Windows support real shared\n>>>memory, surely the cygwin guys are using \"real\" shared memory. Are you sure\n>>>that this is the problem? 
If so, might we not be able to use a few\n>>>macros/#ifdefs to do it right?\n>>>\n>>>I'll be up for that if you guys don't mind some \"ifdef/endif\" stuff here and\n>>>there.\n>>>\n>>>\n>> Regards,\n>> Oleg\n>>_____________________________________________________________\n>>Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>>Sternberg Astronomical Institute, Moscow University (Russia)\n>>Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n>>phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \nGreg Copeland, Principal Consultant\nCopeland Computer Consulting\n--------------------------------------------------\nPGP/GPG Key at http://www.keyserver.net\n5A66 1470 38F5 5E1B CABD 19AF E25A F56E 96DC 2FA9\n--------------------------------------------------\n\n", "msg_date": "Wed, 27 Feb 2002 09:33:00 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Oh ya, sorry for not including these in my previous posting, however, \nhere's the links that may prove to be of value for you:\n\nMemory mapped files as shared memory:\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/filemap_64jd.asp\n\nShared memory interface, added August 2001:\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/security/customsecfunctions_13eh.asp\n\n\nEnjoy,\n\tGreg\n\n\nmlw wrote:\n> Oleg Bartunov wrote:\n> \n>>we have to support earlier windows which have no shared memory support.\n>>\n> \n> I'm pretty sure it could a configuration option then. Using a file is just\n> plain stupid. 
\"Every\" version of Windows has supported shared memory in one\n> form or another.\n> \n> What version are you wishing to support which the cygwin guys claim does not\n> have shared memory?\n> \n> I'm prety sure I can write a shared memory interface for Windows which should\n> work across all \"supported\" versions.\n> \n> (BTW I have been a Windows developer since version 1.03.)\n> \n> \n> \n> \n>>On Tue, 26 Feb 2002, mlw wrote:\n>>\n>>\n>>>Oleg Bartunov wrote:\n>>>\n>>>>Having frustrated with performance on Windows box I'm wondering if it's\n>>>>possible to get postgresql optimized for working without shared memory,\n>>>>say in single-task mode. It looks like it's shared memory emulation on disk\n>>>>(by cygipc daemon) is responsible for performance degradation.\n>>>>In our project we have to use Windows for desktop application and it's\n>>>>single task, so we don't need shared memory. In principle, it's possible\n>>>>to hack cygipc, so it wouldn't emulate shared memory and address calls\n>>>>to normal memory, but I'm wondering if it's possible from postgres side.\n>>>>\n>>>> Regards,\n>>>>\n>>>How does cygwin do shared memory on Windows? Windows support real shared\n>>>memory, surely the cygwin guys are using \"real\" shared memory. Are you sure\n>>>that this is the problem? 
If so, might we not be able to use a few\n>>>macros/#ifdefs to do it right?\n>>>\n>>>I'll be up for that if you guys don't mind some \"ifdef/endif\" stuff here and\n>>>there.\n>>>\n>>>\n>> Regards,\n>> Oleg\n>>_____________________________________________________________\n>>Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>>Sternberg Astronomical Institute, Moscow University (Russia)\n>>Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n>>phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \nGreg Copeland, Principal Consultant\nCopeland Computer Consulting\n--------------------------------------------------\nPGP/GPG Key at http://www.keyserver.net\n5A66 1470 38F5 5E1B CABD 19AF E25A F56E 96DC 2FA9\n--------------------------------------------------\n\n", "msg_date": "Wed, 27 Feb 2002 09:38:02 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Greg Copeland wrote:\n> \n> Windows does not really have shared memory support. This has been a\n> beef with the Win32 API for a long time now. Because it has been a long\n> time complaint, it was finally added in Win2000 and later. Likewise,\n> I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> and other entities commonly used for concurrent programming strategies\n> are slower in XP. So, because shared memory really isn't well\n> supported, they elected to have what is, in essense, memory mapped\n> files. Multiple processes then map the same file and read/write to it\n> as needed, more or less as you would shared memory. Unless you plan on\n> only targetting on Win 2000 and XP, it sounds like a waste of time.\n\nThis is not really true. Under DOS windows, i.e. 95,98, etc. 
Shared memory can\nbe done in 16 bit land with a touch of assembly and a DLL. Allocate, with\nglobalalloc, a shared memory segment. The base selector is a valid 32 bit\nselector, and the memory is mapped in the above 2G space shared and mapped to\nall 32bit processes.\n\nUnder NT through 2K, yes using a memory mapped files is the way to do it, but\nyou do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\nwhich is the NT equivilent of the system memory file. The handle returned is a\nsystem global object which can be shared across processes.\n\n\n>\n", "msg_date": "Wed, 27 Feb 2002 11:48:34 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "On Wed, 27 Feb 2002, mlw wrote:\n\n> Greg Copeland wrote:\n> >\n> > Windows does not really have shared memory support. This has been a\n> > beef with the Win32 API for a long time now. Because it has been a long\n> > time complaint, it was finally added in Win2000 and later. Likewise,\n> > I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> > and other entities commonly used for concurrent programming strategies\n> > are slower in XP. So, because shared memory really isn't well\n> > supported, they elected to have what is, in essense, memory mapped\n> > files. Multiple processes then map the same file and read/write to it\n> > as needed, more or less as you would shared memory. Unless you plan on\n> > only targetting on Win 2000 and XP, it sounds like a waste of time.\n>\n> This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> globalalloc, a shared memory segment. 
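The page-file-backed mapping mlw describes can be sketched as follows. This is a Win32-only illustration based on my reading of the CreateFileMapping/MapViewOfFile API — `INVALID_HANDLE_VALUE` is the modern spelling of the `(HANDLE)0xFFFFFFFF` mentioned above, and the mapping name `"PostgreSQL.ShmemDemo"` is made up for the example:

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Backed by the system paging file, not by a file on disk:
       INVALID_HANDLE_VALUE is ((HANDLE)(LONG_PTR)-1), i.e. 0xFFFFFFFF
       on 32-bit Windows. The name makes the mapping visible to other
       processes, which can attach with OpenFileMapping(). */
    HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                  PAGE_READWRITE, 0, 65536,
                                  "PostgreSQL.ShmemDemo");
    if (h == NULL)
        return 1;

    void *base = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (base == NULL)
    {
        CloseHandle(h);
        return 1;
    }

    /* Any cooperating process mapping the same name sees this write. */
    strcpy((char *) base, "hello from shared memory");
    printf("%s\n", (char *) base);

    UnmapViewOfFile(base);
    CloseHandle(h);
    return 0;
}
```

A second process would call `OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, "PostgreSQL.ShmemDemo")` and map its own view; the kernel keeps the pages alive as long as any handle is open.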
The base selector is a valid 32 bit\n> selector, and the memory is mapped in the above 2G space shared and mapped to\n> all 32bit processes.\n>\n> Under NT through 2K, yes using a memory mapped files is the way to do it, but\n> you do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\n> which is the NT equivilent of the system memory file. The handle returned is a\n> system global object which can be shared across processes.\n>\n\nMark,\n\ndo you consider to work on this issue ?\n\n>\n> >\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 27 Feb 2002 20:36:21 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "\n\nmlw wrote:\n> Greg Copeland wrote:\n> \n>>Windows does not really have shared memory support. This has been a\n>>beef with the Win32 API for a long time now. Because it has been a long\n>>time complaint, it was finally added in Win2000 and later. Likewise,\n>>I'd like to point out that thinks like sims, shared memory, pipes, etc,\n>>and other entities commonly used for concurrent programming strategies\n>>are slower in XP. So, because shared memory really isn't well\n>>supported, they elected to have what is, in essense, memory mapped\n>>files. Multiple processes then map the same file and read/write to it\n>>as needed, more or less as you would shared memory. Unless you plan on\n>>only targetting on Win 2000 and XP, it sounds like a waste of time.\n>>\n> \n> This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> globalalloc, a shared memory segment. 
The base selector is a valid 32 bit\n> selector, and the memory is mapped in the above 2G space shared and mapped to\n> all 32bit processes.\n\n\nThat's fine and all, however, we're talking about the Win32 API. None of\nthat qualifies. If you wish to implement back door magic for your own\nuse, I guess that's fine but I'd wonder how well it would be received by\nthe rest of the developers at large. I'm betting it wouldn't give me\nthe warm fuzzies.\n\n> \n> Under NT through 2K, yes, using a memory mapped file is the way to do it, but\n> you do not actually need to create a file; you can use (HANDLE)0xFFFFFFFF,\n> which is the NT equivalent of the system memory file. The handle returned is a\n> system global object which can be shared across processes.\n> \n\nYes, that's fine, however, this pretty much puts us back to a memory\nmapped file/shared memory implementation.\n\nI guess you could have a feature set which is perhaps optimal (again,\nI've not seen anything which indicates you'll see a perf improvement) for\nWin2000/XP platforms. Perhaps you'd be willing to perform some\nbenchmarks and provide source of random read/writes to these segments\nand present to us for comparison? 
I'm personally very curious as to see \nif there is any addition speed to be gleaned even if I am doubtful.\n\nOn a side note, if you did create such a benchmark, I'm fairly sure \nthere are a couple of guys at IBM that may find them of interest and \nmight be willing to contrast these results with other platforms (various \nwin32 and unix/linux) and publish the results.\n\nAfter all, if this path truely provides an optimized solution I'm sure \nmany Win32 coders would like to hear about it.\n\nGreg\n\n\n", "msg_date": "Wed, 27 Feb 2002 11:40:58 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Greg Copeland wrote:\n> \n> mlw wrote:\n> > Greg Copeland wrote:\n> >\n> >>Windows does not really have shared memory support. This has been a\n> >>beef with the Win32 API for a long time now. Because it has been a long\n> >>time complaint, it was finally added in Win2000 and later. Likewise,\n> >>I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> >>and other entities commonly used for concurrent programming strategies\n> >>are slower in XP. So, because shared memory really isn't well\n> >>supported, they elected to have what is, in essense, memory mapped\n> >>files. Multiple processes then map the same file and read/write to it\n> >>as needed, more or less as you would shared memory. Unless you plan on\n> >>only targetting on Win 2000 and XP, it sounds like a waste of time.\n> >>\n> >\n> > This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> > be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> > globalalloc, a shared memory segment. The base selector is a valid 32 bit\n> > selector, and the memory is mapped in the above 2G space shared and mapped to\n> > all 32bit processes.\n> \n> That's fine ad all, however, we're talking about the Win32 API. None of\n> that qualifies. 
If you wish to implement back door magic for your own\n> use, I guess that's fine but I'd wonder how well it would be received by\n> the rest of the developers at large. I'm betting it wouldn't give me\n> the warm fuzzies.\n\nIt certainly won't give UNIX developers the warm and fuzzies, that's for sure, but\nI have developed Windows software since there has been a Windows, and believe\nme, there is no \"non-trivial\" Windows application which is not forced, at some\npoint, to do these sorts of things.\n\nWindows, as a platform, is pure and simple, a disaster.\n\n> \n> >\n> > Under NT through 2K, yes, using a memory mapped file is the way to do it, but\n> > you do not actually need to create a file; you can use (HANDLE)0xFFFFFFFF,\n> > which is the NT equivalent of the system memory file. The handle returned is a\n> > system global object which can be shared across processes.\n> >\n> \n> Yes, that's fine, however, this pretty much puts us back to a memory\n> mapped file/shared memory implementation.\n\nNot necessarily true. The system memory map should not have the same criteria\nfor disk writes. A memory mapped file has some notion of hard disk \"persistence.\"\nNT will maintain the disk image. When a view of the system memory is mapped,\nthere is no requirement that the disk image is written. It will follow the OS\nvirtual memory strategy for generic memory. (It may even make it more\npersistent than generic RAM; I will have to double check.)\n\n> \n> I guess you could have a feature set which is perhaps optimal (again,\n> I've not seen anything which indicates you'll see a perf improvement) for\n> Win2000/XP platforms. Perhaps you'd be willing to perform some\n> benchmarks and provide source of random read/writes to these segments\n> and present to us for comparison? 
I'm personally very curious as to see\n> if there is any addition speed to be gleaned even if I am doubtful.\n\nI am not sure if there will be much benefit, I too am skeptical, with one\ncaveat, if NT or Windows is using a shared file handle, there may be contention\nat the disk level for the disk, and that could impact performance.\n\nOn NT, puting the shared memory file on one disk, and the database on another,\none should be able to see how often the disk which contains the shared image is\nwritten.\n\nAnother posibility is to create a RAM disk and put the shared memory file on\nit.\n\n> \n> On a side note, if you did create such a benchmark, I'm fairly sure\n> there are a couple of guys at IBM that may find them of interest and\n> might be willing to contrast these results with other platforms (various\n> win32 and unix/linux) and publish the results.\n> \n> After all, if this path truely provides an optimized solution I'm sure\n> many Win32 coders would like to hear about it.\n\nSigh, it looks like I have to reinstall NT and startup my Windows development\nstuff again. I was having so much fun the last few years developing for Linux.\nAgain, sigh. :-|\n", "msg_date": "Wed, 27 Feb 2002 13:08:10 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Oleg Bartunov wrote:\n> \n> On Wed, 27 Feb 2002, mlw wrote:\n> \n> > Greg Copeland wrote:\n> > >\n> > > Windows does not really have shared memory support. This has been a\n> > > beef with the Win32 API for a long time now. Because it has been a long\n> > > time complaint, it was finally added in Win2000 and later. Likewise,\n> > > I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> > > and other entities commonly used for concurrent programming strategies\n> > > are slower in XP. So, because shared memory really isn't well\n> > > supported, they elected to have what is, in essense, memory mapped\n> > > files. 
Multiple processes then map the same file and read/write to it\n> > > as needed, more or less as you would shared memory. Unless you plan on\n> > > only targetting on Win 2000 and XP, it sounds like a waste of time.\n> >\n> > This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> > be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> > globalalloc, a shared memory segment. The base selector is a valid 32 bit\n> > selector, and the memory is mapped in the above 2G space shared and mapped to\n> > all 32bit processes.\n> >\n> > Under NT through 2K, yes using a memory mapped files is the way to do it, but\n> > you do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\n> > which is the NT equivilent of the system memory file. The handle returned is a\n> > system global object which can be shared across processes.\n> >\n> \n> Mark,\n> \n> do you consider to work on this issue ?\n\nYea, let me think about it. What is your time frame? When I offered to work on\nit, I thought it could be a leasurely thing. I have to get a machine running\nsome form of Windows on which to develop and test.\n\nI want to say yes, and if no one else does it, I will, but I'm not sure what\nyour timeframe is. If it is the mystical 7.3, then sure I can do it easily. If\nyou need something quickly, I can help, but I don't think I could shoulder the\nwhole thing.\n\nI have a couple things I have promised people. Let me get those done. I will\ntry to write an equivilent set of functions for shget, shmat, etc. as soon as I\ncan. 
Anyone wanting to run with them can hack and test PostgreSQL on Windows.\n\nHow does that sound?\n", "msg_date": "Wed, 27 Feb 2002 13:15:57 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "On Wed, 27 Feb 2002, mlw wrote:\n\n> Oleg Bartunov wrote:\n> >\n> > On Wed, 27 Feb 2002, mlw wrote:\n> >\n> > > Greg Copeland wrote:\n> > > >\n> > > > Windows does not really have shared memory support. This has been a\n> > > > beef with the Win32 API for a long time now. Because it has been a long\n> > > > time complaint, it was finally added in Win2000 and later. Likewise,\n> > > > I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> > > > and other entities commonly used for concurrent programming strategies\n> > > > are slower in XP. So, because shared memory really isn't well\n> > > > supported, they elected to have what is, in essense, memory mapped\n> > > > files. Multiple processes then map the same file and read/write to it\n> > > > as needed, more or less as you would shared memory. Unless you plan on\n> > > > only targetting on Win 2000 and XP, it sounds like a waste of time.\n> > >\n> > > This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> > > be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> > > globalalloc, a shared memory segment. The base selector is a valid 32 bit\n> > > selector, and the memory is mapped in the above 2G space shared and mapped to\n> > > all 32bit processes.\n> > >\n> > > Under NT through 2K, yes using a memory mapped files is the way to do it, but\n> > > you do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\n> > > which is the NT equivilent of the system memory file. 
The handle returned is a\n> > > system global object which can be shared across processes.\n> > >\n> >\n> > Mark,\n> >\n> > do you consider working on this issue?\n>\n> Yea, let me think about it. What is your time frame? When I offered to work on\n\nyesterday :-) I think, at first, we need to be sure this way could provide\na performance win and compatibility.\n\n> it, I thought it could be a leisurely thing. I have to get a machine running\n> some form of Windows on which to develop and test.\n>\n> I want to say yes, and if no one else does it, I will, but I'm not sure what\n> your timeframe is. If it is the mystical 7.3, then sure I can do it easily. If\n> you need something quickly, I can help, but I don't think I could shoulder the\n> whole thing.\n>\n\nlooks like people will appreciate your work. Currently we're investigating\nanother possibility Tom suggested - a standalone backend. But things are\nstill dim, so we'll also track your development.\n\n\n> I have a couple things I have promised people. Let me get those done. I will\n> try to write an equivalent set of functions for shmget, shmat, etc. as soon as I\n> can. Anyone wanting to run with them can hack and test PostgreSQL on Windows.\n>\n> How does that sound?\n>\n\nLooks great, we could run and test. 
I'm not sure if we have some funding\nfor this work but I think I could talk with project manager once I'd be\nsure this way is promising.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 27 Feb 2002 21:29:31 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "\n\nmlw wrote:\n> Oleg Bartunov wrote:\n> \n>>On Wed, 27 Feb 2002, mlw wrote:\n>>\n>>\n[Greg's ramblings removed]\n\n>>\n>>do you consider to work on this issue ?\n>>\n> \n> Yea, let me think about it. What is your time frame? When I offered to work on\n> it, I thought it could be a leasurely thing. I have to get a machine running\n> some form of Windows on which to develop and test.\n> \n> I want to say yes, and if no one else does it, I will, but I'm not sure what\n> your timeframe is. If it is the mystical 7.3, then sure I can do it easily. If\n> you need something quickly, I can help, but I don't think I could shoulder the\n> whole thing.\n\n\nIt appears that cygipc's shared memory implementation boils down to \npretty much this section of code. As extracted from shm_connect in shm.c:\n\n GFdShm = open(CYGWIN_IPCNT_FILESHM, O_RDWR, 00666 ) ;\n shareadrshm = (CYGWIN_IPCNT_SHMSTR *)\n \t\tmmap(0, sizeof(CYGWIN_IPCNT_SHMSTR), PROT_WRITE|PROT_READ,\n\t\t\tMAP_SHARED, GFdShm, 0) ;\n if( shareadrshm == (CYGWIN_IPCNT_SHMSTR *) -1 )\n {\n close (GFdShm) ;\n return (0) ;\n }\n\n\nSo, I guess the question of the day is, how is Cygwin handling mmap() \ncalls??\n\nPerhaps looking to see if Cygwin can support page file access via \nINVALID_HANDLE_VALUE to the *assumed* CreateFileMapping call?? 
I get \nthe impression that you already understand this, however, I thought I'd \ngo ahead and toss this out there. By using the INVALID_HANDLE_VALUE \n(0xFFFFFFFF) value during's initial mapped file's creation, it is \nactually being backed by the VM's page file rather than a distinct file \nlayered on top of the FS.\n\nAs you said earlier, this may provide for less disk contension, however, \nI'm doubtful you're going to find a significant speed improvement. Just \nthe same, I guess a couple percent here and there can add up.\n\nIn case you can't tell, I'm somewhat interested in this thread. When \nyou get ready to do the implementation, I wouldn't mind kicking the code \naround a little. Of course, being that pesky voice that says, \"what \nabout this...\" can just be plain fun too. ;)\n\nGreg\n\n\n\n\n\n\n\n", "msg_date": "Wed, 27 Feb 2002 17:03:23 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Mark,\n\nI've found\n\"Fast synchronized access to shared memory for Windows and for i86 Unix-es\"\nhttp://www.ispras.ru/~knizhnik/shmem/Readme.htm\nWould't be useful ?\n\n\n\tRegards,\n\n\t\tOleg\n\n\nOn Wed, 27 Feb 2002, mlw wrote:\n\n> Oleg Bartunov wrote:\n> >\n> > On Wed, 27 Feb 2002, mlw wrote:\n> >\n> > > Greg Copeland wrote:\n> > > >\n> > > > Windows does not really have shared memory support. This has been a\n> > > > beef with the Win32 API for a long time now. Because it has been a long\n> > > > time complaint, it was finally added in Win2000 and later. Likewise,\n> > > > I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> > > > and other entities commonly used for concurrent programming strategies\n> > > > are slower in XP. So, because shared memory really isn't well\n> > > > supported, they elected to have what is, in essense, memory mapped\n> > > > files. 
Multiple processes then map the same file and read/write to it\n> > > > as needed, more or less as you would shared memory. Unless you plan on\n> > > > only targetting on Win 2000 and XP, it sounds like a waste of time.\n> > >\n> > > This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> > > be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> > > globalalloc, a shared memory segment. The base selector is a valid 32 bit\n> > > selector, and the memory is mapped in the above 2G space shared and mapped to\n> > > all 32bit processes.\n> > >\n> > > Under NT through 2K, yes using a memory mapped files is the way to do it, but\n> > > you do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\n> > > which is the NT equivilent of the system memory file. The handle returned is a\n> > > system global object which can be shared across processes.\n> > >\n> >\n> > Mark,\n> >\n> > do you consider to work on this issue ?\n>\n> Yea, let me think about it. What is your time frame? When I offered to work on\n> it, I thought it could be a leasurely thing. I have to get a machine running\n> some form of Windows on which to develop and test.\n>\n> I want to say yes, and if no one else does it, I will, but I'm not sure what\n> your timeframe is. If it is the mystical 7.3, then sure I can do it easily. If\n> you need something quickly, I can help, but I don't think I could shoulder the\n> whole thing.\n>\n> I have a couple things I have promised people. Let me get those done. I will\n> try to write an equivilent set of functions for shget, shmat, etc. as soon as I\n> can. 
Anyone wanting to run with them can hack and test PostgreSQL on Windows.\n>\n> How does that sound?\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 6 Mar 2002 18:36:21 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Sorry for taking so long to get back to everyone. I wanted to post a\nfollow up to the profiling data that has been submitted as well as\ncomment on the provided link (thank you btw).\n\nThe profiling data provided had some minor issues with it. It seems\nthat everything was able to run in exactly zero % of overall time. \nWhile this doesn't have to mean the results are invalid, it does raise\nthat question. It is certainly possible that the overall % duration for\nthe reported functions was so negligible that it should show up as 0%,\nhowever, somewhere in the back of my head I seem to recall that cygwin's\nprofiler is broken in this regard. So, while there are some minor\ndifferences, there are no timings to indicate which path in which\nprofiling is consuming the greatest amount of time. 
Is it possible to\nrun a longer benchmark to see if we can get anything to register with a\npercent value higher than 0%?\n\nAt any rate, I'm still wading through the two files to determine if\nthere is yet any value to found within the profiling results.\n\nGreg\n\n\nOn Wed, 2002-03-06 at 09:36, Oleg Bartunov wrote:\n> Mark,\n> \n> I've found\n> \"Fast synchronized access to shared memory for Windows and for i86 Unix-es\"\n> http://www.ispras.ru/~knizhnik/shmem/Readme.htm\n> Would't be useful ?\n> \n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> \n> On Wed, 27 Feb 2002, mlw wrote:\n> \n> > Oleg Bartunov wrote:\n> > >\n> > > On Wed, 27 Feb 2002, mlw wrote:\n> > >\n> > > > Greg Copeland wrote:\n> > > > >\n> > > > > Windows does not really have shared memory support. This has been a\n> > > > > beef with the Win32 API for a long time now. Because it has been a long\n> > > > > time complaint, it was finally added in Win2000 and later. Likewise,\n> > > > > I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> > > > > and other entities commonly used for concurrent programming strategies\n> > > > > are slower in XP. So, because shared memory really isn't well\n> > > > > supported, they elected to have what is, in essense, memory mapped\n> > > > > files. Multiple processes then map the same file and read/write to it\n> > > > > as needed, more or less as you would shared memory. Unless you plan on\n> > > > > only targetting on Win 2000 and XP, it sounds like a waste of time.\n> > > >\n> > > > This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> > > > be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> > > > globalalloc, a shared memory segment. 
The base selector is a valid 32 bit\n> > > > selector, and the memory is mapped in the above 2G space shared and mapped to\n> > > > all 32bit processes.\n> > > >\n> > > > Under NT through 2K, yes using a memory mapped files is the way to do it, but\n> > > > you do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\n> > > > which is the NT equivilent of the system memory file. The handle returned is a\n> > > > system global object which can be shared across processes.\n> > > >\n> > >\n> > > Mark,\n> > >\n> > > do you consider to work on this issue ?\n> >\n> > Yea, let me think about it. What is your time frame? When I offered to work on\n> > it, I thought it could be a leasurely thing. I have to get a machine running\n> > some form of Windows on which to develop and test.\n> >\n> > I want to say yes, and if no one else does it, I will, but I'm not sure what\n> > your timeframe is. If it is the mystical 7.3, then sure I can do it easily. If\n> > you need something quickly, I can help, but I don't think I could shoulder the\n> > whole thing.\n> >\n> > I have a couple things I have promised people. Let me get those done. I will\n> > try to write an equivilent set of functions for shget, shmat, etc. as soon as I\n> > can. Anyone wanting to run with them can hack and test PostgreSQL on Windows.\n> >\n> > How does that sound?\n> >\n> \n> \tRegards,\n> \t\tOleg\n>", "msg_date": "06 Mar 2002 12:10:36 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "On 6 Mar 2002, Greg Copeland wrote:\n\n> Sorry for taking so long to get back to everyone. I wanted to post a\n> follow up to the profiling data that has been submitted as well as\n> comment on the provided link (thank you btw).\n>\n> The profiling data provided had some minor issues with it. 
It seems\n> that everything was able to run in exactly zero % of overall time.\n> While this doesn't have to mean the results are invalid, it does raise\n> that question. It is certainly possible that the overall % duration for\n> the reported functions was so negligible that it should show up as 0%,\n> however, somewhere in the back of my head I seem to recall that cygwin's\n> profiler is broken in this regard. So, while there are some minor\n> differences, there are no timings to indicate which path in which\n> profiling is consuming the greatest amount of time. Is it possible to\n> run a longer benchmark to see if we can get anything to register with a\n> percent value higher than 0%?\n\nThe timings aren't correct!\ngettimeofday doesn't work properly under Cygwin. It's claimed that\n1.3.10 fixes this (we have used 1.3.9):\n\"- Use QueryPerformance* functions for gettimeofday calls. (cgf)\".\nWe'll rerun the tests and submit results.\n\nbtw, Konstantin Knizhnik suggested that the poor performance of postgres\nunder cygwin+cygipc could be caused by locking. We need to look at\nthe task manager to see how much cpu is occupied by postgres.\n>\n> At any rate, I'm still wading through the two files to determine if\n> there is yet any value to be found within the profiling results.\n>\n> Greg\n>\n>\n> On Wed, 2002-03-06 at 09:36, Oleg Bartunov wrote:\n> > Mark,\n> >\n> > I've found\n> > \"Fast synchronized access to shared memory for Windows and for i86 Unix-es\"\n> > http://www.ispras.ru/~knizhnik/shmem/Readme.htm\n> > Wouldn't it be useful?\n> >\n> >\n> > \tRegards,\n> >\n> > \t\tOleg\n> >\n> >\n> > On Wed, 27 Feb 2002, mlw wrote:\n> >\n> > > Oleg Bartunov wrote:\n> > > >\n> > > > On Wed, 27 Feb 2002, mlw wrote:\n> > > >\n> > > > > Greg Copeland wrote:\n> > > > > >\n> > > > > > Windows does not really have shared memory support. This has been a\n> > > > > > beef with the Win32 API for a long time now. Because it has been a long\n> > > > > > time complaint, it was finally added in Win2000 and later. 
Likewise,\n> > > > > > I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> > > > > > and other entities commonly used for concurrent programming strategies\n> > > > > > are slower in XP. So, because shared memory really isn't well\n> > > > > > supported, they elected to have what is, in essense, memory mapped\n> > > > > > files. Multiple processes then map the same file and read/write to it\n> > > > > > as needed, more or less as you would shared memory. Unless you plan on\n> > > > > > only targetting on Win 2000 and XP, it sounds like a waste of time.\n> > > > >\n> > > > > This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> > > > > be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> > > > > globalalloc, a shared memory segment. The base selector is a valid 32 bit\n> > > > > selector, and the memory is mapped in the above 2G space shared and mapped to\n> > > > > all 32bit processes.\n> > > > >\n> > > > > Under NT through 2K, yes using a memory mapped files is the way to do it, but\n> > > > > you do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\n> > > > > which is the NT equivilent of the system memory file. The handle returned is a\n> > > > > system global object which can be shared across processes.\n> > > > >\n> > > >\n> > > > Mark,\n> > > >\n> > > > do you consider to work on this issue ?\n> > >\n> > > Yea, let me think about it. What is your time frame? When I offered to work on\n> > > it, I thought it could be a leasurely thing. I have to get a machine running\n> > > some form of Windows on which to develop and test.\n> > >\n> > > I want to say yes, and if no one else does it, I will, but I'm not sure what\n> > > your timeframe is. If it is the mystical 7.3, then sure I can do it easily. If\n> > > you need something quickly, I can help, but I don't think I could shoulder the\n> > > whole thing.\n> > >\n> > > I have a couple things I have promised people. 
Let me get those done. I will\n> > > try to write an equivalent set of functions for shmget, shmat, etc. as soon as I\n> > > can. Anyone wanting to run with them can hack and test PostgreSQL on Windows.\n> > >\n> > > How does that sound?\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> >\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 6 Mar 2002 21:55:40 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Seems I'm replying rather quickly, but I thought I'd point out that I\nwent back and started looking at the cygipc code again. I'm now\nstarting to suspect that the majority of the performance impact we\nare seeing has to do with the semaphore implementation versus the shared\nmemory implementation.\n\nBasically, the version which is using local memory is not having to\ncontend with a significant amount of semaphore negotiation while the\nshared memory version must contend with this issue each and every time\nmemory is accessed.\n\nIt's worth noting that based on what I've seen so far (pointing out that\nlots still need to be reviewed), the semaphore implementation via the\ncygipc library is going to yield absolute worst-case semaphore\nperformance on Win32 platforms. That is, of all the native\nsynchronization mechanisms available for Win32, the use of a generic\nSemaphore is going to deliver the worst performance, whereby one would\nexpect it to be perhaps an order of magnitude slower than an ideal\nWin32 semaphore implementation.\n\nSo, if you have the time, perhaps you can write some quick benchmarks on\nWin32. 
One which simply allocates a shared memory region and randomly\nreads and writes to it. Same thing for the shared/memory mapped file\nimplementation. Next, create a multi-process implementation based on\neach of your previous two tests. In these tests, try using the cygwin\nlibrary, a native Win32 Mutex implementation (warning, this has some\nissues because on Win32 Mutexes are optimized for threading and not\nmulti-process implementations), a Win32 Critical section (warning, SMP\nscaling issues -- absolute fastest for uni-processor systems on Win32)\nand a native Win32 Semaphore implementation (horrible performance). Of\ncourse, timing each.\n\nIf my *guess* is correct, this will tell you that the significant\nperformance issues are directly associated with the use of Semaphores.\n\nAs I am starting to understand things, it looks like performance suffers\nbecause the Win32 platforms are significantly biased toward threaded IPC\nimplementations and significantly suffer when forced into a\nmulti-process architecture (very Unix like).\n\nGreg\n\n\nOn Wed, 2002-03-06 at 09:36, Oleg Bartunov wrote:\n> Mark,\n> \n> I've found\n> \"Fast synchronized access to shared memory for Windows and for i86 Unix-es\"\n> http://www.ispras.ru/~knizhnik/shmem/Readme.htm\n> Wouldn't it be useful?\n> \n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> \n> On Wed, 27 Feb 2002, mlw wrote:\n> \n> > Oleg Bartunov wrote:\n> > >\n> > > On Wed, 27 Feb 2002, mlw wrote:\n> > >\n> > > > Greg Copeland wrote:\n> > > > >\n> > > > > Windows does not really have shared memory support. This has been a\n> > > > > beef with the Win32 API for a long time now. Because it has been a long\n> > > > > time complaint, it was finally added in Win2000 and later. Likewise,\n> > > > > I'd like to point out that things like semaphores, shared memory, pipes, etc.,\n> > > > > and other entities commonly used for concurrent programming strategies\n> > > > > are slower in XP. 
So, because shared memory really isn't well\n> > > > > supported, they elected to have what is, in essense, memory mapped\n> > > > > files. Multiple processes then map the same file and read/write to it\n> > > > > as needed, more or less as you would shared memory. Unless you plan on\n> > > > > only targetting on Win 2000 and XP, it sounds like a waste of time.\n> > > >\n> > > > This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> > > > be done in 16 bit land with a touch of assembly and a DLL. Allocate, with\n> > > > globalalloc, a shared memory segment. The base selector is a valid 32 bit\n> > > > selector, and the memory is mapped in the above 2G space shared and mapped to\n> > > > all 32bit processes.\n> > > >\n> > > > Under NT through 2K, yes using a memory mapped files is the way to do it, but\n> > > > you do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\n> > > > which is the NT equivilent of the system memory file. The handle returned is a\n> > > > system global object which can be shared across processes.\n> > > >\n> > >\n> > > Mark,\n> > >\n> > > do you consider to work on this issue ?\n> >\n> > Yea, let me think about it. What is your time frame? When I offered to work on\n> > it, I thought it could be a leasurely thing. I have to get a machine running\n> > some form of Windows on which to develop and test.\n> >\n> > I want to say yes, and if no one else does it, I will, but I'm not sure what\n> > your timeframe is. If it is the mystical 7.3, then sure I can do it easily. If\n> > you need something quickly, I can help, but I don't think I could shoulder the\n> > whole thing.\n> >\n> > I have a couple things I have promised people. Let me get those done. I will\n> > try to write an equivilent set of functions for shget, shmat, etc. as soon as I\n> > can. 
Anyone wanting to run with them can hack and test PostgreSQL on Windows.\n> >\n> > How does that sound?\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>", "msg_date": "06 Mar 2002 13:05:47 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" }, { "msg_contents": "Hehe.\n\nThank, I just submitted another email about this very issue.\n\nGreg\n\n\nOn Wed, 2002-03-06 at 12:55, Oleg Bartunov wrote:\n> On 6 Mar 2002, Greg Copeland wrote:\n> \n> > Sorry for taking so long to get back to everyone. I wanted to post a\n> > follow up to the profiling data that has been submitted as well as\n> > comment on the provided link (thank you btw).\n> >\n> > The profiling data provided had some minor issues with it. It seems\n> > that everything was able to run in exactly zero % of overall time.\n> > While this doesn't have to mean the results are invalid, it does raise\n> > that question. It is certainly possible that the overall % duration for\n> > the reported functions was so negligible that it should show up as 0%,\n> > however, somewhere in the back of my head I seem to recall that cygwin's\n> > profiler is broken in this regard. So, while there are some minor\n> > differences, there are no timings to indicate which path in which\n> > profiling is consuming the greatest amount of time. Is it possible to\n> > run a longer benchmark to see if we can get anything to register with a\n> > percent value higher than 0%?\n> \n> timings doesnt' correct !\n> gettimeofday doesn't works properly under Cygwin. It's claimed that\n> 1.3.10 (we have used 1.3.9)\n> \"- Use QueryPerformance* functions for gettimeofday calls. 
(cgf)\".\n> We'll rerun tests and submit results.\n> \n> btw, Konstatntin Knizhnik suggested that poor performance of postgres\n> under cygwin+cygipc could be caused by locking. We need to look at\n> task manager to see how cpu is occupied by postgres.\n> >\n> > At any rate, I'm still wading through the two files to determine if\n> > there is yet any value to found within the profiling results.\n> >\n> > Greg\n> >\n> >\n> > On Wed, 2002-03-06 at 09:36, Oleg Bartunov wrote:\n> > > Mark,\n> > >\n> > > I've found\n> > > \"Fast synchronized access to shared memory for Windows and for i86 Unix-es\"\n> > > http://www.ispras.ru/~knizhnik/shmem/Readme.htm\n> > > Would't be useful ?\n> > >\n> > >\n> > > \tRegards,\n> > >\n> > > \t\tOleg\n> > >\n> > >\n> > > On Wed, 27 Feb 2002, mlw wrote:\n> > >\n> > > > Oleg Bartunov wrote:\n> > > > >\n> > > > > On Wed, 27 Feb 2002, mlw wrote:\n> > > > >\n> > > > > > Greg Copeland wrote:\n> > > > > > >\n> > > > > > > Windows does not really have shared memory support. This has been a\n> > > > > > > beef with the Win32 API for a long time now. Because it has been a long\n> > > > > > > time complaint, it was finally added in Win2000 and later. Likewise,\n> > > > > > > I'd like to point out that thinks like sims, shared memory, pipes, etc,\n> > > > > > > and other entities commonly used for concurrent programming strategies\n> > > > > > > are slower in XP. So, because shared memory really isn't well\n> > > > > > > supported, they elected to have what is, in essense, memory mapped\n> > > > > > > files. Multiple processes then map the same file and read/write to it\n> > > > > > > as needed, more or less as you would shared memory. Unless you plan on\n> > > > > > > only targetting on Win 2000 and XP, it sounds like a waste of time.\n> > > > > >\n> > > > > > This is not really true. Under DOS windows, i.e. 95,98, etc. Shared memory can\n> > > > > > be done in 16 bit land with a touch of assembly and a DLL. 
Allocate, with\n> > > > > > globalalloc, a shared memory segment. The base selector is a valid 32 bit\n> > > > > > selector, and the memory is mapped in the above 2G space shared and mapped to\n> > > > > > all 32bit processes.\n> > > > > >\n> > > > > > Under NT through 2K, yes using a memory mapped files is the way to do it, but\n> > > > > > you do not actually need to create a file, you can use (HANDLE)0xFFFFFFFF,\n> > > > > > which is the NT equivilent of the system memory file. The handle returned is a\n> > > > > > system global object which can be shared across processes.\n> > > > > >\n> > > > >\n> > > > > Mark,\n> > > > >\n> > > > > do you consider to work on this issue ?\n> > > >\n> > > > Yea, let me think about it. What is your time frame? When I offered to work on\n> > > > it, I thought it could be a leasurely thing. I have to get a machine running\n> > > > some form of Windows on which to develop and test.\n> > > >\n> > > > I want to say yes, and if no one else does it, I will, but I'm not sure what\n> > > > your timeframe is. If it is the mystical 7.3, then sure I can do it easily. If\n> > > > you need something quickly, I can help, but I don't think I could shoulder the\n> > > > whole thing.\n> > > >\n> > > > I have a couple things I have promised people. Let me get those done. I will\n> > > > try to write an equivilent set of functions for shget, shmat, etc. as soon as I\n> > > > can. 
Anyone wanting to run with them can hack and test PostgreSQL on Windows.\n> > > >\n> > > > How does that sound?\n> > > >\n> > >\n> > > \tRegards,\n> > > \t\tOleg\n> > >\n> >\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>", "msg_date": "06 Mar 2002 13:06:52 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: single task postgresql" } ]
[ { "msg_contents": "Well, we have a released version of PostgreSQL, I am making sure my explicit\nconfiguration patch works on the current version of PostgreSQL. It is very\ncontroversial, I know, but if anyone wants it, let me know and I will post it on\npgsql-hackers, otherwise I will post it on my site.\n", "msg_date": "Tue, 05 Feb 2002 22:45:28 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Explicit configuration patch." }, { "msg_contents": "mlw writes:\n\n> Well, we have a released version of PostgreSQL, I am making sure my explicit\n> configuration patch works on the current version of PostgreSQL. It is very\n> controversial, I know, but if anyone wants it, let me know and I will post it on\n> pgsql-hackers, otherwise I will post it on my site.\n\nWe're going to put something like this into 7.3. I've had a full summary\nand proposal sitting in my mailbox for a few weeks. Look for it on this\nlist soon.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 6 Feb 2002 19:10:54 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration patch." } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI'd like some opinions on fixing the following behavior in \npsql (and postgres in general):\n\nfoobar=> CREATE TABLE foo (bar INTEGER);\nCREATE\nfoobar=> CREATE TABLE foo (bar INTEGER UNIQUE);\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'foo_bar_key' for table 'foo'\nERROR: Relation 'foo' already exists\nturnstep=>\n\nI think that this notice should not appear in this case, since \nthe ERROR negates the actual table creation. My first question: \nis this worth pursuing?\n\nMy second question is to the method of suppression: I considered \ndoing it at the postgres server level, but realized that this \nis probably too radical a change for the numerous clients \nthat connect to the server. So, I decided just to do it in \npsql. Is there any NOTICE that should always be displayed, \neven if an ERROR occurred (and therefore the query failed)? \nSeems as though the implicit creation of indexes and \nsequences are the major notices used.\n\nThanks,\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200202060716\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBPGEf9LybkGcUlkrIEQJpGgCg1y6gUvVO/6aLJfLZTrWtAKNzdggAnRl4\nhGCqbSO7/gt5aoPC5TSqMf6E\n=alod\n-----END PGP SIGNATURE-----\n\n\n\n", "msg_date": "Wed, 6 Feb 2002 12:23:43 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Supression of NOTICEs with an ERROR" }, { "msg_contents": "\"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> I'd like some opinions on fixing the following behavior in \n> psql (and postgres in general):\n\n> foobar=> CREATE TABLE foo (bar INTEGER);\n> CREATE\n> foobar=> CREATE TABLE foo (bar INTEGER UNIQUE);\n> NOTICE: CREATE TABLE / UNIQUE will create implicit index 'foo_bar_key' for table 'foo'\n> ERROR: Relation 'foo' already exists\n> turnstep=>\n\n> I think that this notice should not appear in this case, 
since \n> the ERROR negates the actual table creation. My first question: \n> is this worth pursuing?\n\nI'd argue that it is not broken. This isn't really different from the\ncase of\n\tBEGIN;\n\tdo something provoking a NOTICE;\n\tROLLBACK;\nWould you have the system suppress the NOTICE in this case?\n\nPerhaps more to the point, this particular notice might be useful in\nfiguring out the reason for the failure --- for example, it might be\nthat the relation name conflict is not on your table name, but on the\nname of an implicitly created index or sequence. So suppressing the\nnotice might discard valuable information.\n\nNot everyone likes these notices, and so there has been some talk of\na server configuration variable that could be set to make the server\nsomewhat less chatty. This is quite independent of whether the command\nultimately succeeds or fails, however. In any case I doubt that\nhacking psql is a rational way to approach the issue; psql can't be\nexpected to know all about every sort of notice the backend might issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Feb 2002 10:55:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Supression of NOTICEs with an ERROR " } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> I think you're wasting your time to think about this now. By \n> the time 7.3 comes out, we will have an entirely new approach \n> to temp tables: they'll be named with the same names the user \n> sees, and live in per-backend temp schemas to avoid name \n> conflicts with permanent tables. So any code based on working \n> with the existing temp-name mapper will be in the scrap heap \n> before it can get released :-(\n\nThank you for the heads-up. Yes, this means some of my code is \nnow \"scrapped\" but I did learn a lot in the process. I seem to \nrecall that at least one other person was recently pointed to \ntemprel.c (I think for temporary views?) so may I suggest \nthat a README be added to the backend/utils/cache directory, or \nperhaps a comment at the top of temprel.c stating something \nsimilar to the quoted paragraph above, so that others will not \nuse the soon-to-be-replaced temp-name code.\n\nAlso, thanks for the replies regarding the unused oids. As I \nsuspected, there is no rhyme or reason. :)\n\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200202060731\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBPGEitrybkGcUlkrIEQKZxQCgwNiBrgA/yl7ZnMfYaTsvwZkamVQAoMDU\nE0YzIE0qVN1XI08jCyA74LKq\n=CjLE\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Wed, 6 Feb 2002 12:34:05 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Re: New system OIDS inside include/catalog/pg_proc.h" } ]
[ { "msg_contents": "Looking through the 7.2 readme, I see that there's no message translation \ninto Danish. \n\nWhat is required to do this? \n\n --\nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Email: kar@kakidata.dk\n", "msg_date": "Wed, 06 Feb 2002 13:25:11 GMT", "msg_from": "\"Kaare Rasmussen\" <kar@kakidata.dk>", "msg_from_op": true, "msg_subject": "i18n" }, { "msg_contents": "On Wed, Feb 06, 2002 at 01:25:11PM +0000, Kaare Rasmussen wrote:\n> Looking through the 7.2 readme, I see that there's no message translation \n> into Danish. \n> \n> What is required to do this? \n\nTo read http://developer.postgresql.org/docs/postgres/nls.html.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Wed, 6 Feb 2002 14:45:09 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: i18n" }, { "msg_contents": "Holger Krug writes:\n\n> On Wed, Feb 06, 2002 at 01:25:11PM +0000, Kaare Rasmussen wrote:\n> > Looking through the 7.2 readme, I see that there's no message translation\n> > into Danish.\n> >\n> > What is required to do this?\n>\n> To read http://developer.postgresql.org/docs/postgres/nls.html.\n\nAnd/or http://www.postgresql.org/~petere/nls.php.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 6 Feb 2002 11:14:49 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: i18n" } ]
[ { "msg_contents": "Hi,\n\nwe're about to release a new version of contrib/tree available from\nhttp://www.sai.msu.su/~megera/postgres/gist/tree/tree.tar.gz\nThe brave could test it; a very sparse doc is in README.tree (in Russian).\n\nThe basic idea is to represent a node by its path from the root,\nso '3.4' means 4-th child of 3-rd child of the root.\n\nWe introduce two types for handling trees:\n\nentree (enumerated tree) and bitree (bit tree).\n\nbitree is actually what we already used in the previous version\nand has a limitation on the number of children, specified at compilation -\n8,16,32(on default),64.\n\nThis limitation and discussion in the mailing list (Hanny, Don) inspired us to\nimplement another type - 'entree', which has a maximum number of\nchildren - 65535 !!! Thanks to Eugeny Rodichev for fruitful discussion.\n\nWe have tested the module with the 7.2 release.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 6 Feb 2002 20:47:53 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "new contrib/tree, 65535 children !" } ]
[ { "msg_contents": "\nMorning all ...\n\n\tFirst off, this is using v7.2 release ...\n\n\tOkay, this is going to drive me up the proverbial wall ... very\nsimple query:\n\nSELECT p.uid, p.handle\n FROM orientation_c poc, profiles p, gender_f pgf\n WHERE (p.uid = pgf.uid )\n AND (pgf.uid = poc.uid ) ;\n\nprofiles contains:\n\niwantu=# select count(1) from profiles;\n count\n--------\n 485969\n(1 row)\n\nand is everyone in the system ... no problems there ...\n\ngender_f contains:\n\niwantu=# select count(1) from gender_f;\n count\n-------\n 75664\n(1 row)\n\nAnd is *just* the uid's of those in profiles that are female ...\n\nfinally, orientation_c:\n\niwantu=# select count(1) from orientation_c;\n count\n--------\n 126477\n(1 row)\n\nIs again *just* the uid's of those in profiles that have a 'c'\norientiation ...\n\nNow, the above wquery has an explain of:\n\nHash Join (cost=6363.90..47877.08 rows=19692 width=35)\n -> Seq Scan on profiles p (cost=0.00..35707.69 rows=485969 width=19)\n -> Hash (cost=6174.74..6174.74 rows=75664 width=16)\n -> Hash Join (cost=2928.34..6174.74 rows=75664 width=16)\n -> Seq Scan on gender_f pgf (cost=0.00..1165.64 rows=75664 width=8)\n -> Hash (cost=1948.77..1948.77 rows=126477 width=8)\n -> Seq Scan on orientation_c poc (cost=0.00..1948.77 rows=126477 width=8)\n\nNow, a join between poc and pgf alone comes out to:\n\niwantu=# select count(1) from orientation_c poc, gender_f pgf where poc.uid = pgf.uid;\n count\n-------\n 12703\n(1 row)\n\niwantu=# explain select count(1) from orientation_c poc, gender_f pgf where poc.uid = pgf.uid;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=6363.90..6363.90 rows=1 width=16)\n -> Hash Join (cost=2928.34..6174.74 rows=75664 width=16)\n -> Seq Scan on gender_f pgf (cost=0.00..1165.64 rows=75664 width=8)\n -> Hash (cost=1948.77..1948.77 rows=126477 width=8)\n -> Seq Scan on orientation_c poc (cost=0.00..1948.77 rows=126477 width=8)\n\nEXPLAIN\n\n\nNow, what I'd like to have happen is a SEQ SCAN through the 
smaller table\n(gender_f), and grab everything in orientation_c that matches (both tables\nhave zero duplicates of uid, it's purely a one-of, so I would think that I\nshould be able to take 1 uid from pgf, and use the index on poc to\ndetermine if it exists, and do that 75664 times ...\n\nThat would leave me with 12703 UIDs to match up with appropriate records in\nthe almost 500+k records in profiles itself, instead of having to scan\nthrough each of those 500+k records themselves ...\n\nThen again, let's go one simpler:\n\niwantu=# \\d orientation_c\nTable \"orientation_c\"\n Column | Type | Modifiers\n--------+--------+-----------\n uid | bigint |\nIndexes: poc_uid\n\niwantu=# \\d poc_uid\n Index \"poc_uid\"\n Column | Type\n--------+--------\n uid | bigint\nbtree\n\niwantu=# explain select count(1) from orientation_c poc where uid = 1;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=2264.97..2264.97 rows=1 width=0)\n -> Seq Scan on orientation_c poc (cost=0.00..2264.96 rows=1 width=0)\n\nEXPLAIN\n\nif all values in orientation_c are unique, and there are 127k\nrecords ... shouldn't it use the index instead of scanning through all 127k records ? Or am I missing something totally obvious here?\n\n", "msg_date": "Wed, 6 Feb 2002 16:47:45 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "JOIN between three *simple* tables ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> iwantu=# \\d poc_uid\n> Index \"poc_uid\"\n> Column | Type\n> --------+--------\n> uid | bigint\n> btree\n\n> iwantu=# explain select count(1) from orientation_c poc where uid = 1;\n> NOTICE: QUERY PLAN:\n\n> Aggregate (cost=2264.97..2264.97 rows=1 width=0)\n> -> Seq Scan on orientation_c poc (cost=0.00..2264.96 rows=1 width=0)\n\n> EXPLAIN\n\nYou're forgetting ye olde constant-casting problem. 
You need something\nlike\n\n\tselect count(1) from orientation_c poc where uid = 1::bigint;\n\nto use an index on a bigint column.\n\nNot sure about the other thing; have you VACUUM ANALYZEd (or at least\nANALYZEd) since filling the tables? It looks like the system thinks\nthe tables are much smaller than they really are.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Feb 2002 16:14:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JOIN between three *simple* tables ... " }, { "msg_contents": "\nOkay, now I'm annoyed ... I *swear* I ran VACUUM ANALYZE a couple of times\nsince populating those tables *sigh* now its bringing up indices I'm\nexpecting :(\n\nOn Wed, 6 Feb 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > iwantu=# \\d poc_uid\n> > Index \"poc_uid\"\n> > Column | Type\n> > --------+--------\n> > uid | bigint\n> > btree\n>\n> > iwantu=# explain select count(1) from orientation_c poc where uid = 1;\n> > NOTICE: QUERY PLAN:\n>\n> > Aggregate (cost=2264.97..2264.97 rows=1 width=0)\n> > -> Seq Scan on orientation_c poc (cost=0.00..2264.96 rows=1 width=0)\n>\n> > EXPLAIN\n>\n> You're forgetting ye olde constant-casting problem. You need something\n> like\n>\n> \tselect count(1) from orientation_c poc where uid = 1::bigint;\n>\n> to use an index on a bigint column.\n>\n> Not sure about the other thing; have you VACUUM ANALYZEd (or at least\n> ANALYZEd) since filling the tables? It looks like the system thinks\n> the tables are much smaller than they really are.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Wed, 6 Feb 2002 17:41:17 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: JOIN between three *simple* tables ... " } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nThe attached patch actually does two related things. First, \nit keeps track of whether or not you are in a transaction \nand modifies the prompt slightly when you are by putting \nan asterisk at the very front of it.\n\nSecondly, it adds a \"begin transaction\" option that, when \nenabled, ensures that you are always inside a transaction \nwhile in psql, so you can always rollback. It does this \nby issuing a BEGIN at the appropriate times. This patch \n(if ever accepted) conflicts a bit with LO_RTANSACTION:\npsql now *does* have a way to know if it is in a \ntransaction or not, so that part may need to get rewritten.\n\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200202061602\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBPGGZ37ybkGcUlkrIEQJhJQCgr2TEKcvPakEIC8Exn09pInLLOywAoL4I\nuGv3TL6hUm/O1oSPrDVdmdc4\n=rmRt\n-----END PGP SIGNATURE-----", "msg_date": "Wed, 6 Feb 2002 21:07:29 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Automatic transactions in psql" }, { "msg_contents": "Greg Sabino Mullane writes:\n\n> The attached patch actually does two related things. First,\n> it keeps track of whether or not you are in a transaction\n> and modifies the prompt slightly when you are by putting\n> an asterisk at the very front of it.\n\nThis is an interesting idea, although you may want to give the user the\noption to customize his prompt. Add an escape, maybe %* or %t, with the\nmeaning \"resolves to * if in a transaction block and to the empty string\nif not\". 
(The existing escapes were all stolen from tcsh, so look there\nif you need an idea.)\n\n> Secondly, it adds a \"begin transaction\" option that, when\n> enabled, ensures that you are always inside a transaction\n> while in psql, so you can always rollback.\n\nThis should be done in the backend.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 6 Feb 2002 19:17:25 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Automatic transactions in psql" }, { "msg_contents": "> Secondly, it adds a \"begin transaction\" option that, when\n> enabled, ensures that you are always inside a transaction\n> while in psql, so you can always rollback. It does this\n> by issuing a BEGIN at the appropriate times. This patch\n> (if ever accepted) conflicts a bit with LO_RTANSACTION:\n> psql now *does* have a way to know if it is in a\n> transaction or not, so that part may need to get rewritten.\n\nSweeeet. I've gone mad trying to get people with access to our production\ndatabases to do _everything_ within a transaction when they start fiddling\naround!\n\nChris\n\n", "msg_date": "Thu, 7 Feb 2002 08:51:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Automatic transactions in psql" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> This is an interesting idea, although you may want to give the user the\n> option to customize his prompt.\n\nSeems cool. I am a bit worried about whether the transaction-block\ndetection mechanism is reliable, though. We might need to add something\nto the FE/BE protocol to make this work correctly.\n\n>> Secondly, it adds a \"begin transaction\" option that, when\n>> enabled, ensures that you are always inside a transaction\n>> while in psql, so you can always rollback.\n\n> This should be done in the backend.\n\nAgreed. 
If I recall recent discussions correctly, the spec says that\ncertain SQL commands should open a transaction and others should not.\nIt's not reasonable to have that logic in psql rather than the backend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 13:40:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Automatic transactions in psql " }, { "msg_contents": "Thread added.\n\nThis has been saved for the 7.3 release:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n---------------------------------------------------------------------------\n\nGreg Sabino Mullane wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> The attached patch actually does two related things. First, \n> it keeps track of whether or not you are in a transaction \n> and modifies the prompt slightly when you are by putting \n> an asterisk at the very front of it.\n> \n> Secondly, it adds a \"begin transaction\" option that, when \n> enabled, ensures that you are always inside a transaction \n> while in psql, so you can always rollback. It does this \n> by issuing a BEGIN at the appropriate times. This patch \n> (if ever accepted) conflicts a bit with LO_RTANSACTION:\n> psql now *does* have a way to know if it is in a \n> transaction or not, so that part may need to get rewritten.\n> \n> Greg Sabino Mullane greg@turnstep.com\n> PGP Key: 0x14964AC8 200202061602\n> \n> -----BEGIN PGP SIGNATURE-----\n> Comment: http://www.turnstep.com/pgp.html\n> \n> iQA/AwUBPGGZ37ybkGcUlkrIEQJhJQCgr2TEKcvPakEIC8Exn09pInLLOywAoL4I\n> uGv3TL6hUm/O1oSPrDVdmdc4\n> =rmRt\n> -----END PGP SIGNATURE-----\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Feb 2002 23:01:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Automatic transactions in psql" }, { "msg_contents": "...\n> > Secondly, it adds a \"begin transaction\" option that, when\n> > enabled, ensures that you are always inside a transaction\n> > while in psql, so you can always rollback. It does this\n> > by issuing a BEGIN at the appropriate times. This patch\n> > (if ever accepted) conflicts a bit with LO_RTANSACTION:\n> > psql now *does* have a way to know if it is in a\n> > transaction or not, so that part may need to get rewritten.\n\nThis part of the feature (corresponding to the Ingres \"autocommit = off\"\nfeature) should be implemented in the backend rather than in psql. I've\nhad a moderate interest in doing this but haven't gotten to it; if\nsomeone wants to pick it up I'm sure it would be well received...\n\n - Thomas\n", "msg_date": "Fri, 22 Feb 2002 15:21:51 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Automatic transactions in psql" }, { "msg_contents": "Thomas Lockhart wrote:\n> ...\n> > > Secondly, it adds a \"begin transaction\" option that, when\n> > > enabled, ensures that you are always inside a transaction\n> > > while in psql, so you can always rollback. It does this\n> > > by issuing a BEGIN at the appropriate times. This patch\n> > > (if ever accepted) conflicts a bit with LO_RTANSACTION:\n> > > psql now *does* have a way to know if it is in a\n> > > transaction or not, so that part may need to get rewritten.\n> \n> This part of the feature (corresponding to the Ingres \"autocommit = off\"\n> feature) should be implemented in the backend rather than in psql. I've\n> had a moderate interest in doing this but haven't gotten to it; if\n> someone wants to pick it up I'm sure it would be well received...\n\nAgreed. 
I wondered whether we could use the psql status part of this\npatch. We currently count parens and quotes, and show that in the psql\nprompt. Could we do that for transaction status? Considering we\nalready track the parens/quotes, another level of status on the psql\ndisplay seems a bit much, even if we could do it reliably. Comments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 11:09:48 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Automatic transactions in psql" }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > This is an interesting idea, although you may want to give the user the\n> > option to customize his prompt.\n> \n> Seems cool. I am a bit worried about whether the transaction-block\n> detection mechanism is reliable, though. We might need to add something\n> to the FE/BE protocol to make this work correctly.\n\nOK, status on this? Seems we can't apply the patch as-is because of\nreliability of the status display. Do people want a TODO item? 
I don't\n> think I want to make an incompatible protocol change for this feature.\n\nI believe Fernando Nasser at Red Hat is currently working on backend\nchanges to do this properly; so I recommend we not apply the psql hack.\n\nThe notion of customizing the psql prompt based on\nin-an-xact-block-or-not seems cool; but I do not see how to do it\nreliably without a protocol change, and it's not worth that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Feb 2002 11:17:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Automatic transactions in psql " } ]
[ { "msg_contents": "This is something that would be wonderful to have, a mapping of PostgreSQL functions to MSSQL Server functions. In fact, I have been fantasizing about a compatibility module that would allow SQL Server applications to think they were talking to MSSQL Server when they were really talking to PG. That is a huge job, but my, how many more people would convert!\n\nIn the meantime, you will have to dump schema, twiddle them to meet PostgreSQL requirements, and run them to re-create your db. Your app will have to be re-written to some degree as well.\n\nIan A. Harding\nProgrammer/Analyst II\nTacoma-Pierce County Health Department\n(253) 798-3549\nmailto: iharding@tpchd.org\n\n>>> Justin Clift <justin@postgresql.org> 02/06/02 02:31AM >>>\nHi everyone,\n\nDean here seems to be converting to PostgreSQL from MS SQL Server, and I\nhave *no* idea how to help him out.\n\nHe's not on the list, so if anyone's got suggestions, please remember to\nkeep him in the To/CC list.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-------- Original Message --------\nSubject: MS SQL compatible functions\nDate: Wed, 6 Feb 2002 18:17:13 +0800\nFrom: \"Dean Lu\" <>\nTo: <justin@postgresql.org>\n\nDear Justin,\n\tMy name is Dean Lu, I am working in a SI company in Taiwan, we are\ngoing to change our products to support the PostgreSQL and drop the MS\nSQL away, but I got some problems with the functions compatibility\nbetween MS and pgSQL. Could you please give me some suggestions or tell\nme where can I get the functions from Internet. It will be better if\nthose functions are written in C language. 
Thank you very much.\n\nBest Regards'\nDean Lu\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Wed, 06 Feb 2002 13:39:50 -0800", "msg_from": "\"Ian Harding\" <ianh@tpchd.org>", "msg_from_op": true, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" }, { "msg_contents": "Hi Ian,\n\nA compatibility layer has already been considered for Oracle; you're\nmentioning one for MS SQL Server.\n\nMaybe it's time to think about how an abstraction layer could be added,\nand then appropriate Oracle/Sybase/Informix/MSSQL/etc\nmodules/plug-ins/layers could be added to that?\n\nWonder how much work it would take?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nIan Harding wrote:\n> \n> This is something that would be wonderful to have, a mapping of PostgreSQL functions to MSSQL Server functions. In fact, I have been fantasizing about a compatibility module that would allow SQL Server applications to think they were talking to MSSQL Server when they were really talking to PG. That is a huge job, but my how many more people would convert!\n> \n> In the meantime, you will have to dump schema, twiddle them to meet PosgtreSQL requirements, and run them to re-create your db. Your app will have to be re-written to some degree as well.\n> \n> Ian A. 
Harding\n> Programmer/Analyst II\n> Tacoma-Pierce County Health Department\n> (253) 798-3549\n> mailto: iharding@tpchd.org\n> \n> >>> Justin Clift <justin@postgresql.org> 02/06/02 02:31AM >>>\n> Hi everyone,\n> \n> Dean here seems to be converting to PostgreSQL from MS SQL Server, and I\n> have *no* idea how to help him out.\n> \n> He's not on the list, so if anyone's got suggestions, please remember to\n> keep him in the To/CC list.\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> -------- Original Message --------\n> Subject: MS SQL compatible functions\n> Date: Wed, 6 Feb 2002 18:17:13 +0800\n> From: \"Dean Lu\" <>\n> To: <justin@postgresql.org>\n> \n> Dear Justin,\n> My name is Dean Lu, I am working in a SI company in Taiwan, we are\n> going to change our products to support the PostgreSQL and drop the MS\n> SQL away, but I got some problems with the functions compatibility\n> between MS and pgSQL. Could you please give me some suggestions or tell\n> me where can I get the functions from Internet. It will be better if\n> those functions are written in C language. Thank you very much.\n> \n> Best Regards'\n> Dean Lu\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 07 Feb 2002 10:42:33 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" }, { "msg_contents": "Justin Clift wrote:\n> Hi Ian,\n> \n> A compatibility has already been considered for Oracle, you're\n> mentioning one for MS SQL Server.\n> \n> Maybe it's time to think about how an abstraction layer could be added,\n> and then appropriate Oracle/Sybase/Informix/MSSQL/etc\n> modules/plug-ins/layers could be added to that?\n> \n> Wonder how much work it would take?\n\nIt's on the TODO list and I am willing to outline the options anytime.\n\nOne way is to have SQL functions that can be run on a desired database\nto CREATE FUNCTION various compatibility functions. I think there are\nsome for ODBC and I can imagine others.\n\nFor syntax stuff, I think the cleanest way would be to have a parser run\n_before_ the main parser, rewriting stuff into PostgreSQL syntax and\ncleaning up issues.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Feb 2002 19:37:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: MS SQL compatible functions]" }, { "msg_contents": "I think that kind of effort can be spent increasing PG's capabilities.\nWhat percentage of people who have MS are going to shift to PG.\nThe MS customer base is usually not the Linux/PG/GNU type. So\nwho are we trying to please. Those who buy MS are buying it for\ncommercial, financial, staffing and all the bean counting reasons.\nThose decisions are made by non-techies. So now you're going to\npresent to some IT Director who used to answer phones, your adapter\nlayer. 
And the first question you'll get is \"So does that mean I can still\ncall my 800-microsoft-help-me for support\". The second one is what is\nLinux?\n\nLet them be....\n\nJustin Clift wrote:\n\n> Hi Ian,\n>\n> A compatibility has already been considered for Oracle, you're\n> mentioning one for MS SQL Server.\n>\n> Maybe it's time to think about how an abstraction layer could be added,\n> and then appropriate Oracle/Sybase/Informix/MSSQL/etc\n> modules/plug-ins/layers could be added to that?\n>\n> Wonder how much work it would take?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> Ian Harding wrote:\n> >\n> > This is something that would be wonderful to have, a mapping of PostgreSQL functions to MSSQL Server functions. In fact, I have been fantasizing about a compatibility module that would allow SQL Server applications to think they were talking to MSSQL Server when they were really talking to PG. That is a huge job, but my how many more people would convert!\n> >\n> > In the meantime, you will have to dump schema, twiddle them to meet PosgtreSQL requirements, and run them to re-create your db. Your app will have to be re-written to some degree as well.\n> >\n> > Ian A. 
Harding\n> > Programmer/Analyst II\n> > Tacoma-Pierce County Health Department\n> > (253) 798-3549\n> > mailto: iharding@tpchd.org\n> >\n> > >>> Justin Clift <justin@postgresql.org> 02/06/02 02:31AM >>>\n> > Hi everyone,\n> >\n> > Dean here seems to be converting to PostgreSQL from MS SQL Server, and I\n> > have *no* idea how to help him out.\n> >\n> > He's not on the list, so if anyone's got suggestions, please remember to\n> > keep him in the To/CC list.\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > -------- Original Message --------\n> > Subject: MS SQL compatible functions\n> > Date: Wed, 6 Feb 2002 18:17:13 +0800\n> > From: \"Dean Lu\" <>\n> > To: <justin@postgresql.org>\n> >\n> > Dear Justin,\n> > My name is Dean Lu, I am working in a SI company in Taiwan, we are\n> > going to change our products to support the PostgreSQL and drop the MS\n> > SQL away, but I got some problems with the functions compatibility\n> > between MS and pgSQL. Could you please give me some suggestions or tell\n> > me where can I get the functions from Internet. It will be better if\n> > those functions are written in C language. Thank you very much.\n> >\n> > Best Regards'\n> > Dean Lu\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n--\n-------------------------------------------------------------------------\nMedi Montaseri medi@CyberShell.com\nUnix Distributed Systems Engineer HTTP://www.CyberShell.com\nCyberShell Engineering\n-------------------------------------------------------------------------\n\n\n\n", "msg_date": "Wed, 06 Feb 2002 17:44:42 -0800", "msg_from": "Medi Montaseri <medi@cybershell.com>", "msg_from_op": false, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" }, { "msg_contents": "I dunno... you could also get the type of people that have built (or\ninherited) a pre-existing application written in Oracle/SQLServer/etc that\nno longer want to pay the license fees and would like to switch to\nPostgreSQL, but aren't looking (or don't have the time right now) to\nconvert their entire application... having that abstraction layer makes it\nthat much easier for them to make the switch...\n\n-philip\n\nOn Wed, 6 Feb 2002, Medi Montaseri wrote:\n\n> I think that kind of effort can be spent inceasing PG's capabilities.\n> What percentage of people who have MS are going to shift to PG.\n> The MS customer base is usually not the Linux/PG/GNU type. So\n> who are we trying to please. Those who buy MS are buying it for\n> commercial, finnancial, staffing and all the bean counting reasons.\n> Those decisions are made by non-techies. So now you going to\n> present to some IT Director who used to answer phones, your adapter\n> layer. And the first question you'll get is \"So does that mean I can still\n> call my 800-microsoft-help-me for support\". 
The second one is what is\n> Linux?\n>\n> Let them be....\n>\n> Justin Clift wrote:\n>\n> > Hi Ian,\n> >\n> > A compatibility has already been considered for Oracle, you're\n> > mentioning one for MS SQL Server.\n> >\n> > Maybe it's time to think about how an abstraction layer could be added,\n> > and then appropriate Oracle/Sybase/Informix/MSSQL/etc\n> > modules/plug-ins/layers could be added to that?\n> >\n> > Wonder how much work it would take?\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > Ian Harding wrote:\n> > >\n> > > This is something that would be wonderful to have, a mapping of PostgreSQL functions to MSSQL Server functions. In fact, I have been fantasizing about a compatibility module that would allow SQL Server applications to think they were talking to MSSQL Server when they were really talking to PG. That is a huge job, but my how many more people would convert!\n> > >\n> > > In the meantime, you will have to dump schema, twiddle them to meet PosgtreSQL requirements, and run them to re-create your db. Your app will have to be re-written to some degree as well.\n> > >\n> > > Ian A. 
Harding\n> > > Programmer/Analyst II\n> > > Tacoma-Pierce County Health Department\n> > > (253) 798-3549\n> > > mailto: iharding@tpchd.org\n> > >\n> > > >>> Justin Clift <justin@postgresql.org> 02/06/02 02:31AM >>>\n> > > Hi everyone,\n> > >\n> > > Dean here seems to be converting to PostgreSQL from MS SQL Server, and I\n> > > have *no* idea how to help him out.\n> > >\n> > > He's not on the list, so if anyone's got suggestions, please remember to\n> > > keep him in the To/CC list.\n> > >\n> > > :-)\n> > >\n> > > Regards and best wishes,\n> > >\n> > > Justin Clift\n> > >\n> > > -------- Original Message --------\n> > > Subject: MS SQL compatible functions\n> > > Date: Wed, 6 Feb 2002 18:17:13 +0800\n> > > From: \"Dean Lu\" <>\n> > > To: <justin@postgresql.org>\n> > >\n> > > Dear Justin,\n> > > My name is Dean Lu, I am working in a SI company in Taiwan, we are\n> > > going to change our products to support the PostgreSQL and drop the MS\n> > > SQL away, but I got some problems with the functions compatibility\n> > > between MS and pgSQL. Could you please give me some suggestions or tell\n> > > me where can I get the functions from Internet. It will be better if\n> > > those functions are written in C language. Thank you very much.\n> > >\n> > > Best Regards'\n> > > Dean Lu\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> > --\n> > \"My grandfather once told me that there are two kinds of people: those\n> > who work and those who take the credit. 
He told me to try to be in the\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n> --\n> -------------------------------------------------------------------------\n> Medi Montaseri medi@CyberShell.com\n> Unix Distributed Systems Engineer HTTP://www.CyberShell.com\n> CyberShell Engineering\n> -------------------------------------------------------------------------\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Wed, 6 Feb 2002 17:51:41 -0800 (PST)", "msg_from": "Philip Hallstrom <philip@adhesivemedia.com>", "msg_from_op": false, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" }, { "msg_contents": "Philip Hallstrom wrote:\n> I dunno... you could also get the type of people that have built (or\n> inherited) a pre-existing application written in Oracle/SQLServer/etc that\n> no longer want to pay the license fees and would like to switch to\n> PostgreSQL, but aren't looking (or don't have the time right now) to\n> convert their entire application... having that abstraction layer makes it\n> that much easier for them to make the switch...\n\nYes. No question about it. Our biggest problem is that it lacks\n_excitement_. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Feb 2002 21:32:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" }, { "msg_contents": "> Or it lacks priority.....on another thread a caller is asking for Distributed\n> PG\n> so he can install 20 linux boxes instead of an IBM 390 or Sun E10k\n> \n> PG and Ingress have always been more popular in the academic world, why\n> not channel our energy in that area. Just imagine, a good distributed OS\n> (linux),\n> a good distributed DB (DPG) and a good distributed HTTP (apache, soap) and\n> we have lift off.\n\nNo question it is important. It actually isn't the lack of excitement\nwhich is the problem, it is really the lack of challenge and the fact it\nis tedious to mimic another database.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Feb 2002 22:06:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" }, { "msg_contents": "Bruce Momjian wrote:\n\n> Philip Hallstrom wrote:\n> > I dunno... you could also get the type of people that have built (or\n> > inherited) a pre-existing application written in Oracle/SQLServer/etc that\n> > no longer want to pay the license fees and would like to switch to\n> > PostgreSQL, but aren't looking (or don't have the time right now) to\n> > convert their entire application... having that abstraction layer makes it\n> > that much easier for them to make the switch...\n>\n> Yes. No question about it. Our biggest problem is that it lacks\n> _excitement_. 
:-)\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nOr it lacks priority.....on another thread a caller is asking for Distributed\nPG\nso he can install 20 linux boxes instead of an IBM 390 or Sun E10k\n\nPG and Ingres have always been more popular in the academic world, why\nnot channel our energy in that area. Just imagine, a good distributed OS\n(linux),\na good distributed DB (DPG) and a good distributed HTTP (apache, soap) and\nwe have lift off.\n\n--\n-------------------------------------------------------------------------\nMedi Montaseri medi@CyberShell.com\nUnix Distributed Systems Engineer HTTP://www.CyberShell.com\nCyberShell Engineering\n-------------------------------------------------------------------------\n\n\n\n", "msg_date": "Wed, 06 Feb 2002 19:54:36 -0800", "msg_from": "Medi Montaseri <medi@cybershell.com>", "msg_from_op": false, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" }, { "msg_contents": "Dear all,\n\n Thanks for all of your advice and replies.\n I've started to study how to write a function in pgSQL, not only for my company or myself,\nand not just to please anyone. That is simply what I am going to do, and I'll try it.\n In Taiwan, most of the schools teach about the MS series. So, we don't have the chance to touch \nLinux, or I may say \" We didn't smell it when we were students.\" I started learning Linux\n8 months ago, and meanwhile I found that's what I was looking for. \n MS is a very big company and a provider of a great variety of software; they spent lots of money on \nmarketing and localization for expanding their customer base. We can't ignore how they have \nsuccessfully locked in their customers. 
So, I won't blame or hate anyone who made the \ndecision to buy MS stuff.\n My company is a very small and young SI company; fortunately my boss is an open guy,\nand he agrees that I should try to change the DB we used originally. In commercial\nthinking, you may say that what I am doing is just about keeping down the product cost.\nBut I expect more from it. \n \"Linux\": I cannot really define it, since I am just a beginner in this area, but if\nI may speak freely, I would say Linux is an open spirit; we have no enemies here in Linux.\nNo fights, no hate, no self-conceit, no self-importance. Thus, I am not talking about business, I am \ntalking about how we should make more people know that pgSQL is a fantastic DB. If some SI \ncompanies can join the pgSQL group, isn't that wonderful? If we build a freeway or a bridge between \nthem and pgSQL, isn't it wonderful?\n I'd like to be one of the developers of pgSQL, but I know I am not qualified to do so; still, I'll learn,\nand hope I will be one. My English is very poor, but I've tried hard to read all the resources from \nthe Internet, and I need more resources about writing functions in C language; that's why I am here \nlooking for some advice and help from you masters.\n Thanks for your help again.\n\nBest Regards'\nDean Lu\n\n-----Original Message-----\nFrom: medi@mail.tmdt.com.tw [mailto:medi@mail.tmdt.com.tw]On Behalf Of Medi Montaseri\nSent: Thursday, February 07, 2002 9:45 AM\nTo: Justin Clift\nCc: Ian Harding; PostgreSQL Hackers Mailing List; pgsql-general@postgresql.org; dean@tmdt.com.tw\nSubject: Re: [GENERAL] [Fwd: MS SQL compatible functions]\n\n\nI think that kind of effort can be spent inceasing PG's capabilities.\nWhat percentage of people who have MS are going to shift to PG.\nThe MS customer base is usually not the Linux/PG/GNU type. So\nwho are we trying to please. 
Those who buy MS are buying it for\ncommercial, finnancial, staffing and all the bean counting reasons.\nThose decisions are made by non-techies. So now you going to\npresent to some IT Director who used to answer phones, your adapter\nlayer. And the first question you'll get is \"So does that mean I can still\ncall my 800-microsoft-help-me for support\". The second one is what is\nLinux?\n\nLet them be....\n\nJustin Clift wrote:\n\n> Hi Ian,\n>\n> A compatibility has already been considered for Oracle, you're\n> mentioning one for MS SQL Server.\n>\n> Maybe it's time to think about how an abstraction layer could be added,\n> and then appropriate Oracle/Sybase/Informix/MSSQL/etc\n> modules/plug-ins/layers could be added to that?\n>\n> Wonder how much work it would take?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> Ian Harding wrote:\n> >\n> > This is something that would be wonderful to have, a mapping of PostgreSQL functions to MSSQL Server functions. In fact, I have been fantasizing about a compatibility module that would allow SQL Server applications to think they were talking to MSSQL Server when they were really talking to PG. That is a huge job, but my how many more people would convert!\n> >\n> > In the meantime, you will have to dump schema, twiddle them to meet PosgtreSQL requirements, and run them to re-create your db. Your app will have to be re-written to some degree as well.\n> >\n> > Ian A. 
Harding\n> > Programmer/Analyst II\n> > Tacoma-Pierce County Health Department\n> > (253) 798-3549\n> > mailto: iharding@tpchd.org\n> >\n> > >>> Justin Clift <justin@postgresql.org> 02/06/02 02:31AM >>>\n> > Hi everyone,\n> >\n> > Dean here seems to be converting to PostgreSQL from MS SQL Server, and I\n> > have *no* idea how to help him out.\n> >\n> > He's not on the list, so if anyone's got suggestions, please remember to\n> > keep him in the To/CC list.\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > -------- Original Message --------\n> > Subject: MS SQL compatible functions\n> > Date: Wed, 6 Feb 2002 18:17:13 +0800\n> > From: \"Dean Lu\" <>\n> > To: <justin@postgresql.org>\n> >\n> > Dear Justin,\n> > My name is Dean Lu, I am working in a SI company in Taiwan, we are\n> > going to change our products to support the PostgreSQL and drop the MS\n> > SQL away, but I got some problems with the functions compatibility\n> > between MS and pgSQL. Could you please give me some suggestions or tell\n> > me where can I get the functions from Internet. It will be better if\n> > those functions are written in C language. Thank you very much.\n> >\n> > Best Regards'\n> > Dean Lu\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n--\n-------------------------------------------------------------------------\nMedi Montaseri medi@CyberShell.com\nUnix Distributed Systems Engineer HTTP://www.CyberShell.com\nCyberShell Engineering\n-------------------------------------------------------------------------\n\n", "msg_date": "Thu, 7 Feb 2002 18:47:24 +0800", "msg_from": "\"Dean Lu\" <dean@tmdt.com.tw>", "msg_from_op": false, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" }, { "msg_contents": "Hi Justin,\n\nJust saw this on the newsgroup.\n\nI think it would be interesting to do some gap analysis (apologies for the\nmanagement talk) that identifies what could be done in MSSql server that\ncannot be done in postgres and use that to:\n\na. provide functional comparison material to allow people to make a more\ninformed decision when selecting their db.\nb. 
drive out potential projects to aid and encourage migration.\n\nI probably won't have much time to get involved in this over the next\ncouple of months but may be able to get involved later.\n\nany thoughts?\n\nregards\n\nsteve\n----- Original Message -----\nFrom: \"Justin Clift\" <justin@postgresql.org>\nTo: \"Ian Harding\" <ianh@tpchd.org>; \"PostgreSQL Hackers Mailing List\"\n<pgsql-hackers@postgresql.org>\nCc: <pgsql-general@postgresql.org>; <dean@tmdt.com.tw>\nSent: Wednesday, February 06, 2002 11:42 PM\nSubject: Re: [GENERAL] [Fwd: MS SQL compatible functions]\n\n\n> Hi Ian,\n>\n> A compatibility has already been considered for Oracle, you're\n> mentioning one for MS SQL Server.\n>\n> Maybe it's time to think about how an abstraction layer could be added,\n> and then appropriate Oracle/Sybase/Informix/MSSQL/etc\n> modules/plug-ins/layers could be added to that?\n>\n> Wonder how much work it would take?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n> Ian Harding wrote:\n> >\n> > This is something that would be wonderful to have, a mapping of\nPostgreSQL functions to MSSQL Server functions. In fact, I have been\nfantasizing about a compatibility module that would allow SQL Server\napplications to think they were talking to MSSQL Server when they were\nreally talking to PG. That is a huge job, but my how many more people would\nconvert!\n> >\n> > In the meantime, you will have to dump schema, twiddle them to meet\nPosgtreSQL requirements, and run them to re-create your db. Your app will\nhave to be re-written to some degree as well.\n> >\n> > Ian A. 
Harding\n> > Programmer/Analyst II\n> > Tacoma-Pierce County Health Department\n> > (253) 798-3549\n> > mailto: iharding@tpchd.org\n> >\n> > >>> Justin Clift <justin@postgresql.org> 02/06/02 02:31AM >>>\n> > Hi everyone,\n> >\n> > Dean here seems to be converting to PostgreSQL from MS SQL Server, and I\n> > have *no* idea how to help him out.\n> >\n> > He's not on the list, so if anyone's got suggestions, please remember to\n> > keep him in the To/CC list.\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > -------- Original Message --------\n> > Subject: MS SQL compatible functions\n> > Date: Wed, 6 Feb 2002 18:17:13 +0800\n> > From: \"Dean Lu\" <>\n> > To: <justin@postgresql.org>\n> >\n> > Dear Justin,\n> > My name is Dean Lu, I am working in a SI company in Taiwan, we\nare\n> > going to change our products to support the PostgreSQL and drop the MS\n> > SQL away, but I got some problems with the functions compatibility\n> > between MS and pgSQL. Could you please give me some suggestions or tell\n> > me where can I get the functions from Internet. It will be better if\n> > those functions are written in C language. Thank you very much.\n> >\n> > Best Regards'\n> > Dean Lu\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Thu, 7 Feb 2002 15:42:45 -0000", "msg_from": "\"Steve Boyle \\(Roselink\\)\" <boylesa@roselink.co.uk>", "msg_from_op": false, "msg_subject": "Re: [Fwd: MS SQL compatible functions]" } ]
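Bruce's idea of shipping SQL scripts that CREATE FUNCTION compatibility wrappers can be sketched concretely. The two wrappers below are purely hypothetical (no such compatibility package existed in the tree at the time); they assume that MS SQL Server's LEN() and GETDATE() can be emulated with PostgreSQL's length() and CURRENT_TIMESTAMP:

```sql
-- Hypothetical MS SQL Server compatibility wrappers, written as plain
-- SQL functions on top of the PostgreSQL builtins.  Illustrative only;
-- a real migration kit would collect many more of these in one script.

-- SQL Server's LEN() maps onto PostgreSQL's length()
CREATE FUNCTION len(text) RETURNS integer
    AS 'SELECT length($1)' LANGUAGE 'sql';

-- SQL Server's GETDATE() maps onto CURRENT_TIMESTAMP
CREATE FUNCTION getdate() RETURNS timestamp
    AS 'SELECT CURRENT_TIMESTAMP::timestamp' LANGUAGE 'sql';
```

A DBA would run such a script once per database. Anything without a one-to-one function mapping (T-SQL control flow, @@ variables, and the like) would still need the rewriting pre-parser Bruce describes.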
[ { "msg_contents": "Hi all,\n\nI've started separating out the general socket code away from the \nfrontend/backend protocol code to make way for the inclusion of other \nnetwork protocols, and have some style questions.\n\nI see a lot of the code has K&R style names such as foo_bar_baz() and some \nhas FooBarBaz() style. Is one a newer style and one an older? or is it \njust a function of different developers working on the code? Personally I \nlike K&R style but will of course adapt to whatever is the consensus view.\n\nAlso, while I'm in there (and after reading the threads thread), I'll be \nmoving some of the globals/statics to either the Port structure or \nprotocol specific structures. Anyone have any other special requests?\n\nBrian\n\n", "msg_date": "Wed, 6 Feb 2002 19:27:20 -0500 (EST)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": true, "msg_subject": "function and variable names" }, { "msg_contents": "Brian Bruns <camber@ais.org> writes:\n> I see a lot of the code has K&R style names such as foo_bar_baz() and some \n> has FooBarBaz() style. Is one a newer style and one an older? or is it \n> just a function of different developers working on the code?\n\nThe latter; we have never tried to enforce any particular naming style.\nI'd counsel trying to match the style of the existing code near where\nyou are working.\n\n> Also, while I'm in there (and after reading the threads thread), I'll be \n> moving some of the globals/statics to either the Port structure or \n> protocol specific structures. Anyone have any other special requests?\n\nDon't overdo it. There will never be more than one postmaster thread,\neven if we have threads at all; ergo no good reason to avoid static\nstorage of things that are only used by the postmaster.\n\nThe Port structure is actually a leftover from back when the postmaster\ndid its own poor-man's-multithreading to handle concurrent client\nauthentication operations. 
I am not sure we really need/want it at all\nanymore. I'd suggest thinking in terms of making the code simpler and\nmore readable, rather than worrying about whether it can be\nmultithreaded.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 13:47:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: function and variable names " } ]
[ { "msg_contents": "Here's a concrete summary of the various proposals about the location of\nconfiguration files and other things that have been discussed a while ago.\nI think we pretty much came to agree -- if not, the rest could perhaps\nbetter be discussed based on the following. There are also a couple of\nopen items that need resolution.\n\n\n* postgresql.conf configuration file\n\nDefault location: ${sysconfdir}/postgresql.conf (where ${sysconfdir}\ndefaults to /usr/local/pgsql/etc). For those who don't know, --sysconfdir\nis actually a configure option, so for \"base-system\" installs you can set\nit to /etc if you prefer.\n\nOverridable by:\n\n- postmaster option -C FILENAME (not directory)\n\n\n* pg_hba.conf, pg_ident.conf, secondary \"password\" files, SSL\n certificates, all other configuration things formerly in $PGDATA\n\nDefault location: ${sysconfdir}\n\nOverridable by postgresql.conf/GUC options (thus also\npostmaster command-line options). Proposed names:\n\nhba_conf_file\nident_conf_file\npassword_file_dir\nssl_key_file\nssl_certificate_file\n\nQUESTION: Do we want to have the -C command-line option affect these\nparameters in some way? It would seem quite sensible. But if -C denotes\na file name, as was requested, the location of say pg_hba.conf would be\n\"${directory part of -C}/pg_hba.conf\" (base-name fixed), which might not\nbe the most elegant way.\n\n\n* Permission of configuration files\n\nBy default, I like postgresql.conf, pg_hba.conf, and pg_ident.conf as\nroot-owned (or whatever the installer was) 0644 for ease of installation\nand use. Password files containing actual passwords and the SSL files\nneed to be postgres-owned 0600 (or less), which will require a chmod or\nchown call or two in most installations, but setting up secondary\n\"password\" files or SSL will take a few key strokes anyway. 
We should\nhave run-time security checks that we don't use world-readable files that\ncontain secrets.\n\n\n* Central database cluster storage area\n\nDefault location for postmaster and initdb: ${localstatedir}/data (which\ndefaults to /usr/local/pgsql/var/data).\n\nOverridable by, in order of decreasing priority:\n- -D option\n- $PGDATA environment variable (perhaps obsolescent, but no reason to\nremove it outright)\n- postgresql.conf parameter\n\n\n* Possible transitional aid\n\nWe could have an environment variable $PGCONF that overrides the location\nof the postgresql.conf file (in some to be specified way), so those who\ndon't like the new setup can set PGCONF=$PGDATA or something like that.\nHowever, since this would require the user to actually copy over all the\nnew configurations files from .../etc/ to $PGDATA, I don't know how many\nwould actually go for that.\n\n\nComments? Better ideas?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 6 Feb 2002 19:28:12 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Summary of new configuration file and data directory locations" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> * pg_hba.conf, pg_ident.conf, secondary \"password\" files, SSL\n> certificates, all other configuration things formerly in $PGDATA\n> Default location: ${sysconfdir}\n\nThis strikes me as a fairly BAD idea because of the security\nimplications of keeping these things in a world-accessible directory.\nI'm willing to tolerate moving postgresql.conf but I am much less\nwilling to move anything that contains sensitive information.\n\nI suggest that the default location of these things continue to be\n$PGDATA (which as you note will be settable from postgresql.conf).\n\n> QUESTION: Do we want to have the -C command-line option affect these\n> parameters in some way? 
It would seem quite sensible.\n\nNot necessary if done as above.\n\n> Password files containing actual passwords and the SSL files\n> need to be postgres-owned 0600 (or less), which will require a chmod or\n> chown call or two in most installations, but setting up secondary\n> \"password\" files or SSL will take a few key strokes anyway. We should\n> have run-time security checks that we don't use world-readable files that\n> contain secrets.\n\nWhile such a check is not a bad idea, it is really just locking the barn\ndoor after the horse has been stolen. Better to set up the default\nconfiguration to make such errors difficult to commit in the first place.\n\n> We could have an environment variable $PGCONF that overrides the location\n> of the postgresql.conf file (in some to be specified way), so those who\n> don't like the new setup can set PGCONF=$PGDATA or something like that.\n\nThe postmaster -C switch seems sufficient for this; I don't see a reason\nto invent an environment var too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 13:58:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Summary of new configuration file and data directory locations " } ]
[ { "msg_contents": "\nPerhaps someone can shed light on this problem we are seeing.\n\nIn a table on Postgresql 7.2.b2, we have a table with one of the columns defined as\ntext[7]. If you select * from tab1, this columns data comes out as this:\n\n yada | yoda | {\"wkend\",\"wkd\",\"wkd\",\"wkd\",\"wkd\",\"wkd\",\"wkend\"}\n\nThen we dump and restore into a table with the exact same schema (this\ncolumn is a text[7]) but now when you select data from the table is comes\nout looking like this:\n\n yada | yoda | {wkend,wkd,wkd,wkd,wkd,wkd,wkend}\n\n\nWas the handling of arrays changed in this latest release of the server?\n\nSpecifically, we have a java program that reads this table data and expects\nit returned with the double quotes. It is now failing.\n\nAny information on this would be helpful.\n\nThanks,\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n\n", "msg_date": "Wed, 6 Feb 2002 16:35:25 -0800 (PST)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "text array" }, { "msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> Was the handling of arrays changed in this latest release of the server?\n\nYes. Double quotes are now inserted if and only if needed to re-parse\nthe array value correctly. The old code inserted quotes more or less\nat-whim.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 10:31:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: text array " } ]
[ { "msg_contents": "Hi,\n\nHow about adding these for 7.3? Can this be put in the TODO?\n\nEXTRACT(TIMESTAMP FROM epoch);\nEXTRACT(DATE FROM epoch);\nEXTRACT(DOW FROM epoch);\n...\n\netc.\n\nWould be very useful.\n\nChris\n\n", "msg_date": "Thu, 7 Feb 2002 11:18:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Suggestions for 7.3 date handling" }, { "msg_contents": "> How about adding these for 7.3? Can this be put in the TODO?\n> \n> EXTRACT(TIMESTAMP FROM epoch);\n> EXTRACT(DATE FROM epoch);\n> EXTRACT(DOW FROM epoch);\n> ...\n\nWhat do you want this to do exactly?\n\n - Thomas\n", "msg_date": "Thu, 07 Feb 2002 05:15:15 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Suggestions for 7.3 date handling" }, { "msg_contents": "> > How about adding these for 7.3? Can this be put in the TODO?\n> >\n> > EXTRACT(TIMESTAMP FROM epoch);\n> > EXTRACT(DATE FROM epoch);\n> > EXTRACT(DOW FROM epoch);\n> > ...\n>\n> What do you want this to do exactly?\n\nOK, we have some legacy columns that use int4 as their type. It would be\nnice to be able to do easy date handling with them.\n\neg. EXTRACT(TIMESTAMP FROM EPOCH '1081237846')\n\nChris\n\n", "msg_date": "Thu, 7 Feb 2002 13:27:27 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Suggestions for 7.3 date handling" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> What do you want this to do exactly?\n\n> OK, we have some legacy columns that use int4 as their type. 
It would be\n> nice to be able to do easy date handling with them.\n\nCast to abstime.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 11:12:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestions for 7.3 date handling " }, { "msg_contents": "Christopher Kings-Lynne writes:\n\n> OK, we have some legacy columns that use int4 as their type. It would be\n> nice to be able to do easy date handling with them.\n>\n> eg. EXTRACT(TIMESTAMP FROM EPOCH '1081237846')\n\ntimestamp 'epoch' + interval '1 second' * your_int\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 7 Feb 2002 11:43:42 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Suggestions for 7.3 date handling" }, { "msg_contents": "(resent, with changes)\n\n> OK, we have some legacy columns that use int4 as their type. It would be\n> nice to be able to do easy date handling with them.\n\nHow about this? 
Folding in Peter's suggestion to use a multiplication\noperator rather than a text string conversion which I originally\nproposed:\n\nthomas=# create or replace function date_part(text,int4)\nthomas-# returns float8 as\nthomas-# 'select date_part($1, timestamp without time zone \\'epoch\\'\nthomas-# + (interval '1 sec' * $2));' language 'sql';\n\nthomas=# select extract('epoch' from timestamp without time zone\n'today'),\nthomas-# extract('epoch' from 1013040000);\n date_part | date_part \n------------+------------\n 1013040000 | 1013040000\n\nSeems to provide what you want, and you don't have to do any coding.\n\nbtw, I like that \"create or replace\" we have now!\n\n - Thomas\n", "msg_date": "Thu, 07 Feb 2002 17:29:38 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Suggestions for 7.3 date handling" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> thomas=# create or replace function date_part(text,int4)\n> thomas-# returns float8 as\n> thomas-# 'select date_part($1, timestamp without time zone \\'epoch\\'\n> thomas-# + (interval '1 sec' * $2));' language 'sql';\n\nOr just\n\nregression=# create or replace function date_part(text,int4)\nregression-# returns float8 as\nregression-# 'select date_part($1, $2::abstime::timestamp)'\nregression-# language sql;\n\nThomas, of course, would really like to get rid of type abstime,\nbut it's so dang useful (for exactly this reason) that I don't\nexpect it to disappear until Unixen move away from 4-byte time_t.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 12:58:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestions for 7.3 date handling " } ]
[ { "msg_contents": "\nOkay, went back through the archives, as I know that Tom provided a\nsolution for this before, and found it at:\n\nhttp://archives.postgresql.org/pgsql-sql/2001-06/msg00329.php\n\nPlain and simple ... makes perfect sense ... doesn't work in v7.2, or, at\nleast, not as I'm expecting it to ...\n\nI've broken what I'm trying to do down to the *basest* component I can:\n\nexplain SELECT p.uid, p.handle\n  FROM gender_f pgf JOIN profiles p ON (pgf.uid = p.uid) ;\n\nWhich explains out as:\n\nHash Join  (cost=1354.80..45297.83 rows=75664 width=27)\n  ->  Seq Scan on profiles p  (cost=0.00..35707.69 rows=485969 width=19)\n  ->  Hash  (cost=1165.64..1165.64 rows=75664 width=8)\n        ->  Seq Scan on gender_f pgf  (cost=0.00..1165.64 rows=75664 width=8)\n\nNow, profiles has uid as its primary KEY, and there are no\nduplicates in gender_f ... so, as my HashJoin points out, I should have 75664\nresults returned ... that is expected ... and the SeqScan on gender_f is\nexpected ... but the SeqScan on profiles is what I would hope to get rid\nof ... get uid from gender_f, find corresponding entry in profiles ... 
it's only\never going to pull out 75664 out of 485969 records from profiles, so why\nwould it seqscan *through* profiles for each and every UID?\n\nNow, if I go to the next level that I'm trying to pull together:\n\nexplain SELECT p.uid, p.handle\n  FROM ( orientation_c poc JOIN gender_f pgf USING ( uid ) ) JOIN profiles p ON (pgf.uid = p.uid) ;\n\nIt still explains, what I think, is wrong:\n\nHash Join  (cost=6023.92..47537.10 rows=75664 width=35)\n  ->  Seq Scan on profiles p  (cost=0.00..35707.69 rows=485969 width=19)\n  ->  Hash  (cost=5834.76..5834.76 rows=75664 width=16)\n        ->  Merge Join  (cost=0.00..5834.76 rows=75664 width=16)\n              ->  Index Scan using poc_uid on orientation_c poc  (cost=0.00..2807.82 rows=126477 width=8)\n              ->  Index Scan using pgf_uid on gender_f pgf  (cost=0.00..1575.79 rows=75664 width=8)\n\n\nThe MergeJoin between poc/pgf will only return 12000 records, and since it\nis a 1:1 relationship between each of those tables, there will *only* be\n12000 records pulled from profiles ... 
yet it's doing a SeqScan through all\n485k records for each of those UIDs?\n\nThis is after I've performed a VACUUM ANALYZE ...\n\nThe final query itself is:\n\nSELECT p.uid, p.profiles_handle\n  FROM ( ( profiles_orientation_c poc JOIN profiles_gender_f pgf USING ( uid ) ) JOIN iwantu_profiles p USING (uid ) ) LEFT JOIN iwantu_last_login ll USING ( uid );\n\nWhich explains as:\n\nHash Join  (cost=31636.40..78239.34 rows=75664 width=43)\n  ->  Hash Join  (cost=6023.92..47537.10 rows=75664 width=35)\n        ->  Seq Scan on iwantu_profiles p  (cost=0.00..35707.69 rows=485969 width=19)\n        ->  Hash  (cost=5834.76..5834.76 rows=75664 width=16)\n              ->  Merge Join  (cost=0.00..5834.76 rows=75664 width=16)\n                    ->  Index Scan using poc_uid on profiles_orientation_c poc  (cost=0.00..2807.82 rows=126477 width=8)\n                    ->  Index Scan using pgf_uid on profiles_gender_f pgf  (cost=0.00..1575.79 rows=75664 width=8)\n  ->  Hash  (cost=7955.64..7955.64 rows=485964 width=8)\n        ->  Seq Scan on iwantu_last_login ll  (cost=0.00..7955.64 rows=485964 width=8)\n\nEXPLAIN\n\nSo, poc&pgf are MergeJoin's, leaving me with 12000 records again ... then\nthere is the SeqScan/HashJoin with profiles, which will leave me with\n12000 records, but with more information ... but, again, for each of\n*those* 12000 records, it's doing a SeqScan on last_login's 485k records,\ninstead of using the index ... 
again, like pgf and poc, there is only one\nrecord for every uid, so we aren't dealing with duplicates ...\n\nNow, if I 'set enable_seqscan=false;' and do the exact same explain, it\ndefinitely comes more in line with what I'd like to see, as far as index\nusage is concerned:\n\nNested Loop (cost=0.00..546759.46 rows=75664 width=43)\n -> Nested Loop (cost=0.00..272274.75 rows=75664 width=35)\n -> Merge Join (cost=0.00..5834.76 rows=75664 width=16)\n -> Index Scan using poc_uid on profiles_orientation_c poc (cost=0.00..2807.82 rows=126477 width=8)\n -> Index Scan using pgf_uid on profiles_gender_f pgf (cost=0.00..1575.79 rows=75664 width=8)\n -> Index Scan using iwantu_profiles_uid on iwantu_profiles p (cost=0.00..3.51 rows=1 width=19)\n -> Index Scan using ill_uid on iwantu_last_login ll (cost=0.00..3.62 rows=1 width=8)\n\n\n\n", "msg_date": "Thu, 7 Feb 2002 10:23:22 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "JOINs ... how I hate them ..." }, { "msg_contents": "On Thu, 2002-02-07 at 16:23, Marc G. Fournier wrote:\n> \n> \n> \n> The MergeJoin between poc/pgf will only return 12000 records, and since it\n> is a 1:1 relationship between each of those tables, there will *only* be\n> 12000 records pulled from profiles ... 
yet its doing a SeqScan through all\n> 485k records for each of those UIDs?\n> \n> This is after I've performed a VACUUM ANALYZE ...\n> \n> The final query itself is:\n> \n> SELECT p.uid, p.profiles_handle\n> FROM ( ( profiles_orientation_c poc JOIN profiles_gender_f pgf USING ( uid ) )> JOIN iwantu_profiles p USING (uid ) ) LEFT JOIN iwantu_last_login ll USING ( uid );\n> \n> Which explains as:\n> \n> Hash Join (cost=31636.40..78239.34 rows=75664 width=43)\n> -> Hash Join (cost=6023.92..47537.10 rows=75664 width=35)\n> -> Seq Scan on iwantu_profiles p (cost=0.00..35707.69 rows=485969 width=19)\n> -> Hash (cost=5834.76..5834.76 rows=75664 width=16)\n> -> Merge Join (cost=0.00..5834.76 rows=75664 width=16)\n> -> Index Scan using poc_uid on profiles_orientation_c poc (cost=0.00..2807.82 rows=126477 width=8)\n> -> Index Scan using pgf_uid on profiles_gender_f pgf (cost=0.00..1575.79 rows=75664 width=8)\n> -> Hash (cost=7955.64..7955.64 rows=485964 width=8)\n> -> Seq Scan on iwantu_last_login ll (cost=0.00..7955.64 rows=485964 width=8)\n> \n> EXPLAIN\n> \n> So, poc&pgf are MergeJoin's, leaving me with 12000 records again ... then\n> there is the SeqScan/HashJoin wiht profiles, which will leave me with\n> 12000 records, but with more information ... but, again, for each of\n> *those* 12000 records, its doing a SeqScan on last_login's 485k records,\n> instead of using the index ... 
again, like pgf and poc, there is only one\n> record for every uid, so we aren't dealing with duplicates ...\n\nI recently sped up a somewhat similar query from 15 sec to < 1 sec by\nrewriting it to use a subselect:\n\nSELECT p.uid, p.profiles_handle\n FROM profiles_orientation_c poc,\n profiles_gender_f pgf\n (select uid, profiles_handle\n from iwantu_profiles ip\n where ip.uid = pgf.uid\n ) p\n WHERE poc.uid = pgf.uid\n\nIf you need something from iwantu_last_login it should go into that\nsubselect as well\n\nThat tricked my case to do the small join first.\n\n-----------------\nHannu\n\n\n\n", "msg_date": "07 Feb 2002 21:07:52 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: JOINs ... how I hate them ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> explain SELECT p.uid, p.handle\n> FROM gender_f pgf JOIN profiles p ON (pgf.uid = p.uid) ;\n\n> Which explains out as:\n\n> Hash Join (cost=1354.80..45297.83 rows=75664 width=27)\n> -> Seq Scan on profiles p (cost=0.00..35707.69 rows=485969 width=19)\n> -> Hash (cost=1165.64..1165.64 rows=75664 width=8)\n> -> Seq Scan on gender_f pgf (cost=0.00..1165.64 rows=75664 width=8)\n\n> Now, profiles has uid as its primary KEY, and there are no\n> duplicates in gender_f ... so, as my HashJoin points out, I should have 75664\n> results returned ... that is expected ... and the SeqScan on gender_f is\n> expected ... but the SeqScan on profiles is what I would hope to get rid\n> of ...\n\nUm, why? Looks like a perfectly reasonable plan to me.\n\n> get uid from gender_f, find corresponding entry in profiles ...\n\nI'm not convinced that 75000 indexscan probes would be faster than a\nsequential scan across that table. 
You could probably force the issue\nwith \"set enable_hashjoin to off\" (and maybe also \"set enable_mergejoin\nto off\") and then see what the plan is and what the actual timing is.\n(EXPLAIN ANALYZE should be real helpful here.)\n\n> explain SELECT p.uid, p.handle\n> FROM ( orientation_c poc JOIN gender_f pgf USING ( uid ) ) JOIN profiles p ON (pgf.uid = p.uid) ;\n\n> Hash Join (cost=6023.92..47537.10 rows=75664 width=35)\n> -> Seq Scan on profiles p (cost=0.00..35707.69 rows=485969 width=19)\n> -> Hash (cost=5834.76..5834.76 rows=75664 width=16)\n> -> Merge Join (cost=0.00..5834.76 rows=75664 width=16)\n> -> Index Scan using poc_uid on orientation_c poc (cost=0.00..2807.82 rows=126477 width=8)\n> -> Index Scan using pgf_uid on gender_f pgf (cost=0.00..1575.79 rows=75664 width=8)\n\n> The MergeJoin between poc/pgf will only return 12000 records, and since it\n> is a 1:1 relationship between each of those tables, there will *only* be\n> 12000 records pulled from profiles ...\n\nHmm, it thinks that there will be 75664 not 12000 records out of that\njoin. Why the discrepancy? Could we see the pg_stats data for these\ntables?\n\n> ... but, again, for each of\n> *those* 12000 records, its doing a SeqScan on last_login's 485k records,\n> instead of using the index\n\nNo, certainly *not* \"for each record\". It's a hash join, so it only\nreads each table once.\n\n> Now, if I 'set enable_seqscan=false;' and do the exact same explain, it\n> definitely comes more in line with what I'd like to see, as far as index\n> usage is concerned:\n\nAnd what's the actual runtime come out to be?\n\nWe definitely should standardize on asking for EXPLAIN ANALYZE results\nin bad-plan discussions, now that we have that capability.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 18:32:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JOINs ... how I hate them ... " } ]
[ { "msg_contents": "Tom, do you have plans yet on how to store permissions granted on schemas?\n\nFor the almost-done permissions on functions and languages, I reuse the\naclitem arrays. Since these objects only have one kind of permission, it\nseems reasonable to overload the select/read permission bit for this.\n\nHowever, I imagine that schemas may have a different set of permissions,\nperhaps including CREATE and such, which might not fit into the aclitem.\nIn case you're inventing a whole new mechanism that needs to be\ncoordinated, let me know.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 7 Feb 2002 11:58:34 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Implementation details of schema permissions?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom, do you have plans yet on how to store permissions granted on schemas?\n\nHaven't thought about it very hard. I would like to reuse the existing\nACL support, of course. We might need to generalize it to allow\ndifferent sets of permission bits for different kinds of objects.\n\n[ thinks... ] AFAIR, the low-level ACL routines don't really know/care\nmuch about the meanings of the bits, except for the I/O converters which\nhave to be able to map bits to code letters. So parameterization seems\npretty feasible. We could use atttypmod to let the I/O converters know\nwhich code map applies to a particular ACL column, I think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 12:41:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Implementation details of schema permissions? " } ]
[ { "msg_contents": "In Rdb (for instance) you can edit the plan if you want. (Oracle too,\nIIRC -- but I never have edited a plan in Oracle)\n\nSure, it opens a big can of worms, but it would be nice for someone\ntechnically inclined to be able to fix a plan if they know better than\nthe SQL compiler did.\n", "msg_date": "Thu, 7 Feb 2002 11:26:55 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: JOINs ... how I hate them ..." } ]
[ { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> This strikes me as a fairly BAD idea because of the security\n>> implications of keeping these things in a world-accessible directory.\n\n> I assumed sysconfdir was _not_ going to be world-accessable. Does it\n> have to be?\n\nPeter mentioned /etc as a plausible value of sysconfdir. I don't think\nwe should assume that it is a postgresql-only directory. Moreover,\nthere is little point in making these files root-owned (as he also\nsuggested) if they live in a postgres-owned directory; yet unless they\ndo, we can't use restrictive directory permissions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 14:40:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Summary of new configuration file and data directory locations " } ]
[ { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am confused why we can't just make the directory be owned by\n> PostgreSQL super user with 700 permissions, like we do now with /data.\n\nWe could do it that way, but then the set of parameters Peter proposed\nis quite unreasonable; there should be exactly one, namely the name of\nthe config directory.\n\nNow that I think about it, that actually seems a pretty reasonable idea.\nJust allow all the hand-maintained config files to be placed in a\nseparate directory, which we treat just like $PGDATA as far as\npermissions go. If you want a backwards-compatible setup, you need only\nset the config directory equal to $PGDATA, and you're done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2002 14:49:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Summary of new configuration file and data directory locations " }, { "msg_contents": "Tom Lane writes:\n\n> Now that I think about it, that actually seems a pretty reasonable idea.\n> Just allow all the hand-maintained config files to be placed in a\n> separate directory, which we treat just like $PGDATA as far as\n> permissions go. If you want a backwards-compatible setup, you need only\n> set the config directory equal to $PGDATA, and you're done.\n\nBut that isn't going to satisfy anyone. The reason that people want this\nis so they can mix and match their configuration files between different\nservers. For instance, they might want to share the SSL key files between\nall instances. Or they might want different postgresql.conf files for\neach server but put them all into the same directory with different names.\nThis was a fairly clear request in the original thread.\n\nSo the premise is that in theory any file can live anywhere. And the\naccess permissions of a file are solely controlled by its own permission\nbits and ownership, not what directory it may live in. 
Ultimately, the\nformer way is more secure.\n\nI'm not 100% comfortable either with pg_hba.conf and pg_ident.conf\nworld-readable by default. The alternative is to install all\nconfiguration files 0600 by default. This will work painlessly for\nplain-pen installations where everything is owned and run by the same\nuser. If the installation and the server instance are owned by separate\nusers, then the installer will have to issue a few chmod/chown commands,\nbut he has to do that anyway right now before running initdb. I imagine\nsomething like this in the installation/setup procedure:\n\n\"\"\"\n1. Create a user account for the PostgreSQL server. This is the user the\n server will run as. [...]\n\n2. Make sure the user account you created in step 1 can read the\n configuration files. There are a few ways to make this happen:\n\n a. Make the configuration files world-readable. For the SSL\n certificate files and the secondary password files, you should not\n use this method. For other configuration files, this is safe, but\n if your authentication setup is insecure, everyone will be able to\n see that and exploit it easily. [Gratuitous comment about security\n through obscurity here]\n\n $ chmod a+r /usr/local/pgsql/etc/*\n\n b. Change the ownership of the files to the PostgreSQL user account.\n\n $ chown postgres /usr/local/pgsql/etc/*\n\n c. Create a \"postgres\" group, set the group ownership of the\n configuration files to that group, and make the files group\n readable.\n\n $ groupadd postgres\n $ usermod -G postgres postgres\n $ chgrp postgres /usr/local/pgsql/etc/*\n $ chmod g+r /usr/local/pgsql/etc/*\n\n This setup implies that the \"postgres\" user does not have the\n ability to write to these files, which may be considered desirable\n or annoying.\n\n3. Create a database installation with the \"initdb\" command. 
[Continue as\n usual]\n\"\"\"\n\nThis would give maximum flexibility to a variety of server setups and\nparanoia levels.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 8 Feb 2002 12:08:37 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Summary of new configuration file and data directory" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> So the premise is that in theory any file can live anywhere. And the\n> access permissions of a file are solely controlled by its own permission\n> bits and ownership, not what directory it may live in. Ultimately, the\n> former way is more secure.\n\n<<itch>> I guess my thoughts on this are colored by bad experience with\ntools that are sloppy about preserving ownership/permissions on edited\nfiles. (I can recall being burnt this way by both Emacs and HP's \"SAM\"\nadmin tool. Perhaps recent versions don't have those bugs anymore.)\nI am not at all convinced that \"the former way is more secure\" in\nreality, even if it's cleaner in theory.\n\nCan't we do both? If the default setup is to put config files in\na Postgres-specific directory, then let's make the default arrangement\nbe that that directory is Postgres-owned, mode 700, *and* the config\nfiles are Postgres-owned and mode 600. Anyone who wants to back off\nfrom that is welcome to take responsibility for any security holes\nthey've created.\n\n> 2. Make sure the user account you created in step 1 can read the\n> configuration files. There are a few ways to make this happen:\n\n> a. 
Make the configuration files world-readable.\n\nI'd prefer you not recommend that at all, and certainly not as the\nfirst alternative.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 12:28:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Summary of new configuration file and data directory locations " }, { "msg_contents": "Tom Lane writes:\n\n> Can't we do both? If the default setup is to put config files in\n> a Postgres-specific directory, then let's make the default arrangement\n> be that that directory is Postgres-owned, mode 700, *and* the config\n> files are Postgres-owned and mode 600.\n\nThe problem with this is that the PostgreSQL-specific configuration file\ndirectory may be used by programs other than the server. E.g., the ODBC\ndriver puts stuff in there. In some future life there may be a global\npsqlrc file or the JDBC driver may have a global properties file (don't\nknow if that just made sense). So we'd have to make a subdirectory, say\n\"server\" (or \"secure\" or \"secret\" ...). Seems a bit ugly.\n\nMoreover, I don't know if we can make permission changes on directories\nduring installation (make install). (Read \"can\" as: ought to, while\nstaying within the vague confines of open-source software build system\nstandards.) For all we know, the directory may already be there and the\ninstaller told us to reuse it.\n\nHow is the situation on the broken editors these days? 
We might just want\nto put a note on the top of each critical file\n\n# Make sure this file is not readable by anyone except you.\n# If you edit it, make sure your editor does not change the permissions on\n# this file.\n# If in doubt, execute chmod go-a filename.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 8 Feb 2002 18:26:54 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Summary of new configuration file and data directory" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Moreover, I don't know if we can make permission changes on directories\n> during installation (make install). (Read \"can\" as: ought to, while\n> staying within the vague confines of open-source software build system\n> standards.) For all we know, the directory may already be there and the\n> installer told us to reuse it.\n\nAgreed, we should probably not try to change ownership/permission of an\nexisting directory. However, if we have to make a directory, it seems\nsensible to me to make it Postgres-owned and mode 700.\n\nYour concept of keeping client-side and server-side config files in the\nsame directory makes me very unhappy; that's a recipe for permission\nproblems if I ever saw one.\n\n> How is the situation on the broken editors these days?\n\nWell, I'm still using Emacs 19.34, which is a tad old, but it's got a\nproblem with not preserving file GIDs reliably. Dunno if this is fixed\nin newer releases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 18:43:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Summary of new configuration file and data directory locations " }, { "msg_contents": "Hi guys,\n\nQuick question. 
I'm getting the feeling these changes will impact any\ncompany which would like to have one installation of the binaries on\ntheir servers, but which run several instances of these binaries from\nthe one location.\n\ni.e.\n\n/opt/pgsql/7.2 <-- installed program\n\n/data1 <-- data dir and config files\n/data2 <-- data dir and config files\n/data3 <-- data dir and config files\n<etc>\n\npg_ctl start -D /data1\npg_ctl start -D /data2\npg_ctl start -D /data3\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nTom Lane wrote:\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > So the premise is that in theory any file can live anywhere. And the\n> > access permissions of a file are solely controlled by its own permission\n> > bits and ownership, not what directory it may live in. Ultimately, the\n> > former way is more secure.\n> \n> <<itch>> I guess my thoughts on this are colored by bad experience with\n> tools that are sloppy about preserving ownership/permissions on edited\n> files. (I can recall being burnt this way by both Emacs and HP's \"SAM\"\n> admin tool. Perhaps recent versions don't have those bugs anymore.)\n> I am not at all convinced that \"the former way is more secure\" in\n> reality, even if it's cleaner in theory.\n> \n> Can't we do both? If the default setup is to put config files in\n> a Postgres-specific directory, then let's make the default arrangement\n> be that that directory is Postgres-owned, mode 700, *and* the config\n> files are Postgres-owned and mode 600. Anyone who wants to back off\n> from that is welcome to take responsibility for any security holes\n> they've created.\n> \n> > 2. Make sure the user account you created in step 1 can read the\n> > configuration files. There are a few ways to make this happen:\n> \n> > a. 
Make the configuration files world-readable.\n> \n> I'd prefer you not recommend that at all, and certainly not as the\n> first alternative.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 11 Feb 2002 11:48:14 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Summary of new configuration file and data directory " }, { "msg_contents": "\nDid we come to a conclusion on this?\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Can't we do both? If the default setup is to put config files in\n> > a Postgres-specific directory, then let's make the default arrangement\n> > be that that directory is Postgres-owned, mode 700, *and* the config\n> > files are Postgres-owned and mode 600.\n> \n> The problem with this is that the PostgreSQL-specific configuration file\n> directory may be used by programs other than the server. E.g., the ODBC\n> driver puts stuff in there. In some future life there may be a global\n> psqlrc file or the JDBC driver may have a global properties file (don't\n> know if that just made sense). So we'd have to make a subdirectory, say\n> \"server\" (or \"secure\" or \"secret\" ...). Seems a bit ugly.\n> \n> Moreover, I don't know if we can make permission changes on directories\n> during installation (make install). 
(Read \"can\" as: ought to, while\n> staying within the vague confines of open-source software build system\n> standards.) For all we know, the directory may already be there and the\n> installer told us to reuse it.\n> \n> How is the situation on the broken editors these days? We might just want\n> to put a note on the top of each critical file\n> \n> # Make sure this file is not readable by anyone except you.\n> # If you edit it, make sure your editor does not change the permissions on\n> # this file.\n> # If in doubt, execute chmod go-a filename.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Feb 2002 23:13:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Summary of new configuration file and data directory locations" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Did we come to a conclusion on this?\n> \n> I had hoped the people that originally demanded this functionality would\n> comment in this thread. Unless someone can provide a solution that keeps\n> the data in the configuration files safe without doing anything strange\n> during installation, the files will stay where they are.\n\nOK, is there any value in putting all the files in their own pgsql/etc\ndirectory or is the disturbance not worth it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 00:15:55 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Summary of new configuration file and data directory locations" }, { "msg_contents": "Bruce Momjian writes:\n\n> Did we come to a conclusion on this?\n\nI had hoped the people that originally demanded this functionality would\ncomment in this thread. Unless someone can provide a solution that keeps\nthe data in the configuration files safe without doing anything strange\nduring installation, the files will stay where they are.\n\n>\n> ---------------------------------------------------------------------------\n>\n> Peter Eisentraut wrote:\n> > Tom Lane writes:\n> >\n> > > Can't we do both? If the default setup is to put config files in\n> > > a Postgres-specific directory, then let's make the default arrangement\n> > > be that that directory is Postgres-owned, mode 700, *and* the config\n> > > files are Postgres-owned and mode 600.\n> >\n> > The problem with this is that the PostgreSQL-specific configuration file\n> > directory may be used by programs other than the server. E.g., the ODBC\n> > driver puts stuff in there. In some future life there may be a global\n> > psqlrc file or the JDBC driver may have a global properties file (don't\n> > know if that just made sense). So we'd have to make a subdirectory, say\n> > \"server\" (or \"secure\" or \"secret\" ...). Seems a bit ugly.\n> >\n> > Moreover, I don't know if we can make permission changes on directories\n> > during installation (make install). (Read \"can\" as: ought to, while\n> > staying within the vague confines of open-source software build system\n> > standards.) For all we know, the directory may already be there and the\n> > installer told us to reuse it.\n> >\n> > How is the situation on the broken editors these days? 
We might just want\n> > to put a note on the top of each critical file\n> >\n> > # Make sure this file is not readable by anyone except you.\n> > # If you edit it, make sure your editor does not change the permissions on\n> > # this file.\n> > # If in doubt, execute chmod go-a filename.\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net\n> >\n> >\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 22 Feb 2002 00:16:00 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Summary of new configuration file and data directory" } ]
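The "Postgres-owned, mode 700 directory with mode 600 files" arrangement discussed in the thread above can be sketched in a few lines. This is a hypothetical illustration in Python, not anything PostgreSQL's `initdb` or `make install` actually runs; the function names are mine, and the demo uses a throwaway temp file. Ownership changes (`os.chown`) usually require root, so only the mode bits are shown.

```python
import os
import stat
import tempfile

def lock_down(path):
    """Restrict a config file to owner read/write only (mode 0600),
    the arrangement suggested for server-side files in the thread."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

def is_world_or_group_readable(path):
    """True if group or other users can read the file -- the state a
    careless editor can silently reintroduce after a save."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

# Demonstrate on a throwaway file.
fd, demo = tempfile.mkstemp()
os.close(fd)
os.chmod(demo, 0o644)                        # world-readable to start with
assert is_world_or_group_readable(demo)
lock_down(demo)
assert not is_world_or_group_readable(demo)  # now mode 0600
os.remove(demo)
```

An editor that rewrites the file and loses these bits is exactly the failure mode the warning note in the thread is guarding against; re-running `lock_down()` after editing restores the intended state.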
[ { "msg_contents": "\nSomewhere after 7.2b2 (it looks like for 7.2b4) a change was made to\narray_out() in:\n\n src/backend/utils/adt/arrayfuncs.c,v 1.72 2001/11/29 21:02:41 tgl\n\n \"Fix array_out's failure to backslash backslashes, per bug# 524.\n Also, remove brain-dead rule that double quotes are needed if\n and only if the datatype is pass-by-reference; neither direction\n of the implication holds water. Instead, examine the actual data\n string to see if it contains any characters that force us to quote\n it. Add some documentation about quoting of array values, which was\n previously explained nowhere AFAICT.\"\n\nThe older code quoted any \"pass by ref\" types, ie varlena's as opposed to\nints or floats. Which was perhaps clumsy, but at least was predictable.\nThat is, if it was a char or text type you could assume it was quoted, eg\n\n { \"foo\", \"bar\", \"foo bar\", \"foo.bar\", \"foo,bar\" }.\n\nThe new code goes to the trouble of scanning the string for embedded\ncommas, curlys, backslashes and whitespace. If it finds some it arranges\nfor quotes, otherwise it suppresses the quotes anywhere it is possible\nnot to use them.\n\nThere is now no easy way for a client to know what the output will look\nlike as it now depends on the specific data values, eg: \n \n { foo, bar, \"foo bar\", foo.bar, \"foo,bar\", \"foo\\\"bar\" }\n\nThis pretty much breaks any client that does not have a scanner at least\nas intelligent as the array_in() scanner. Which includes most perl\nprograms, and pretty much all shell scripts using psql and arrays.\n\nSince Jdbc did not support arrays well until very recently, I suspect it\nbroke a few hand written java array scanners too. It did ours.\n\nFinally, I simply don't see the need for this change. There is no pressing\nworld shortage of double-quote characters, so there seems little reason to\nconserve them. 
Especially at the cost of breaking existing clients and\ncomplicating future client code (exercise: write a shell function to\nscan the new style of array result correctly).\n\nI view this change as a bug and would like to see it backed out. \n\n\n<rant>\n\nWe monitor the hackers list, test with most betas, read the release\nnotes carefully. And were completely blindsided when our production apps\nfell over after we upgraded from 7.2b2 to the release. Grrrr....\n\nSo right now we have reverted to our previous version and deferred our\nupgrade until we decide whether to patch out this change (and maintain\nanother patch) or to rewrite some of our client code (at least we have\nsource so we have this option, some don't). \n\nI would really like to hear that I have persuaded you that it is a bug\nand will be fixed in a future release.\n\nAlternatively that you plan not to slipstream client visible formatting\nchanges into late betas. Especially without any mention in the release\nnotes.\n\nI really wish one could upgrade from release to release of postgres without\na db reload, or rewriting client software, and especially not without\nboth.\n\nGrrrr....\n\nGrrrr....\n\n</rant>\n\nRegards\n\n-dg\n\n-- \nDavid Gould dg@nextbus.com (or davidg@dnai.com)\nCriminal: A person with predatory instincts who has not sufficient capital\nto form a corporation. - Howard Scott\n", "msg_date": "Thu, 7 Feb 2002 16:59:39 -0800", "msg_from": "David Gould <dg@nextbus.com>", "msg_from_op": true, "msg_subject": "7.2 - changed array_out() - quotes vs no quotes" }, { "msg_contents": "David Gould <dg@nextbus.com> writes:\n> Somewhere after 7.2b2 (it looks like for 7.2b4) a change was made to\n> array_out() in:\n\nI will take the blame for that.\n\n> I view this change as a bug and would like to see it backed out. 
\n\nThe old behavior was certainly broken and I will not accept a proposal\nto back out the change entirely.\n\nWhat you are really saying is that you'd prefer the choice of quotes or\nno quotes to be driven by the datatype rather than by the data value.\nThat's a legitimate gripe, but where shall we get the knowledge of whether\nthe datatype might sometimes emit strings that need quoting? Using the\npass-by-reference flag is *completely* wrong; the fact that it chanced\nnot to fail in your application does not make it less wrong.\n\nThe only way I could see to make the behavior totally predictable at\nthe datatype level (while not being broken) is to always quote every\narray element. However, that would likely break some other people's\noversimplified parsers, so I'm not convinced it's a net win. Perhaps\nyou should fix your application code rather than relying on a\nnever-documented behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 00:29:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 - changed array_out() - quotes vs no quotes " }, { "msg_contents": "On Fri, Feb 08, 2002 at 12:29:46AM -0500, Tom Lane wrote:\n> David Gould <dg@nextbus.com> writes:\n> > Somewhere after 7.2b2 (it looks like for 7.2b4) a change was made to\n> > array_out() in:\n> \n> I will take the blame for that.\n\nWell, it would have been nice if it had been in the release notes...\n\n> > I view this change as a bug and would like to see it backed out. \n> \n> The old behavior was certainly broken and I will not accept a proposal\n> to back out the change entirely.\n\nI am not defending the old behaviour, just the fact that it _was_\npredictable. It was simple to know which types are pass by ref and it is\nsimple to allow for it. 
I am not saying that pass by ref is the right\ncriteria, just that predictable is good.\n \n> What you are really saying is that you'd prefer the choice of quotes or\n> no quotes to be driven by the datatype rather than by the data value.\n\nYes. I think it is not excessive to insist that types have stable,\npredictable representations. The other types do, why should arrays be\neven more special?\n\n> That's a legitimate gripe, but where shall we get the knowledge of whether\n> the datatype might sometimes emit strings that need quoting? Using the\n\nHmmm, maybe the type designers should deal with this. And if they don't,\nwell quote em.\n\nOr, you don't even need the quotes, you could just promise never\nto insert white space and to always escape embedded commas and curlys.\nSo a dumb client could simply split on un-escaped commas and be done.\nI could live with that as long as it never changed again, at least without\nsome warning.\n\n> pass-by-reference flag is *completely* wrong; the fact that it chanced\n> not to fail in your application does not make it less wrong.\n\n\"chanced not to fail\". Nice. \n\n> The only way I could see to make the behavior totally predictable at\n> the datatype level (while not being broken) is to always quote every\n> array element. However, that would likely break some other people's\n\nFine with me. That is what it did before.\n\n> oversimplified parsers, so I'm not convinced it's a net win. Perhaps\n> you should fix your application code rather than relying on a\n> never-documented behavior.\n\n\"relying on a never-documented behavior\" ..., so it is now my fault there\nis no documentation? Heh.\n\nBesides, this isn't even strictly true, in the html documentation\n(doc/arrays.html \"chapter 6\") that ships with 7.2 there is a nice example:\n\n SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill';\n\n schedule\n --------------------\n {{\"meeting\"},{\"\"}}\n (1 row)\n\nwhich shows quotes around the strings. 
Likewise, the doc quotes strings in\nevery example of an insert. Further, it says:\n\n \"Observe that to write an array value, we enclose the element values\n within curly braces and separate them by commas. If you know C, this\n is not unlike the syntax for initializing structures.\"\n\nWhich suggests that the syntax should be similar to C. Which requires\nstrings to be quoted.\n\nAnd, since the \"undocumented behaviour\" has been there since the dawn\nof time, or at least since:\n\n src/backend/utils/adt/arrayfuncs.c\n Revision 1.1, Tue Jul 9 06:22:03 1996 UTC (5 years, 7 months ago) by scrappy\n\nit is likely that _not_ _changing_ _it_ would not break any of the last\nsix years worth of existing \"other people's oversimplified parsers\".\n\n... breathes ...\n\nI don't mean to be rude about this, I have a lot of respect for postgres,\nall of the postgres developers and the overall process.\n\nBut to slip a client visible change late in a beta cycle to a specific\nformat that has been stable since UC Berkeley freed the code, and then\nto suggest that is just some silly user relying on\n\"never-documented behavior\" is almost comical.\n\nSeriously, one point of a database is to insulate client applications\nfrom the exact representation and layout of the data. Which is not\naccomplished by making arbitrary changes to simple things like strings\nthat make them take yards and yards of code to parse.\n\nI think a clearly explained rule about what types are quoted (and either\nquoted or not quoted, not sometimes this and sometimes that) would be\na nice addition to the documentation, especially if the code then\nwas made to match it. And if it was in the release notes.\n\nBut, as it stands, I still see a bug.\n\n-dg\n\n\n-- \nDavid Gould dg@nextbus.com (or davidg@dnai.com)\n'Some people, when confronted with a problem, think \"I know, I'll use\n regular expressions\". Now they have two problems.' 
-- Jamie Zawinski\n", "msg_date": "Fri, 8 Feb 2002 00:02:36 -0800", "msg_from": "David Gould <dg@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: 7.2 - changed array_out() - quotes vs no quotes" }, { "msg_contents": "David Gould <dg@nextbus.com> writes:\n> Yes. I think it is not excessive to insist that types have stable,\n> predicatable representations. The other types do, why should arrays be\n> even more special?\n\nThe representation is stable and predictable. You're simply hoping to\navoid building smarts into your parser for it. Unfortunately, some\ndegree of smarts are *necessary* if you are going to deal with array\nitems containing arbitrary text. I can hardly believe that a client\nprogram that can deal with backslash-escapes is going to have trouble\nremoving quotes.\n\n> Or, you don't even need the quotes, you could just promise never\n> to insert white space and to always escape embedded commas and curlys.\n\nNo, we can't, because that would break applications that rely on the\nexisting rules for array input: leading whitespace is insignificant\nunless quoted. Besides, weren't you complaining because the quotes\ndisappeared? The above variant would still break your code.\n\n> So a dumb client could simply split on un-escaped commas and be done.\n\nI hardly think that a client that can tell the difference between an\nescaped comma and an un-escaped one qualifies as \"dumb\".\n\nWe could perhaps dispense with quotes on output if we escaped leading\nspaces. For example, instead of\n\t\" foo\"\nemit\n\t\\ foo\nI don't think this is a step forward in readability, though. 
And\nincreased reliance on backslashes instead of double quotes won't really\nmake anyone's life easier --- for example, you'd have to remember to\ndouble them when sending the same value back to the SQL parser.\n\n>> The only way I could see to make the behavior totally predictable at\n>> the datatype level (while not being broken) is to always quote every\n>> array element.\n\n> Fine with me. That is what it did before.\n\nNo, it has never done that. In particular, I do not wish to change the\nlongstanding no-quotes behavior for arrays of integers. That *would*\nbreak other people's code. (One of the things I hoped to accomplish\nwith this change is to extend the same no-quotes behavior to floats and\nnumerics.)\n\n> But to slip a client visible change late in a beta cycle to a specific\n> format that has been stable since UC Berkeley freed the code,\n\nIt's been broken since Berkeley, too; the fact that no one complained\ntill a month or two ago just indicates how little arrays are used, IMHO.\nI doubt you'd be any less annoyed no matter when in the development\ncycle we'd done this.\n\nI do agree that it'd be better if this had been called out in the\nrelease notes. We don't currently have any process for ensuring that\nminor incompatibilities get noted in the release notes. Bruce makes up\nthe notes based on scanning the CVS logs after the fact, and if he\nmisses the significance of an entry, it's missed. Maybe we can do\nbetter than that --- adding an entry to a release-notes-to-be file when\nthe change is made might be more reliable.\n\nIt's also true that the SGML documentation is sadly deficient on this\npoint; but then, its discussion of arrays is overly terse in just about\nevery respect. Someone want to volunteer to expand it?\n\n> Seriously, one point of a database is to insulate client applications\n> from the exact representation and layout of the data. 
Which is not\n> accomplished by making arbitrary changes to simple things like strings\n> that make them take a yards and yards of code to parse.\n\nProperly parsing arrays of text values is going to require dealing with\nbackslash-escapes in any case; seems to me that that's what will take\n\"yards and yards\" of code. Stripping off optional quotes is trivial\nby comparison. On the other hand, parsing arrays of integers is pretty\ntrivial since you know there are no escapable characters anywhere.\nI don't favor pushing complexity out of the one case and into the other.\n\nI'm willing to consider the output-no-quotes-at-all approach if people\nthink that's a superior solution. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 10:39:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 - changed array_out() - quotes vs no quotes " }, { "msg_contents": "Tom Lane wrote:\n<snip>\n> \n> I'm willing to consider the output-no-quotes-at-all approach if people\n> think that's a superior solution. Comments anyone?\n\nKeeping the quotes seems better, purely because of readability.\n\ni.e. \" xyzzy \" is better than ,\\ xyzzy\\ , or similar.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 09 Feb 2002 03:34:32 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: 7.2 - changed array_out() - quotes vs no quotes" }, { "msg_contents": "\n\nThe issue is that changing a user interface will break people's\ncode. It is clear that a user interface change was made and\nit is also clear that it broke people's code. In any organization\nthat is a bug.\n\nI did this job of gating server user interfaces for commercial postgres\nbased systems for several years. It is not always a nice job. The\nneed to carefully ensure that interfaces were (almost) always backward\ncompatible was given, as was the necessity of highlighting any unavoidable\nchanges in interfaces so that the users would be impacted in the smallest\npossible way.\n\nIn this case, the container aspect of the datatype requires more studied\nparsing than most others with or without the changes. Also, the array\nhandling routines in jdbc were only completed in 7.2, so we can assume\nthat java users had to do their own parsing of the representation of\nthis container type.\n\nThe counter argument that there were no complaints\nis not particularly relevant since few people have been using the\nbeta versions and have been waiting for 7.2 to stabilize since it requires\na significant effort to do the initdb. It is safe to say that the\nmasses will be a bit slower on the adoption of an initdb release than\nothers.\n\nYou all have been stellar at supporting postgresql and responding to users\nin very helpful and immediate ways. We have benefited directly from that.\nKeep up the good work by not changing user interfaces for anything other\nthan unavoidable circumstances.\n\nelein\n\nTom Lane wrote:\n\n> David Gould <dg@nextbus.com> writes:\n> \n>>Yes. I think it is not excessive to insist that types have stable,\n>>predicatable representations. 
The other types do, why should arrays be\n>>even more special?\n>>\n> \n> The representation is stable and predictable. You're simply hoping to\n> avoid building smarts into your parser for it. Unfortunately, some\n> degree of smarts are *necessary* if you are going to deal with array\n> items containing arbitrary text. I can hardly believe that a client\n> program that can deal with backslash-escapes is going to have trouble\n> removing quotes.\n> \n> \n>>Or, you don't even need the quotes, you could just promise never\n>>to insert white space and to always escape embedded commas and curlys.\n>>\n> \n> No, we can't, because that would break applications that rely on the\n> existing rules for array input: leading whitespace is insignificant\n> unless quoted. Besides, weren't you complaining because the quotes\n> disappeared? The above variant would still break your code.\n> \n> \n>>So a dumb client could simply split on un-escaped commas and be done.\n>>\n> \n> I hardly think that a client that can tell the difference between an\n> escaped comma and an un-escaped one qualifies as \"dumb\".\n> \n> We could perhaps dispense with quotes on output if we escaped leading\n> spaces. For example, instead of\n> \t\" foo\"\n> emit\n> \t\\ foo\n> I don't think this is a step forward in readability, though. And\n> increased reliance on backslashes instead of double quotes won't really\n> make anyone's life easier --- for example, you'd have to remember to\n> double them when sending the same value back to the SQL parser.\n> \n> \n>>>The only way I could see to make the behavior totally predictable at\n>>>the datatype level (while not being broken) is to always quote every\n>>>array element.\n>>>\n> \n>>Fine with me. That is what it did before.\n>>\n> \n> No, it has never done that. In particular, I do not wish to change the\n> longstanding no-quotes behavior for arrays of integers. That *would*\n> break other people's code. 
(One of the things I hoped to accomplish\n> with this change is to extend the same no-quotes behavior to floats and\n> numerics.)\n> \n> \n>>But to slip a client visible change late in a beta cycle to a specific\n>>format that has been stable since UC Berkeley freed the code,\n>>\n> \n> It's been broken since Berkeley, too; the fact that no one complained\n> till a month or two ago just indicates how little arrays are used, IMHO.\n> I doubt you'd be any less annoyed no matter when in the development\n> cycle we'd done this.\n> \n> I do agree that it'd be better if this had been called out in the\n> release notes. We don't currently have any process for ensuring that\n> minor incompatibilities get noted in the release notes. Bruce makes up\n> the notes based on scanning the CVS logs after the fact, and if he\n> misses the significance of an entry, it's missed. Maybe we can do\n> better than that --- adding an entry to a release-notes-to-be file when\n> the change is made might be more reliable.\n> \n> It's also true that the SGML documentation is sadly deficient on this\n> point; but then, its discussion of arrays is overly terse in just about\n> every respect. Someone want to volunteer to expand it?\n> \n> \n>>Seriously, one point of a database is to insulate client applications\n>>from the exact representation and layout of the data. Which is not\n>>accomplished by making arbitrary changes to simple things like strings\n>>that make them take a yards and yards of code to parse.\n>>\n> \n> Properly parsing arrays of text values is going to require dealing with\n> backslash-escapes in any case; seems to me that that's what will take\n> \"yards and yards\" of code. Stripping off optional quotes is trivial\n> by comparison. 
On the other hand, parsing arrays of integers is pretty\n> trivial since you know there are no escapable characters anywhere.\n> I don't favor pushing complexity out of the one case and into the other.\n> \n> I'm willing to consider the output-no-quotes-at-all approach if people\n> think that's a superior solution. Comments anyone?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n\n-- \n--------------------------------------------------------\nelein@nextbus.com \n(510)420-3120 \nwww.nextbus.com\n\tspinning to infinity, hallelujah\n--------------------------------------------------------\n\n", "msg_date": "Fri, 08 Feb 2002 12:02:41 -0800", "msg_from": "Elein <elein@nextbus.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 - changed array_out() - quotes vs no quotes" } ]
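David's parenthetical exercise above ("write a shell function to scan the new style of array result correctly") is worth making concrete. Below is a minimal scanner sketch, in Python rather than shell for readability; the function name `parse_pg_array` is mine, and it deliberately handles only one-dimensional arrays (no nested braces), which is all the examples in this thread use. It accepts both the old always-quote-varlena output and the new quote-only-when-needed output.

```python
def parse_pg_array(text):
    """Parse a one-dimensional PostgreSQL array literal, e.g.
    '{foo,bar,"foo bar","foo,bar"}', into a list of strings.

    Elements may be quoted or unquoted; backslash escapes are honored
    in both forms, and whitespace around unquoted elements is
    insignificant, matching the rules discussed in the thread.
    """
    if not (text.startswith("{") and text.endswith("}")):
        raise ValueError("not an array literal: %r" % text)
    body = text[1:-1]
    if not body.strip():
        return []                                # the empty array '{}'
    items, i, n = [], 0, len(body)
    while True:
        while i < n and body[i].isspace():       # skip leading whitespace
            i += 1
        buf = []
        if i < n and body[i] == '"':             # quoted element
            i += 1
            while i < n and body[i] != '"':
                if body[i] == "\\" and i + 1 < n:   # escape inside quotes
                    i += 1
                buf.append(body[i])
                i += 1
            i += 1                               # step over closing quote
            while i < n and body[i].isspace():
                i += 1
        else:                                    # unquoted element
            while i < n and body[i] != ",":
                if body[i] == "\\" and i + 1 < n:   # escape outside quotes
                    i += 1
                buf.append(body[i])
                i += 1
            while buf and buf[-1].isspace():     # trailing space is noise
                buf.pop()
        items.append("".join(buf))
        if i >= n:
            return items
        i += 1                                   # step over the comma
```

For example, `parse_pg_array('{foo,"foo bar",foo.bar,"foo,bar"}')` returns `['foo', 'foo bar', 'foo.bar', 'foo,bar']` — the same result whether or not the server chose to quote each element.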
[ { "msg_contents": " From 7.1.3 -> 7.2, heap_fetch changed from:\n\nextern void heap_fetch(Relation relation, Snapshot snapshot, \\\n\tHeapTuple tup, Buffer *userbuf);\n\nto\n\nextern void heap_fetch(Relation relation, Snapshot snapshot, \\\nHeapTuple tup, Buffer *userbuf, IndexScanDesc iscan);\n\nCan any of the core people give me a quick 101 on what the IndexScanDesc\nvar is now used for / how to reference it / where to learn more about it\nin the code?\n\nThanks all.\n\n- Brandon\n\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Thu, 7 Feb 2002 20:51:34 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "heap_fetch" } ]
[ { "msg_contents": "With the replication code that's being worked on, pthreads are used a\ngood deal. Do I recall that it's bad juju to use pthreads (in\nthte non-portable sense)?\n\nThanks again.\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Thu, 7 Feb 2002 21:13:35 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "A few more questions.." } ]
[ { "msg_contents": "Hmmm...some good comments, some bad comments, some offensive comments and a\nlot of FUD from MySQL zealots who don't know any better:\n\nhttp://slashdot.org/article.pl?sid=02/02/07/0212218&mode=thread\n\nChris\n\n", "msg_date": "Fri, 8 Feb 2002 10:22:42 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.2 on SlashDot" }, { "msg_contents": "Christopher Kings-Lynne writes:\n > Hmmm...some good comments, some bad comments, some offensive\n > comments and a lot of FUD from MySQL zealots who don't know any\n\nIt's a shame the website doesn't have the full info on it - i.e. the\n7.2 documentation.\n\nLee.\n", "msg_date": "Fri, 8 Feb 2002 09:44:58 +0000", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "PostgreSQL 7.2 on SlashDot" }, { "msg_contents": "On Fri, 8 Feb 2002, Lee Kindness wrote:\n\n> Christopher Kings-Lynne writes:\n> > Hmmm...some good comments, some bad comments, some offensive\n> > comments and a lot of FUD from MySQL zealots who don't know any\n>\n> It's a shame the website doesn't have the full info on it - i.e. the\n> 7.2 documentation.\n\nConsidering I just got the go-ahead on the docs last nite, they'll be\nthere today. 
I'd rather the docs were delayed a few days than to put\nsomething up that wasn't complete.\n\nWhat's really a shame is that you didn't ASK why it wasn't there!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 8 Feb 2002 06:19:33 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on SlashDot" }, { "msg_contents": "Vince,\n\nVince Vielhaber writes:\n > On Fri, 8 Feb 2002, Lee Kindness wrote:\n > > It's a shame the website doesn't have the full info on it - i.e. the\n > > 7.2 documentation.\n > Considering I just got the go-ahead on the docs last nite, they'll be\n > there today. I'd rather the docs were delayed a few days than to put\n > something up that wasn't complete.\n > What's really a shame is that you didn't ASK why it wasn't there!\n\nSorry, I wasn't meaning anything personal! However I'm sure you agree\nthat with a project like PostgreSQL the website is a core aspect.\n\nUndoubtedly the postgresql.org mirrors got their highest traffic\nlevels for ages - people looking for 7.2 information and not finding\nit (well it's there, but in the tarballs). 
us.postgresql.org certainly\nhad a heavy load - the interactive docs couldn't connect to the\ndatabase.\n\nPerhaps the 7.2 announcement should have been held back until the\nwebsite was up-to-date?\n\nThanks, Lee.\n", "msg_date": "Fri, 8 Feb 2002 11:31:59 +0000", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on SlashDot" }, { "msg_contents": "On Fri, 8 Feb 2002, Lee Kindness wrote:\n\n> Vince,\n>\n> Vince Vielhaber writes:\n> > On Fri, 8 Feb 2002, Lee Kindness wrote:\n> > > It's a shame the website doesn't have the full info on it - i.e. the\n> > > 7.2 documentation.\n> > Considering I just got the go-ahead on the docs last nite, they'll be\n> > there today. I'd rather the docs were delayed a few days than to put\n> > something up that wasn't complete.\n> > What's really a shame is that you didn't ASK why it wasn't there!\n>\n> Sorry, I wasn't meaning anything personal! However I'm sure you agree\n> that with a project like PostgreSQL the website is a core aspect.\n>\n> Undoubtedly the postgresql.org mirrors got their highest traffic\n> levels for ages - people looking for 7.2 information and not finding\n> it (well it's there, but in the tarballs). us.postgresql.org certainly\n> had a heavy load - the interactive docs couldn't connect to the\n> database.\n\nJust tried it and it worked fine. The interactive docs aren't on the\nus mirror anyway. The link points back to the main site. 
But the 7.2\ndocs won't be available interactively until they're finalized and a note\nto that effect is under the search box on the idocs site.\n\n> Perhaps the 7.2 announcement should have been held back until the\n> website was up-to-date?\n\nnot my department.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 8 Feb 2002 06:54:59 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on SlashDot" }, { "msg_contents": "Vince Vielhaber writes:\n\n> Considering I just got the go-ahead on the docs last nite, they'll be\n> there today. I'd rather the docs were delayed a few days than to put\n> something up that wasn't complete.\n\nVince, the docs were finalized when the release came out. Whatever was in\nthe release tarball is the final documentation.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 8 Feb 2002 11:42:36 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on SlashDot" }, { "msg_contents": "On Fri, 8 Feb 2002, Peter Eisentraut wrote:\n\n> Vince Vielhaber writes:\n>\n> > Considering I just got the go-ahead on the docs last nite, they'll be\n> > there today. I'd rather the docs were delayed a few days than to put\n> > something up that wasn't complete.\n>\n> Vince, the docs were finalized when the release came out. Whatever was in\n> the release tarball is the final documentation.\n\nThat's what I incorrectly assumed when 7.1 came out. What was released\nstill needed some cleanup. 
So until I know for a fact that the docs are\nin good enough shape to be put on the website, they're not put there.\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 8 Feb 2002 11:45:22 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on SlashDot" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n>> Vince, the docs were finalized when the release came out. Whatever was in\n>> the release tarball is the final documentation.\n\n> That's what I incorrectly assumed when 7.1 came out. What was released\n> still needed some cleanup. So until I know for a fact that the docs are\n> in good enough shape to be put on the website, they're not put there.\n\nThat seems overly strict. Why not put up what you have ASAP? If they\nneed to be updated later, then replace 'em ... but better to have\nnot-quite-final docs than no docs.\n\nOr at least put up a link to the development docs saying not-quite-final\ndocs can be found _over_here_. The problem isn't that there are no docs\navailable, it's that there's no link in the place that non-developers\nwould expect to look.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 13:09:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on SlashDot " }, { "msg_contents": "On Fri, 8 Feb 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> >> Vince, the docs were finalized when the release came out. 
Whatever was in\n> >> the release tarball is the final documentation.\n>\n> > That's what I incorrectly assumed when 7.1 came out. What was released\n> > still needed some cleanup. So until I know for a fact that the docs are\n> > in good enough shape to be put on the website, they're not put there.\n>\n> That seems overly strict. Why not put up what you have ASAP? If they\n> need to be updated later, then replace 'em ... but better to have\n> not-quite-final docs than no docs.\n\nWell lessee. Until Peter's comment earlier, there has been no mention\nabout the docs being final or even close. There was no doc freeze. The\nlast comments I recall seeing about them at all were in early December.\nAnd from that I'm to assume that they're in good enough shape? One thing\nI learned in the last five years here, never assume anything. But at this\npoint it's irrelevant since I don't intend for it to be my problem when\n7.3 is released.\n\n> Or at least put up a link to the development docs saying not-quite-final\n> docs can be found _over_here_. The problem isn't that there are no docs\n> available, it's that there's no link in the place that non-developers\n> would expect to look.\n\nNow that would be an unrecoverable mistake. Any implications at all that\npoint to the development docs as being release quality will lead people to\nexpect the development docs to always be at release quality which is\nsimply untrue. It will invite linking to them and the problems that are\nassociated with that. Remember? 
That's one of the reasons I separated\nthe two sites.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 8 Feb 2002 13:51:19 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on SlashDot " }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Fri, 8 Feb 2002, Tom Lane wrote:\n>> Or at least put up a link to the development docs saying not-quite-final\n>> docs can be found _over_here_. The problem isn't that there are no docs\n>> available, it's that there's no link in the place that non-developers\n>> would expect to look.\n\n> Now that would be an unrecoverable mistake. Any implications at all that\n> point to the development docs as being release quality will lead people to\n> expect the development docs to always be at release quality which is\n> simply untrue. It will invite linking to them and the problems that are\n> associated with that. Remember? That's one of the reasons I separated\n> the two sites.\n\nNo, no, I'm not suggesting a *permanent* link from the user docs page\nover to devel docs. I'm suggesting that during this interval when we\nhave a release out but no frozen docs, we should have a link from the\nuser docs page saying \"not-quite-frozen docs for 7.2 are over there\".\nWhen you install final docs you replace that link with 'em.\n\nIf the current devel docs are not pretty damn usable at this stage,\nwe have big problems ;-). The fact that they may not be quite the final\nPDFs or whatever doesn't bother me. 
The fact that people can't find\nthem in the place they'd expect to look does bother me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 14:01:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2 on SlashDot " } ]
[ { "msg_contents": "We all understand the reasons why one MUST dump and restore to upgrade to 7.2,\nbut I'd like to make a general call to arms that this (or 7.3) should be the\nlast release to require this.\n\nIt doesn't look good. One should be able to upgrade in place, and even revert\nto an older version, if necessary.\n\nWhat do you all think?\n", "msg_date": "Thu, 07 Feb 2002 21:56:22 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Why dump/restore to upgrade?" }, { "msg_contents": "At 9:56 PM -0500 2/7/02, mlw wrote:\n>We all understand the reasons why one MUST dump and restore to upgrade to 7.2,\n>but I'd like to make a general call to arms that this (or 7.3) should be the\n>last release to require this.\n>\n>It doesn't look good. One should be able to upgrade in place, and even revert\n>to an older version, if necessary.\n>\n>What do you all think?\n\n\nAt the very least, wouldn't it be easy for pg to automate the existing process?\nSay, dump/restore using textfiles in /tmp, and maybe even spit out a backup copy of the old database as a big tarball?\n\nIMHO, most of the complaints about PostgreSQL revolve around initial setup and initial ease of use. Sure, none of those issues matter once you're used to the system, but they wouldn't be difficult to fix either.\n\n-pmb\n\n\n", "msg_date": "Thu, 7 Feb 2002 19:32:59 -0800", "msg_from": "Peter Bierman <bierman@apple.com>", "msg_from_op": false, "msg_subject": "Re: Why dump/restore to upgrade?" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> but I'd like to make a general call to arms that this (or 7.3) should be the\n> last release to require this.\n\nWe will never make such a commitment, at least not in the foreseeable\nfuture.\n\nI would like to see more attention paid to supporting cross-version\nupgrades via pg_upgrade (or some improved version thereof) when\npractical, which it should be more often than not. 
But to bind\nourselves forever to the current on-disk format is sheer folly.\nAnd if you have to convert the datafile format then you might as\nwell dump and reload.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 00:44:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why dump/restore to upgrade? " }, { "msg_contents": "On Friday 08 February 2002 12:44 am, Tom Lane wrote:\n> mlw <markw@mohawksoft.com> writes:\n> > but I'd like to make a general call to arms that this (or 7.3) should be\n> > the last release to require this.\n\n> We will never make such a commitment, at least not in the foreseeable\n> future.\n\nWhy?\n\n> But to bind\n> ourselves forever to the current on-disk format is sheer folly.\n\nCertainly true. But upgradability!=bound-to-the-same-format.\n\n> And if you have to convert the datafile format then you might as\n> well dump and reload.\n\nNo. Dump - reload is folly. And it doesn't always work. And that's the \nproblem.\n\nI've fought this problem a long time. Too long of a time. And it is a \nproblem. Unfortunately, it is a problem that is going to require some deep \nthought and hackery.\n\nI believe it should be this simple for our users:\n\n1.) Postmaster starts, finds old files. It's OK with that.\n\n2.) A connection starts a postgres backend. When the backend starts, it sees \nthe old format tree and adjusts to it as best it can -- if this means fewer \nfeatures, well, it'll just have to get over it. Send a warning down the \nconnection that it is in reduced functionality mode or some such.\n\n3.) An SQL command could then be issued down the connection that would, in a \ntransaction-safe manner, convert the data on the disk into the newest format. \nUntil the magic bullet upgrade command is sent, the backend operates in a \nreduced functionality mode -- maybe even read-only if necessary. 
\n\nIn this mode, a safer pg_dump can be executed -- how many times now have we \ntold people to use a newer pg_dump to dump an old version database? Just \nhaving read-only access to the old data through the new backend would in \nreality be much better than the fiasco we have now -- then we can safely \npg_dump the data, stop postmaster, initdb, start postmaster, and reload the \ndata.\n\nIf the conversion program is large enough to cause worry about backend bloat, \nthen make it standalone and not let postmaster start up on old data -- \npg_upgrade on steroids.\n\nNo, this isn't a core feature. Yes, there are features that are better uses \nof developer time. Sure, there is a partially working pg_upgrade utility, \nfor which I thank Bruce for weathering it out upon. \n\nBUT OUR UPGRADE PROCESS STINKS. \n\nSorry for yelling. But this touches a raw nerve for me. My apologies if I \nhave offended anyone -- PostgreSQL is just too good an RDBMS to suffer from \nthis problem. The developers here have put too much hard work of high \nquality for anyone to disparage PostgreSQL due to the lack of good solid \nupgrading. And I'm not upset at any one person -- it's the program; the \nprocess; and the users that rely on our code which cause me to be this \nvehement on this subject.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 8 Feb 2002 01:35:40 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Why dump/restore to upgrade?" }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > but I'd like to make a general call to arms that this (or 7.3) should be the\n> > last release to require this.\n> \n> We will never make such a commitment, at least not in the foreseeable\n> future.\n\nHere's the problem. If you have a database that is in service, you can not\nupgrade postgres on that machine without taking it out of service for the\nduration of a backup/restore. 
A small database is not a big deal, a large\ndatabase is a problem. A system could be out of service for hours.\n\nFor a mission critical installation, this is really unacceptable.\n\n> \n> I would like to see more attention paid to supporting cross-version\n> upgrades via pg_upgrade (or some improved version thereof) when\n> practical, which it should be more often than not. But to bind\n> ourselves forever to the current on-disk format is sheer folly.\n> And if you have to convert the datafile format then you might as\n> well dump and reload.\n\nThe backup/restore to upgrade will be a deal breaker for many installations. If\nyou want more people using PostgreSQL, you need to accept that this is a very\nreal problem, and one which should be addressed as an unacceptable behavior.\n\nI don't want to say \"Other databases do it, why can't PostgreSQL\" because that\nisn't the point. Databases can be HUGE, pg_dumpall can take an hour or more to\nrun. Then, it takes longer to restore because indexes have to be recreated.\n", "msg_date": "Fri, 08 Feb 2002 07:29:49 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Why dump/restore to upgrade?" }, { "msg_contents": "On Fri, 8 Feb 2002, mlw wrote:\n\n> > We will never make such a commitment, at least not in the foreseeable\n> > future.\n>\n> Here's the problem. If you have a database that is in service, you can\n> not upgrade postgres on that machine without taking it out of service\n> for the duration of a backup/restore. A small database is not a big\n> deal, a large database is a problem. A system could be out of service\n> for hours.\n>\n> For a mission critical installation, this is really unacceptable.\n\nCheap solution: replication. Bring up a slave with the\nnew version. Do the initial sync. 
Wait for a quiet\ntime, and switch over (possibly still replicating to\nthe old one so you can revert the upgrade).\n\nThe overhead would be fairly high, but it's a cheap way\nto get the required properties, and is easy to revert.\n\nMatthew.\n\n", "msg_date": "Fri, 8 Feb 2002 13:25:59 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: Why dump/restore to upgrade?" }, { "msg_contents": "Hi everyone,\n\nmlw wrote:\n> \n> Tom Lane wrote:\n<snip> \n> For a mission critical installation, this is really unacceptable.\n> \n> >\n> > I would like to see more attention paid to supporting cross-version\n> > upgrades via pg_upgrade (or some improved version thereof) when\n> > practical, which it should be more often than not. But to bind\n> > ourselves forever to the current on-disk format is sheer folly.\n> > And if you have to convert the datafile format then you might as\n> > well dump and reload.\n> \n> The backup/restore to upgrade will be a deal breaker for many installations. If\n> you want more people using PostgreSQL, you need to accept that this is a very\n> real problem, and one which should be addressed as an unacceptable behavior.\n> \n> I don't want to say \"Other databases do it, why can't PostgreSQL\" because that\n> isn't the point. Databases can be HUGE, pg_dumpall can take an hour or more to\n> run. Then, it takes longer to restore because indexes have to be recreated.\n\nHere's a thought for a method which doesn't yet exist, but as we're\nabout to start into the next version of PostgreSQL it might be worth\nconsidering.\n\na) Do a pg_dump of the old-version database in question. 
It takes note\nof the latest transaction numbers in progress.\n\nb) On a separate machine, start doing a restore of the data, into the\nnew version database.\n\nc) Take the old-version database out-of-production, so no new\ntransactions are done on it.\n\nd) Run a yet-to-be-created utility which then takes a look at the\ndifference between the two, and updates the new-version database with\nthe entries which have changed since the first snapshot.\n\ne) Put the new-version database into production, assuming the\napplications hitting it have already been tested for compatibility with\nthe new version.\n\nYou could skip step c) (taking the old database offline) and do instead\nthe synchronisation step with it online if there has been a large\ntimeframe of changes, and THEN (once the differences are minimal) take\nthe old-version database offline and do another synchronisation for the\nremaining differences.\n\nHowever, I get the feeling a lot of this kind of thing is very similar\nto some of the approaches to replication already around or being\ndeveloped.\n\nIt might be simpler to make the old-version database a master replica,\nmake the new-version database a slave replica of it, then once they're\nin sync cut over to the new system (again assuming the applications\nusing it have been tested for compatibility with the new version).\n\nThe theory sounds alright, but in practice it might not be that easy. \nWe can live in hope however. :)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 09 Feb 2002 00:29:58 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Why dump/restore to upgrade?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> I don't want to say \"Other databases do it, why can't PostgreSQL\" because\n> that isn't the point. Databases can be HUGE, pg_dumpall can take an hour or\n> more to run. Then, it takes longer to restore because indexes have to be\n> recreated.\n\nA way to put PostgreSQL into a read only mode could help this (as well as \nmany other things.) That way they could at least let users have access to \nthe data the entire time that the dump restore process is going. It could \ntake a lot of disk space, but it would help the case where time is a bigger \nproblem than space.\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE8Y/oc8BXvT14W9HARAiGgAJ9ckgaq2YHupc/1Wl6RlRNJ5UulNwCeNWrt\nZrVN4qo+L9nDckpIPQ80MYM=\n=1WXQ\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 8 Feb 2002 10:17:29 -0600", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Why dump/restore to upgrade?" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> On Friday 08 February 2002 12:44 am, Tom Lane wrote:\n> > mlw <markw@mohawksoft.com> writes:\n> > > but I'd like to make a general call to arms that this (or 7.3) should be\n> > > the last release to require this.\n> \n> > We will never make such a commitment, at least not in the foreseeable\n> > future.\n> \n> Why?\n> \n> > But to bind\n> > ourselves forever to the current on-disk format is sheer folly.\n> \n> Certainly true. But upgradability!=bound-to-the-same-format.\n> \n> > And if you have to convert the datafile format then you might as\n> > well dump and reload.\n> \n> No. 
Dump - reload is folly. And it doesn't always work. And that's the \n> problem.\n\nAdd some unicode into it for even more fun. Bah.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "08 Feb 2002 11:24:27 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Why dump/restore to upgrade?" } ]
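For reference, the dump-and-reload upgrade the thread keeps circling around can be sketched as a shell session. This is only an illustration of the procedure being discussed, not an official recipe: the ports, paths, and data directories below are invented, and the authoritative steps for any given release are in its install notes.

```shell
# Stop the applications, then dump the whole cluster from the old
# postmaster (using the newer version's pg_dumpall against the old
# server is the usual advice).
PGPORT=5432 pg_dumpall > /var/backups/pg_all.sql

# Initialize a fresh data directory for the new version and start it
# on another port, so the old and new servers can coexist briefly.
initdb -D /usr/local/pgsql-7.2/data
postmaster -D /usr/local/pgsql-7.2/data -p 5433 &

# Restore into the new server, then repoint the applications at it.
PGPORT=5433 psql -d template1 -f /var/backups/pg_all.sql
```

The pain point raised above is precisely the window between the start of the dump and the end of the restore (including index rebuilds), during which the database is effectively out of service.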
[ { "msg_contents": "-----Original Message-----\nFrom: mlw [mailto:markw@mohawksoft.com]\nSent: Thursday, February 07, 2002 6:56 PM\nTo: PostgreSQL-development\nSubject: [HACKERS] Why dump/restore to upgrade?\n\n\nWe all understand the reasons why one MUST dump and restore to upgrade\nto 7.2,\nbut I'd like to make a general call to arms that this (or 7.3) should be\nthe\nlast release to require this.\n\nIt doesn't look good. One should be able to upgrade in place, and even\nrevert\nto an older version, if necessary.\n\nWhat do you all think?\n>>-------------------------------------------------------------------\nI think sometimes it might be good to change the internal structure.\n\nRdb has made me do dump and load.\nSQL*Server has made me do dump and load.\n\nHowever, if there is a change to internal structure that requires such a\ntransition, it is traditional to change the MAJOR version number since\nthe change is traumatic.\n\nSomething that can make the transition a lot less painful is to\nautomatically output the SQL schema in a manner such that dependencies\nare not violated. (e.g. don't try to create a view before the table is\ndefined, stuff like that).\n\nTo try to say \"The internal structure will never need to change again\"\nwill lead to either certain failure or stagnation because you can't \nupdate the internals.\n<<-------------------------------------------------------------------\n", "msg_date": "Thu, 7 Feb 2002 19:28:49 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Why dump/restore to upgrade?" } ]
[ { "msg_contents": "Hello,\nI was making a really dumb database, just to play.\nIt's an agenda. I set 2 tables thus far, one for people with attributes unique to each person (birth, address and the like) and one for phones, since some people have more than one, or cell phones, etc.\n\nI wanted to make a query of name, birth date and phones, and have them grouped by name (so the name appeared only once) to get something like\n\nsomebody |xxx-yyyy| mm-dd-yyyy\n |xxx-yyyy|\nsomebody |xxx-yyyy| mm-dd-yyyy\nelse | |\n\nBut I guess it's not possible right now?\nI did install the headers too, so I was just wondering if I could write an aggregate function to concatenate any type. In the docs, it's not listed, and there are only aggregate functions that operate on numbers.\n\nIf you can point me to where to start looking to do some server-side coding for it, I would appreciate that.\n\n-- \nICQ: 15605359 Bicho\n =^..^=\nFirst, they ignore you. Then they laugh at you. Then they fight you. Then you win. Mahatma Gandhi.\n........Why don't men think like animals? Pink Panther........\n-------------------------------Unity of spirit, sword and body------------------------------------\nHeat and cold both last only until the equinox.\nAn, an, an, I love you so much\n\n", "msg_date": "Thu, 7 Feb 2002 22:40:04 -0600", "msg_from": "David Eduardo Gomez Noguera <davidgn@servidor.unam.mx>", "msg_from_op": true, "msg_subject": "aggregate functions only for numbers?" }, { "msg_contents": "David Eduardo Gomez Noguera <davidgn@servidor.unam.mx> writes:\n> I wanted to make a query of name, birth date and phones, and have them\n> grouped by name (so the name appeared only once) to get something like\n\nYou could probably do what you want with a custom aggregate. 
This is\ndiscussed in the mailing list archives, and I think there is an article\nabout it on techdocs.postgresql.org.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Feb 2002 00:11:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: aggregate functions only for numbers? " } ]
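For the archives: the custom aggregate Tom suggests could look roughly like this in 7.2-era SQL. The function, aggregate, table, and column names below are invented for illustration (the original poster never gave his schema), so treat it as a sketch rather than a tested recipe.

```sql
-- State function: append the next value, comma-separated.
-- Note the doubled quotes inside the single-quoted function body.
CREATE FUNCTION commacat(text, text) RETURNS text AS '
    SELECT CASE WHEN $1 = '''' THEN $2
                ELSE $1 || '','' || $2 END;
' LANGUAGE 'sql';

-- Aggregate that concatenates all text values in a group.
CREATE AGGREGATE list_cat (
    BASETYPE = text,
    SFUNC = commacat,
    STYPE = text,
    INITCOND = ''
);

-- One output row per person, with all phone numbers on it.
SELECT p.name, p.birth, list_cat(ph.phone)
  FROM people p, phones ph
 WHERE ph.person_id = p.id
 GROUP BY p.name, p.birth;
```

If the phone column is not already text, it would need a cast before feeding it to the aggregate.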
[ { "msg_contents": "Dear all,\n\nDon't flame me, this is just a reminder to tell that we would like\n- CREATE OR ALTER VIEW,\n- CREATE OR ALTER TRIGGER.\nadded to PostgreSQL.\nNot to say ALTER TABLE DROP COLUMN would be nice too.\n\nThis will help us provide a better GUI environment for pgAdmin2.\n\nBest regards,\nJean-Michel POURE\n", "msg_date": "Fri, 8 Feb 2002 09:58:10 +0100", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": true, "msg_subject": "Feature request for 7.3 and pgAdmin : CREATE OR ALTER VIEW, TRIGGER" }, { "msg_contents": "Hi everybody, \nI would like to support Jean-Michel in asking for these\nfeatures, in particular the ALTER TABLE DROP COLUMN\nfeature... the applications change, so the db schema\nmust follow them... the way to solve the problem now\n(using temp tables) is annoying at best (although better\nthan nothing :-)\n\nBest regards\nAndrea Aime\n\n\nJean-Michel POURE wrote:\n> \n> Dear all,\n> \n> Don't flame me, this is just a reminder to tell that we would like\n> - CREATE OR ALTER VIEW,\n> - CREATE OR ALTER TRIGGER.\n> added to PostgreSQL.\n> Not to say ALTER TABLE DROP COLUMN would be nice too.\n> \n> This will help us provide a better GUI environment for pgAdmin2.\n> \n> Best regards,\n> Jean-Michel POURE\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n", "msg_date": "Fri, 08 Feb 2002 11:32:35 +0100", "msg_from": "\"Andrea Aime\" <aaime@comune.modena.it>", "msg_from_op": false, "msg_subject": "Re: Feature request for 7.3 and pgAdmin : CREATE OR ALTER " }, { "msg_contents": "On Fri, 8 Feb 2002, Jean-Michel POURE wrote:\n\n> This will help us provide a better GUI environment for pgAdmin2.\nBy the way: I'm not on any pgAdmin2 list but regarding feature\nrequests I could add something:\n - Portability to free operating systems (wxGTK is nice and portable\n and you would fill a big gap in the 
free software world)\n\nKind regards\n\n Andreas.\n", "msg_date": "Fri, 8 Feb 2002 11:42:24 +0100 (CET)", "msg_from": "\"Tille, Andreas\" <TilleA@rki.de>", "msg_from_op": false, "msg_subject": "Re: Feature request for 7.3 and pgAdmin : CREATE OR ALTER VIEW," } ]
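The workarounds implied in this thread, recreating a view inside a transaction and rebuilding a table via a copy to simulate ALTER TABLE DROP COLUMN, can be sketched in plain SQL. All object names below are invented for illustration; this is the shape of the technique, not a drop-in script.

```sql
-- "ALTER VIEW" workaround: drop and recreate atomically.
BEGIN;
DROP VIEW phone_list;
CREATE VIEW phone_list AS
    SELECT p.name, ph.phone
      FROM people p, phones ph
     WHERE ph.person_id = p.id;
COMMIT;

-- "DROP COLUMN" workaround: rebuild the table without the column.
BEGIN;
CREATE TABLE people_tmp AS
    SELECT id, name, birth FROM people;   -- omit the dropped column
DROP TABLE people;
ALTER TABLE people_tmp RENAME TO people;
COMMIT;
```

The catch, and exactly why the requested commands would help, is that indexes, constraints, defaults, permissions, and any views or triggers depending on the old table must all be recreated by hand afterwards.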