[
{
"msg_contents": "\nHere's my response to the inaccurate article cmp produced. After\nchatting with Marc I decided to post it myself.\n\nSince I know Ned reads this list, I formally request that he also\ninsists PUBLICALLY that cmp correct their inaccuracies. I'm rather\ndisappointed (for lack of a more descriptive word) that Bruce has not\nalready done so.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n---------- Forwarded message ----------\nDate: Sun, 15 Apr 2001 00:30:20 -0400 (EDT)\nFrom: Vince Vielhaber <vev@postgresql.org>\nTo: prooney@cmp.com\nCc: crnltr2edit@cmp.com\nSubject: Fast Forward\n\n\nWhere do you get your info? Do you just make it up? PostgreSQL is\nnot a product of Great Bridge and never has been. It's 100% independant.\nIs Linux a keyword you figure you can use to draw readers? Won't take\nlong before folks determine you're full of it. The PostgreSQL team takes\ngreat pride (not to be confused with great bridge) in ensuring that the\nwork we do runs on ALL platforms; be it Mac's OSX, FreeBSD 4.3, or even\nWindows 2000. So why do you figure this is a Great Bridge product? Why\ndo you figure it's Linux only? What is it with you writers lately? Are\nyou getting lazy and simply using Linux as a quick out for a paycheck?\n\nI rarely (if ever) sign my name as a member of the PostgreSQL team, but\nthis time I'm making an exception 'cuze you crossed the line. I also\nvery rarely get involved in this politlcal crap, but you've crossed\nthe line on that as well. 
The ENTIRE PostgreSQL team awaits your\nWRITTEN apology as does every support organisation listed on the website.\n\nVince Vielhaber - Webmaster for PostgreSQL.org\n\n\n\n\n\n\n",
"msg_date": "Sun, 15 Apr 2001 01:17:15 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Fast Forward (fwd)"
},
{
"msg_contents": "On Sun, Apr 15, 2001 at 01:17:15AM -0400, Vince Vielhaber wrote:\n> \n> Here's my response to the inaccurate article cmp produced. After\n> chatting with Marc I decided to post it myself.\n> ... \n> Where do you get your info? Do you just make it up? PostgreSQL is\n> not a product of Great Bridge and never has been. It's 100% independant.\n> Is Linux a keyword you figure you can use to draw readers? Won't take\n> long before folks determine you're full of it. The PostgreSQL team takes\n> great pride (not to be confused with great bridge) in ensuring that the\n> work we do runs on ALL platforms; be it Mac's OSX, FreeBSD 4.3, or even\n> Windows 2000. So why do you figure this is a Great Bridge product? Why\n> do you figure it's Linux only? What is it with you writers lately? Are\n> you getting lazy and simply using Linux as a quick out for a paycheck?\n\nThis is probably a good time to point out that this is the _worst_\n_possible_ response to erroneous reportage. The perception by readers\nwill not be that the reporter failed, but that PostgreSQL advocates are \nrabid weasels who don't appreciate favorable attention, and are dangerous \nto write anything about. You can bet this reporter and her editor will \ntreat the topic very circumspectly (i.e. avoid it) in the future. \nWhen they have to mention it, their reporting will be colored by their \npersonal experience. They (and their readers) don't run the code, \nso they must get their impressions from those who do. \n\nMost reporters are ignorant, most reporters are lazy, and many\nare both. It's part of the job description. Getting angry about\nit is like getting angry at birds for fouling their cage. Their\njob is to regurgitate what they're given, and quickly. They have no \ntime to learn the depths, or to write coherently about it, or even \nto check facts.\n\nNone of the errors in the article matter. Nobody will develop an\nenduring impression of PG from them. 
What matters is that PG is being \nmentioned in the same article with Oracle. In her limited way, she\ndid the PG community the biggest favor in her limited power, and all \nwe can do is attack?\n\nIt will be harder than the original mailings, but I urge each who\nwrote to write again and apologize for attacking her. Thank her \ngraciously for making an effort, and offer to help her check her \nfacts next time. PostgreSQL needs friends in the press, even if\nthey are ignorant or lazy. It doesn't need any enemies in the press.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Sat, 14 Apr 2001 23:47:38 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "On Sat, 14 Apr 2001, Nathan Myers wrote:\n\n> This is probably a good time to point out that this is the _worst_\n> _possible_ response to erroneous reportage. The perception by readers\n> will not be that the reporter failed, but that PostgreSQL advocates\n> are rabid weasels who don't appreciate favorable attention, and are\n\nfavorable attention??\n\n> dangerous to write anything about. You can bet this reporter and her\n> editor will treat the topic very circumspectly (i.e. avoid it) in the\n> future.\n\nwoo hoo, if that is the result, then I think Vince did us a great service,\nnot dis-service ...\n\n> Most reporters are ignorant, most reporters are lazy, and many are\n> both. It's part of the job description. Getting angry about it is\n> like getting angry at birds for fouling their cage. Their job is to\n> regurgitate what they're given, and quickly. They have no time to\n> learn the depths, or to write coherently about it, or even to check\n> facts.\n\nOut of all the articles on PgSQL that I've read over the years, this one\nshould have been shot before it hit the paper (so to say) ... it was the\nmost blatantly inaccurate article I've ever read ...\n\n> It will be harder than the original mailings, but I urge each who\n> wrote to write again and apologize for attacking her.\n\nIn a way, I think you are right .. I think the attack was aimed at the\nwrong ppl :( She obviously didn't get *any* of her information from ppl\nthat belong *in* the Pg community, or that have any knowledge of how it\nworks, or of its history :(\n\n",
"msg_date": "Sun, 15 Apr 2001 11:44:48 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "To top it all off, their comments are broken -- I submitted mine and it\ndisplays Marc's again (until you click on the link of course)..\n\n*sigh* they must be using MySQL. :-)\n\n-Mitch\n\n----- Original Message -----\nFrom: \"The Hermit Hacker\" <scrappy@hub.org>\nTo: <pgsql-hackers@postgreSQL.org>\nSent: Sunday, April 15, 2001 10:44 AM\nSubject: Re: Fast Forward (fwd)\n\n\n> On Sat, 14 Apr 2001, Nathan Myers wrote:\n>\n> > This is probably a good time to point out that this is the _worst_\n> > _possible_ response to erroneous reportage. The perception by readers\n> > will not be that the reporter failed, but that PostgreSQL advocates\n> > are rabid weasels who don't appreciate favorable attention, and are\n>\n> favorable attention??\n>\n> > dangerous to write anything about. You can bet this reporter and her\n> > editor will treat the topic very circumspectly (i.e. avoid it) in the\n> > future.\n>\n> woo hoo, if that is the result, then I think Vince did us a great service,\n> not dis-service ...\n>\n> > Most reporters are ignorant, most reporters are lazy, and many are\n> > both. It's part of the job description. Getting angry about it is\n> > like getting angry at birds for fouling their cage. Their job is to\n> > regurgitate what they're given, and quickly. They have no time to\n> > learn the depths, or to write coherently about it, or even to check\n> > facts.\n>\n> Out of all the articles on PgSQL that I've read over the years, this one\n> should have been shot before it hit the paper (so to say) ... it was the\n> most blatantly inaccurate article I've ever read ...\n>\n> > It will be harder than the original mailings, but I urge each who\n> > wrote to write again and apologize for attacking her.\n>\n> In a way, I think you are right .. 
I think the attack was aimed at the\n> wrong ppl :( She obviously didn't get *any* of her information from ppl\n> that belong *in* the Pg community, or that have any knowledge of how it\n> works, or of its history :(\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Sun, 15 Apr 2001 13:05:31 -0400",
"msg_from": "\"Mitch Vincent\" <mitch@venux.net>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "> *sigh* they must be using MySQL. :-)\n\n*rofl*\n",
"msg_date": "Sun, 15 Apr 2001 17:36:21 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "> \n> Here's my response to the inaccurate article cmp produced. After\n> chatting with Marc I decided to post it myself.\n> \n> Since I know Ned reads this list, I formally request that he also\n> insists PUBLICALLY that cmp correct their inaccuracies. I'm rather\n> disappointed (for lack of a more descriptive word) that Bruce has not\n> already done so.\n\nI haven't had time to read the article. Easter weekend, ya know. \n\nNot sure how corrections are made to on-line articles, but I would think\nthe publisher would be glad to fix what is wrong.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Apr 2001 13:59:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "> \n> Here's my response to the inaccurate article cmp produced. After\n> chatting with Marc I decided to post it myself.\n> \n> Since I know Ned reads this list, I formally request that he also\n> insists PUBLICALLY that cmp correct their inaccuracies. I'm rather\n> disappointed (for lack of a more descriptive word) that Bruce has not\n> already done so.\n\nOne more thing. This is a minor media outlet, so I don't get too worked\nup to fix it right away. I assume no one is there on the weekend\nanyway. If it was a major distributor of information, I would be more\ninclined to get it fixed rapidly.\n\nI did skim the article, but did not read it. When I don't recognize the\npublisher, I have a tendency to just ignore PostgreSQL press.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Apr 2001 14:14:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "> > It will be harder than the original mailings, but I urge each who\n> > wrote to write again and apologize for attacking her.\n> \n> In a way, I think you are right .. I think the attack was aimed at the\n> wrong ppl :( She obviously didn't get *any* of her information from ppl\n> that belong *in* the Pg community, or that have any knowledge of how it\n> works, or of its history :(\n\nThis echos my earlier observation. Many of these writers for minor\npublications just throw the information together. I can't be bothered\nto jump every time tney make a major mistake because they have not done\nany work on their part to get the correct story.\n\nNot that it shouldn't be fixed. I just don't get worked up over it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Apr 2001 14:18:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "On Sun, Apr 15, 2001 at 11:44:48AM -0300, The Hermit Hacker wrote:\n> On Sat, 14 Apr 2001, Nathan Myers wrote:\n> \n> > This is probably a good time to point out that this is the _worst_\n> > _possible_ response to erroneous reportage. The perception by readers\n> > will not be that the reporter failed, but that PostgreSQL advocates\n> > are rabid weasels who don't appreciate favorable attention, and are\n> \n> favorable attention??\n\nYes, totally favorable. There wasn't a hint of the condescension \ntypically accorded free software. All of the details you find so \nobjectionable (April vs. June? \"The\" marketing arm vs. \"a\" marketing\narm?) would not even be noticed by a non-cultist.\n\n> > dangerous to write anything about. You can bet this reporter and her\n> > editor will treat the topic very circumspectly (i.e. avoid it) in the\n> > future.\n> \n> woo hoo, if that is the result, then I think Vince did us a great service,\n> not dis-service ...\n\nFalse. \n\nThis may have been the reporter's and the editor's first direct\nexposure to free software advocates. You guys came across as \nhate-filled religious whackos, and that reflects on all of us. \n\n> > Most reporters are ignorant, most reporters are lazy, and many are\n> > both. It's part of the job description. Getting angry about it is\n> > like getting angry at birds for fouling their cage. Their job is to\n> > regurgitate what they're given, and quickly. They have no time to\n> > learn the depths, or to write coherently about it, or even to check\n> > facts.\n> \n> Out of all the articles on PgSQL that I've read over the years, this one\n> should have been shot before it hit the paper (so to say) ... it was the\n> most blatantly inaccurate article I've ever read ...\n\nIt had a number of minor errors, easily corrected. 
The next will \nprobably talk about what a bunch of nasty cranks and lunatics \nPostgreSQL fans are, unless you who wrote can display a lot more \nfinesse in your apologies. Thanks a lot, guys.\n\n> > It will be harder than the original mailings, but I urge each who\n> > wrote to write again and apologize for attacking her.\n> \n> In a way, I think you are right .. I think the attack was aimed at the\n> wrong ppl :( She obviously didn't get *any* of her information from ppl\n> that belong *in* the Pg community, or that have any knowledge of how it\n> works, or of its history :(\n\nHow is this reporter going to have developed contacts within the \ncommunity? She has just started. Now you've burnt her to a crisp, \nand she will figure the less contact with that \"community\" she has, \nthe happier she'll be. Her editor will know that mentioning PG in\nany context will result in a raft of hate mail from cranks, and will \ntreat press releases from our community with the scorn they have earned.\n\nReporters are fragile creatures, and must be gently guided toward the\nlight. They will always get facts wrong, but that matters not at all.\nThe overall tone of the writing is the only thing that stays with their\nequally dim audience. That dim audience controls the budgets for \ntechnology deployment, including databases. Next time you propose a\ndeployment on PG instead of Oracle, thank Vince et al. when it's \ndismissed as a crank toy.\n\nFinally, their talkback page was most probably implemented _not_ with \nMySQL, but with MS SQL Server. These intramural squabbles (between \nMySQL and PG, between Linux and BSD, between NetBSD and OpenBSD) are \njustifiably seen as pathetic in the outside world. Respectful attention \namong projects doesn't just create a better impression, it also allows \nyou, maybe, to learn something. 
(MySQL is not objectively as good as \nPG, but those guys are doing something right, in their presentation, \nthat some of us could learn from.)\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Sun, 15 Apr 2001 18:53:37 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Not that it shouldn't be fixed. I just don't get worked up over it.\n\nWell, in a way I regret bringing it to the attention of the community --\njust in a small way.\n\nBut at the same time I realized that I was not the right one at that\ntime to craft a reply -- after all, I'm a Baptist preacher. And we tend\ntowards the 'getting worked up' side of things. So I've learned that\nthere are times I shouldn't post at all. Although, that's been a long\nhard learning experience, one that I am still, to use my wife's\ncharitable words, 'gaining mastery of.'\n\nIt should be fixed. But community members need a levelheaded response\nto a likely honest omission of facts. I may very well do so, tomorrow. \nOne of the worst facets of the Linux community, for example, is its\noften predatoryform of 'advocacy' -- this group is more civil than that,\nIMHO.\n\nShe likely is off for the weekend, and hasn't read any responses yet.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 15 Apr 2001 22:15:37 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "> \n> Here's my response to the inaccurate article cmp produced. After\n> chatting with Marc I decided to post it myself.\n> \n> Since I know Ned reads this list, I formally request that he also\n> insists PUBLICALLY that cmp correct their inaccuracies. I'm rather\n> disappointed (for lack of a more descriptive word) that Bruce has not\n> already done so.\n\nOK, I read the article. Seems there are two major mistakes. First,\nthey equate Great Bridge with PostgreSQL, and second, they don't know\nthe history of PostgreSQL.\n\nIf they fixed those two mistakes, the article would be OK. Seems like\nthey have already been contacted and hopefully they will correct this.\n(Not sure how they do that.)\n\nIn reading the article, I have to ask myself, \"How upset would I be if\nthey had equated PostgreSQL, Inc or another company with PostgreSQL?\" \nWell, I certainly wouldn't like it, and would try to get it corrected.\nHowever, I wouldn't consider it a major problem. The press makes\nmistakes like this all the time, and this press outlet is just too small\nto worry about.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Apr 2001 22:16:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "\nthe thing that pissed me off the most, and set me off, was the totally\nblatant errors ... we've had other articles written, with a GB slant to\nthem, that didn't get my feathers in a ruffle ... the fact that they\n*talked* with GB, got quotes from them and some of their partners, and\n*still* got the facts wrong, is what got me heated ...\n\nOn Sun, 15 Apr 2001, Bruce Momjian wrote:\n\n> >\n> > Here's my response to the inaccurate article cmp produced. After\n> > chatting with Marc I decided to post it myself.\n> >\n> > Since I know Ned reads this list, I formally request that he also\n> > insists PUBLICALLY that cmp correct their inaccuracies. I'm rather\n> > disappointed (for lack of a more descriptive word) that Bruce has not\n> > already done so.\n>\n> OK, I read the article. Seems there are two major mistakes. First,\n> they equate Great Bridge with PostgreSQL, and second, they don't know\n> the history of PostgreSQL.\n>\n> If they fixed those two mistakes, the article would be OK. Seems like\n> they have already been contacted and hopefully they will correct this.\n> (Not sure how they do that.)\n>\n> In reading the article, I have to ask myself, \"How upset would I be if\n> they had equated PostgreSQL, Inc or another company with PostgreSQL?\"\n> Well, I certainly wouldn't like it, and would try to get it corrected.\n> However, I wouldn't consider it a major problem. The press makes\n> mistakes like this all the time, and this press outlet is just too small\n> to worry about.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sun, 15 Apr 2001 23:54:26 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > Not that it shouldn't be fixed. I just don't get worked up over it.\n> \n> Well, in a way I regret bringing it to the attention of the community --\n> just in a small way.\n> \n> But at the same time I realized that I was not the right one at that\n> time to craft a reply -- after all, I'm a Baptist preacher. And we tend\n> towards the 'getting worked up' side of things. So I've learned that\n> there are times I shouldn't post at all. Although, that's been a long\n> hard learning experience, one that I am still, to use my wife's\n> charitable words, 'gaining mastery of.'\n\nFunny you should mention this, because I was going to post something\nabout this today too. \n\nSome people have wondered why I don't comment on some postings that\nobviously relate to me. I avoid rapidly replying to topics that attack\nme, my employer, my religious beliefs, or other things that are\npersonally related to me. The reason is that my entering the discussion\nmay prevent people from openly expressing their opinions on the topic,\nand that is usually bad.\n\nWhether I agree with their opinions or not, they are valid feelings\npeople have. In a way, all feelings are valid. If someone feels a\ncertain way, you can't argue they don't feel that way, because obviously\nthey do.\n\nYou can tell people why they shouldn't feel a certain way, but \npreventing them from expressing their feelings is usually a bad thing,\nunless their expression is hurting other people. (Hurting my feelings\nis OK.)\n\nI usually sit back until everyone's cards/feelings are on the table, and\nthen decide if my saying anything will help or hurt.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Apr 2001 23:24:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "I wanted to comment on how we handled this article.\n\nSeems the author did not understand the company/open-source\nrelationship. This is not a huge surprise. I have to explain it to my\nfriends and relatives all the time. Now, our way of dealing with users\nwho ask questions is to gently lead them to the answer. In this case,\nwe have insulted the author. Hard to see how our users get gentle\ntreatment while authors get insulted.\n\nNow, you can say that press people have to live up to a higher standard,\nand therefore deserve to be insulted when they get things wrong. If\nthat is people's opinion, I can't argue with that. However, consider\nhow many press people think Linux is developed by Red Hat. I bet\nthere's a lot.\n\nThe author didn't get the Berkely/FreeBSD/PostgreSQL relationship right\neither. Seems the author has a \"relationship\" problem. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Apr 2001 23:33:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "> You can tell people why they shouldn't feel a certain way, but \n> preventing them from expressing their feelings is usually a bad thing,\n> unless their expression is hurting other people. (Hurting my feelings\n> is OK.)\n> \n> I usually sit back until everyone's cards/feelings are on the table, and\n> then decide if my saying anything will help or hurt.\n\nOh, one more thing. I jump right into discussions if I can find a joke\nin the situation. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Apr 2001 23:36:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fast Forward (fwd)"
},
{
"msg_contents": "> PostgreSQL 7.1 on Red Hat Linux 7.1[1]: All 76 tests passed.\n> --\n> Trond Eivind Glomsr�d\n> Red Hat, Inc.\n\nIt's about time RH saw the light and sync'd their release labeling with\nours ;)\n\n - Thomas\n",
"msg_date": "Mon, 16 Apr 2001 17:24:19 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 on 7.1"
},
{
"msg_contents": "PostgreSQL 7.1 on Red Hat Linux 7.1[1]: All 76 tests passed. \n\nI'll submit it to the website soonish.\n\n[1] Available this morning, http://www.redhat.com/about/presscenter/2001/press_sevenone.html\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "16 Apr 2001 14:13:14 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "7.1 on 7.1"
},
{
"msg_contents": "Trond Eivind Glomsr�d wrote:\n> \n> PostgreSQL 7.1 on Red Hat Linux 7.1[1]: All 76 tests passed.\n> \n> I'll submit it to the website soonish.\n> \n> [1] Available this morning, http://www.redhat.com/about/presscenter/2001/press_sevenone.html\n\nAnd RPMs are also available for 7.1 on 7.1 in our RPM area.\n\nRed Hat 7.1 is _nice_. The PostgreSQL speed appears to be very good,\ncompared to 6.2/7.0 with the 2.2 kernel.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 16 Apr 2001 14:24:08 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 on 7.1"
},
{
"msg_contents": "teg@redhat.com (Trond Eivind Glomsr�d) writes:\n\n> PostgreSQL 7.1 on Red Hat Linux 7.1[1]: All 76 tests passed. \n> \n> I'll submit it to the website soonish.\n> \n> [1] Available this morning, http://www.redhat.com/about/presscenter/2001/press_sevenone.html\n\nI've not been able to submit it - the URL\nhttp://www.postgresql.org/~vev/regress/ (which is referred to from the developer pages) results in a 404. \n\n \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "17 Apr 2001 15:55:58 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: 7.1 on 7.1"
}
]
[
{
"msg_contents": "\nTom and Peter,\n\nAfter looking at and thinking about the direction of the PostgreSQL\nproject and the release of 7.1, I wanted to personally thank the both\nof you for your hard work and contributions to the project which without\nyour efforts there might not only not be a 7.1, PostgreSQL might not be\nwhere it is at all. Knowing the both of you as I've been fortunate of,\nI know you're both rather modest but nonetheless are still deserving of my\nheartfelt thanks.\n\nSincerely,\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 15 Apr 2001 01:53:07 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Thankyou!"
}
]
[
{
"msg_contents": "Hi pgsql-hackers,\n\n Could anyone advise me how to do modular test in any partial\nPostgreSQL's modules?\n I am interested in the PostgreSQL development. I have begun study the\nDBMS source code by developer documentation provided by postgresql.org\nespecially internal.ps that is the best explaination for developer\nbeginner, I think. \n Moreover, the RedHat SourceNavigator I have found, is a great tools for\nme. Without it, I might not able to get more understanding of the\nPostgreSQL source code.\n Now I am concentrating on the Executor module. I plan to create a new\nJoin Executor let's say ParallelJoin to enhance the Join operator\nprocessing. As this moment, I may use the HybridJoin algorithm\nimplementing in the HashJoin module as my guidance but the NestLoop and\nMergeJoin may be considered in the furture.\n\n What I would like to know is, if I have changed some ot the modules,\nhow can I use GNU gdb to debug the modified codes?\n\n I am sorry if the question is disturb your mailing list. I know this is\nnot the issue related in your TODO list. However I have expected to your\nresponse.\n\n thanks and regards,\n Werachart.\n \n\n",
"msg_date": "Sun, 15 Apr 2001 14:05:16 +0700 (ICT)",
"msg_from": "Werachart Jantarataeme <wjt@ziif.com>",
"msg_from_op": true,
"msg_subject": "How to do the modular test"
},
{
"msg_contents": "> What I would like to know is, if I have changed some ot the modules,\n> how can I use GNU gdb to debug the modified codes?\n\nYou can run the backend directly from gdb:\n\n $ gdb postgres\n (set breakpoint)\n > b <routine_to_breakpoint>\n (tell gdb to begin)\n > run -D <path_to_database>\n (enter query at prompt)\n > select xyz from abc...\n (will run until it hits the breakpoint(s))\n\n - Thomas\n",
"msg_date": "Mon, 16 Apr 2001 04:50:35 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: How to do the modular test"
}
]
[
{
"msg_contents": "Hi,\n\nSorry, per OOC...\n\nSomebody could tell me where I can find documentation online about SQL2\nand SQL3 (especially the last).\n\nThis the SQL3 approved finally?\n\nThat match is between SQL92,SQL96,SQL99 and the SQL2 and SQL3?\n\nThanks,\n\nSergio\n",
"msg_date": "Sun, 15 Apr 2001 08:43:12 -0300",
"msg_from": "Sergio Pili <sergiop@sinectis.com.ar>",
"msg_from_op": true,
"msg_subject": "SQL"
}
]
[
{
"msg_contents": "Folks,\n\nBy now, I imagine a number of people have seen the piece on the\nComputer Reseller News website about Great Bridge and PostgreSQL.\nWhile I think we're all happy to see the increased visibility for\nPostgreSQL (especially as compared to the Oracles of the world),\nit's fair to say the article wasn't perfect. As Nathan Myers\nobserved in another post, they rarely are. ;-)\n\nI thought the reporter did a good job of talking about Great\nBridge's business model and how we work with resellers and\nthird-party software developers (which after all is the focus of the\nmagazine). Sure, there were some minor errors of fact, like the\nconfusion over PostgreSQL's Berkeley origins, and the use of the\nword \"licensing.\"\n\nBut of greater concern to us, and the reason I'm writing this note,\nis the lack of clarity about the open source community that has\nbuilt, and continues to build this software. Great Bridge is one\ncompany, one member of a large community, and a relative newcomer to\nthe party. We employ several leading PostgreSQL developers, and\ngive back to the project in many ways, but at the end of the day,\nwe're still only a very small part of the larger project - which\nprecedes us by many years, and could very easily survive us as\nwell. We are *a* marketing channel for PostgreSQL (not *the*\nchannel), provide services around the software, and release a\nQA-certified distribution (bundled with other tools and\napplications), but we know that it's not *our* software. It's\neveryone's, and I'm sorry the article didn't adequately represent\nthat reality.\n\nHaving said that, I'd ask everyone to take a deep breath, as Nathan\nsuggested, and realize that it's still early in the adoption cycle\nfor open source in the larger business world and the mass media.\nThere will continue to be nuances that seem blindingly obvious to\nus, but slip right through the reporting and editing process in the\ntrade press. 
That's ok, as long as we correct those errors, as\ndelicately as possible ;-)\n\nWe all have a shared stake in PostgreSQL being more widely used and\nappreciated, and how we respond to things like this will go a long\nway toward furthering that goal. You can all be justifiably proud\nof the work that's gone into PostgreSQL, leading up to the terrific\n7.1 release; a big part of Great Bridge's job as a marketing\norganization is to make sure the world finds out about it - an\nongoing job that we take very seriously.\n\nIf anyone has any questions about Great Bridge's position on this\nkind of stuff, please feel free to email me off-list.\n\nThanks,\nNed\n\n--\n----------------------------------------------------\nNed Lilly e: ned@greatbridge.com\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n\n",
"msg_date": "Sun, 15 Apr 2001 09:52:35 -0400",
"msg_from": "Ned Lilly <ned@greatbridge.com>",
"msg_from_op": true,
"msg_subject": "CRN article"
},
{
"msg_contents": "\nSo, to sum up ... the article did a good job of representing Great Bridge,\ndid a great injustice (a slap in the face, so to say) to the PostgreSQL\ncommunity as a whole, so Great Bridge has no intention of correcting the\nsituation?\n\nJust to clarify your position, of course ...\n\nOn Sun, 15 Apr 2001, Ned Lilly wrote:\n\n> Folks,\n>\n> By now, I imagine a number of people have seen the piece on the\n> Computer Reseller News website about Great Bridge and PostgreSQL.\n> While I think we're all happy to see the increased visibility for\n> PostgreSQL (especially as compared to the Oracles of the world),\n> it's fair to say the article wasn't perfect. As Nathan Myers\n> observed in another post, they rarely are. ;-)\n>\n> I thought the reporter did a good job of talking about Great\n> Bridge's business model and how we work with resellers and\n> third-party software developers (which after all is the focus of the\n> magazine). Sure, there were some minor errors of fact, like the\n> confusion over PostgreSQL's Berkeley origins, and the use of the\n> word \"licensing.\"\n>\n> But of greater concern to us, and the reason I'm writing this note,\n> is the lack of clarity about the open source community that has\n> built, and continues to build this software. Great Bridge is one\n> company, one member of a large community, and a relative newcomer to\n> the party. We employ several leading PostgreSQL developers, and\n> give back to the project in many ways, but at the end of the day,\n> we're still only a very small part of the larger project - which\n> precedes us by many years, and could very easily survive us as\n> well. We are *a* marketing channel for PostgreSQL (not *the*\n> channel), provide services around the software, and release a\n> QA-certified distribution (bundled with other tools and\n> applications), but we know that it's not *our* software. 
It's\n> everyone's, and I'm sorry the article didn't adequately represent\n> that reality.\n>\n> Having said that, I'd ask everyone to take a deep breath, as Nathan\n> suggested, and realize that it's still early in the adoption cycle\n> for open source in the larger business world and the mass media.\n> There will continue to be nuances that seem blindingly obvious to\n> us, but slip right through the reporting and editing process in the\n> trade press. That's ok, as long as we correct those errors, as\n> delicately as possible ;-)\n>\n> We all have a shared stake in PostgreSQL being more widely used and\n> appreciated, and how we respond to things like this will go a long\n> way toward furthering that goal. You can all be justifiably proud\n> of the work that's gone into PostgreSQL, leading up to the terrific\n> 7.1 release; a big part of Great Bridge's job as a marketing\n> organization is to make sure the world finds out about it - an\n> ongoing job that we take very seriously.\n>\n> If anyone has any questions about Great Bridge's position on this\n> kind of stuff, please feel free to email me off-list.\n>\n> Thanks,\n> Ned\n>\n> --\n> ----------------------------------------------------\n> Ned Lilly e: ned@greatbridge.com\n> Vice President w: www.greatbridge.com\n> Evangelism / Hacker Relations v: 757.233.5523\n> Great Bridge, LLC f: 757.233.5555\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sun, 15 Apr 2001 11:22:00 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: CRN article"
}
] |
[
{
"msg_contents": "The \"Current Release Docs\" on the PostgreSQL website still look 7.0.Xish.. \n\nJust an FYI... \n\n-Mitch\n\n\n",
"msg_date": "Sun, 15 Apr 2001 13:11:34 -0400",
"msg_from": "\"Mitch Vincent\" <mitch@venux.net>",
"msg_from_op": true,
"msg_subject": "The \"Current Release Docs\" "
},
{
"msg_contents": "On Sun, Apr 15, 2001 at 01:11:34PM -0400, Mitch Vincent wrote:\n> The \"Current Release Docs\" on the PostgreSQL website still look 7.0.Xish.. \n\nI can't find the 7.1 PS docs anywhere. The stuff in the doc directory\non the FTP sites is from last year, and the docs in the tar files\nare all SGML and HTML. I could really use a printable copy, but\nlast time I tried to generate it from the SGML, it was a nightmare\n(and afterward, I discovered that the release docs are apparently\nmade by hand anyway).\n\nAm I missing something?\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\nchris@netmonger.net info@netmonger.net http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n",
"msg_date": "Mon, 16 Apr 2001 16:46:05 -0400",
"msg_from": "Christopher Masto <chris+pg-general@netmonger.net>",
"msg_from_op": false,
"msg_subject": "No printable 7.1 docs?"
},
{
"msg_contents": "\nThey're not ready yet.\n\nVince.\n\nOn Mon, 16 Apr 2001, Christopher Masto wrote:\n\n> On Sun, Apr 15, 2001 at 01:11:34PM -0400, Mitch Vincent wrote:\n> > The \"Current Release Docs\" on the PostgreSQL website still look 7.0.Xish..\n>\n> I can't find the 7.1 PS docs anywhere. The stuff in the doc directory\n> on the FTP sites is from last year, and the docs in the tar files\n> are all SGML and HTML. I could really use a printable copy, but\n> last time I tried to generate it from the SGML, it was a nightmare\n> (and afterward, I discovered that the release docs are apparently\n> made by hand anyway).\n>\n> Am I missing something?\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 16 Apr 2001 16:52:45 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: No printable 7.1 docs?"
},
{
"msg_contents": "> They're not ready yet.\n\nSince they were deemed non-essential for this release, and since the\nrelease schedule is not built around their creation, I no longer feel\nobligated to have them finished on the release date. A nice change from\nthe deadlines I've been working on for the last three years or so :)\n\nThis is the first release with the \"no hardcopy\" policy, and user\nfeedback is certainly desirable and appreciated.\n\nI hope to be able to find time to finish them soon; a couple are\nessentially done already, but the Reference Manual will be problematic\nas usual (the jade RTF output is not handled by M$Word, and exhibits the\nsame symptoms as when read by Applixware).\n\n - Thomas\n",
"msg_date": "Tue, 17 Apr 2001 00:07:26 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: No printable 7.1 docs?"
},
{
"msg_contents": "I know this isn't really hackers traffic, but...\n\none of the servers in www.postgresql.org is\n\nhttp://postgresql.bbksys.com/\n\nwhich is giving me 404 errors..\n\nI've mailed webmaster@, but thought this should be mailed on anyway..\n\n- brandon\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n\n",
"msg_date": "Mon, 16 Apr 2001 22:31:58 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "broken web server?"
},
{
"msg_contents": "On Tue, Apr 17, 2001 at 12:07:26AM +0000, Thomas Lockhart wrote:\n> > They're not ready yet.\n> \n> Since they were deemed non-essential for this release, and since the\n> release schedule is not built around their creation, I no longer feel\n> obligated to have them finished on the release date. A nice change from\n> the deadlines I've been working on for the last three years or so :)\n> \n> This is the first release with the \"no hardcopy\" policy, and user\n> feedback is certainly desirable and appreciated.\n\nMy feedback at this time is mostly the desire to know a bit better\nwhat prevents the hardcopy docs from being built automatically. I am\ncurrently having some trouble compiling jadetex, so I can't take a\nlook at the generated PDF yet, but I assume there's something wrong\nwith it. That seems like a big deficiency in the doc tools, which\nsurprises me, given that they're rather large projects that have been\nused by other large projects for quite a while.\n\nMy interest is partly to be able to compile the docs on my own, and\npartly research - I'm involved in the development of an application\nthat has some hefty documentation requirements, and I was hoping that\nSGML + free software would come to the rescue. If it's just a matter\nof time and effort, this may be a big enough area of overlap with\nwork that I can spend Official Time and/or Official Money on it.\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\nchris@netmonger.net info@netmonger.net http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n",
"msg_date": "Tue, 17 Apr 2001 00:19:06 -0400",
"msg_from": "Christopher Masto <chris+pg-hackers@netmonger.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "\nRemoved from active rotation and site admin notified.\n\nVince.\n\n\nOn Mon, 16 Apr 2001, bpalmer wrote:\n\n> I know this isn't really hackers traffic, but...\n>\n> one of the servers in www.postgresql.org is\n>\n> http://postgresql.bbksys.com/\n>\n> which is giving me 404 errors..\n>\n> I've mailed webmaster@, but thought this should be mailed on anyway..\n>\n> - brandon\n>\n> b. palmer, bpalmer@crimelabs.net\n> pgp: www.crimelabs.net/bpalmer.pgp5\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 17 Apr 2001 06:11:37 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: broken web server?"
},
{
"msg_contents": "> My feedback at this time is mostly the desire to know a bit better\n> what prevents the hardcopy docs from being built automatically. I am\n> currently having some trouble compiling jadetex, so I can't take a\n> look at the generated PDF yet, but I assume there's something wrong\n> with it. That seems like a big deficiency in the doc tools, which\n> surprises me, given that they're rather large projects that have been\n> used by other large projects for quite a while.\n\nHmm. Actually, afaik we were the first large open source project to\nsuccessfully use the jade toolset for docs. Others have used our project\nas an example to help get them going, since as you have already found\nout getting the tool chain completely set up is not trivial.\n\nThere are at least a few reasons why automatically generating hardcopy\nwithout a final adjustment step is not currently feasible:\n\n1) Table column alignment is not ideal. Many tables are generated with\nsame-width columns and some end up with large indents on each column.\nThey just don't fit on the page without adjustments. That is for RTF;\ntable support in jadetex has always been problematic.\n\n2) Reference pages have a problem. It has *always* been the case that\nour reference pages do not format correctly. afaict it is a problem\nwith jade->RTF (since the problem shows up in both Applixware and\nM$Word) but I do not have much insight into RTF conventions so have not\ntracked it down. Very time-consuming hand-editing is required :(\n\n3) Page breaks are not always ideal. Some hand adjustments are desirable\nto get a better flow to the docs, especially wrt examples and lists; you\ndon't want them breaking between pages if you can avoid it, especially\nwith short examples.\n\n4) Table of contents page numbers are not correct in the RTF output, so\na new ToC must be generated in Applixware or Word.\n\n> My interest is partly to be able to compile the docs on my own, and\n> partly research - I'm involved in the development of an application\n> that has some hefty documentation requirements, and I was hoping that\n> SGML + free software would come to the rescue. If it's just a matter\n> of time and effort, this may be a big enough area of overlap with\n> work that I can spend Official Time and/or Official Money on it.\n\nSGML+freeSW will do what you need. You can generate hardcopy\nautomatically, but I'm not sure it is realistic to expect a toolset to\ndo this for a complicated document without *any* hand adjustments. The\ntime-saving leverage is still tremendous though, and using these tools\nis a net win imho.\n\nGood luck!\n\n - Thomas\n",
"msg_date": "Tue, 17 Apr 2001 13:25:32 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: No printable 7.1 docs?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> 3) Page breaks are not always ideal. Some hand adjustments are desirable\n> to get a better flow to the docs, especially wrt examples and lists; you\n> don't want them breaking between pages if you can avoid it, especially\n> with short examples.\n\nThis objection, at least, could be eliminated if the standard hardcopy\npath were through TeX (which I assume is what jadetex does). TeX\nunderstands just fine about discouraging or completely preventing page\nbreaks within certain groups of lines. In general, TeX is a lot better\nsuited for book-quality typesetting than any other open-source tool I've\nheard of.\n\nIt seems to me that all of the other problems you enumerate are simply\nbugs in the doc toolchain. We've worked around them rather than tried\nto fix them because that was the shortest path to a result, but if Chris\nwants to tackle actually fixing them, that would sure be nice. Based on\nyour comments here, my recommendation would be to forget RTF entirely;\ninstead, work on getting out the kinks in the TeX pathway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 11:01:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs? "
},
{
"msg_contents": "Tom Lane writes:\n\n> It seems to me that all of the other problems you enumerate are simply\n> bugs in the doc toolchain. We've worked around them rather than tried\n> to fix them because that was the shortest path to a result, but if Chris\n> wants to tackle actually fixing them, that would sure be nice. Based on\n> your comments here, my recommendation would be to forget RTF entirely;\n> instead, work on getting out the kinks in the TeX pathway.\n\nThe consensus of the authors and others that know what they're saying is\nessentially that jadetex can't be fixed without a complete rewrite of the\nJade TeX backend (jadetex != Jade TeX backend). And currently there's\nlittle to no interest or manpower for sweeping changes in Jade.\n\nThe presently most future-proof free software way to use TeX for\nformatting DocBook is PassiveTeX, which works through XML and XSL FO.\nI've tried it once and if I'm not mistaken I got a readable PDF file part\nof the time. If anyone's interested in helping with the tool chain, look\nthere first.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Apr 2001 18:15:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs? "
},
{
"msg_contents": "> It seems to me that all of the other problems you enumerate are simply\n> bugs in the doc toolchain. We've worked around them rather than tried\n> to fix them because that was the shortest path to a result, but if Chris\n> wants to tackle actually fixing them, that would sure be nice. Based on\n> your comments here, my recommendation would be to forget RTF entirely;\n> instead, work on getting out the kinks in the TeX pathway.\n\nThat is at odds with the current thinking of the jade/dsssl community\n(which uses both TeX and RTF), but if someone wants to try, it would not\nhurt I suppose.\n\nThe problem with TeX output is that it is *not* adjustable after the\nfact, and that the jade support for things like tables is not adequate\nfor a real-world document. The jadetex author had concluded that really\nfixing it was too much work. Much of the original development community\nhas moved on to XML tools development, but afaik none are ready for use.\n\nOne might want to research the current state of the tools as a starting\npoint for improvements. At the moment, I'd love someone to dive into\nthe RTF reference page problem; I can send samples on request :)\n\n - Thomas\n",
"msg_date": "Tue, 17 Apr 2001 16:16:39 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "> Hmm. Actually, afaik we were the first large open source project to\n> successfully use the jade toolset for docs. Others have used our project\n> as an example to help get them going, since as you have already found\n> out getting the tool chain completely set up is not trivial.\n> \n> There are at least a few reasons why automatically generating hardcopy\n> without a final adjustment step is not currently feasible:\n> \n> 1) Table column alignment is not ideal. Many tables are generated with\n> same-width columns and some end up with large indents on each column.\n> They just don't fit on the page without adjustments. That is for RTF;\n> table support in jadetex has always been problematic.\n> \n> 2) Reference pages have a problem. It has *always* been the case that\n> our reference pages do not format correctly. afaict it is a problem\n> with jade->RTF (since the problem shows up in both Applixware and\n> M$Word) but I do not have much insight into RTF conventions so have not\n> tracked it down. Very time-consuming hand-editing is required :(\n> \n> 3) Page breaks are not always ideal. Some hand adjustments are desirable\n> to get a better flow to the docs, especially wrt examples and lists; you\n> don't want them breaking between pages if you can avoid it, especially\n> with short examples.\n> \n> 4) Table of contents page numbers are not correct in the RTF output, so\n> a new ToC must be generated in Applixware or Word.\n\nCan I add one more issue:\n\n5) We have been working on translating docs into Japanese using\n EUC_JP encoding. Converting to HTML is no problem, but we cannot\n get correct results for sgml-> RTF conversion at all. The\n translated docs are just not readable, showing random\n characters. It seems that openjade supports multibyte encodings at\n least according to their manuals, but I can not get it working. I\n have asked to dssslist but I have not gotten useful help yet.\n\n A question comes to my mind: Is SGML really an appropriate doc\n format for us? For me, LaTeX seems more handy. It can generate HTML\n using latex2html, and of course can produce beautiful hard copies\n AUTOMATICALLY for English and other languages including Japanese.\n\nBTW, I see some odd results from the 7.1 HTML docs. For example, in\nparser-stage.html,\n\n\"Figure \\ref{parsetree} shows the parse tree...\"\n\nWhat is the \"\\ref{parsetree}\"?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 18 Apr 2001 09:59:32 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "> 5) We have been working on translating docs into Japanese using\n> EUC_JP encoding. Converting to HTML is no problem, but we cannot\n> get correct results for sgml-> RTF conversion at all. The\n> translated docs are just not readable, showing random\n> characters. It seems that openjade supports multibyte encodings at\n> least according to their manuals, but I can not get it working. I\n> have asked to dssslist but I have not gotten useful help yet.\n\nSorry you are seeing trouble. I missed seeing your traffic on the dsssl\nlist to which I am subscribed; which one are you using?\n\n> A question comes to my mind: Is SGML really an appropriate doc\n> format for us? For me, LaTeX seems more handy. It can generate HTML\n> using latex2html, and of course can produce beautiful hard copies\n> AUTOMATICALLY for English and other languages including Japanese.\n\nThere is a difference between using techniques which mark up content\n(DocBook, XML, etc) as opposed to those which mark up appearance (latex).\nThe \"wave of the future\" is content markup, for a variety of reasons,\nunless of course the pundits are sadly mistaken and reaching beyond\ntheir grasp. Which is a possibility ;)\n\nI'll submit that the time I take tweaking output for hardcopy is no more\ntime than would be spent tweaking latex to get optimal appearance.\n\n> BTW, I see some odd results from the 7.1 HTML docs. For example, in\n> parser-stage.html,\n> \"Figure \\ref{parsetree} shows the parse tree...\"\n> What is the \"\\ref{parsetree}\"?\n\nLooks sort of like latex, eh? :)\n\nThey are residual markup for graphics from Stephan's Master's Thesis\nwhich were never transcribed from the originals (gifs?) to a usable\nformat.\n\nThrough disk crashes, system upgrades, and a failed backup device I\n*may* no longer have his original tarball. Does anyone else?\n\n - Thomas\n",
"msg_date": "Wed, 18 Apr 2001 04:48:24 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> There is a difference between using techniques which markup content\n> (DocBook, XML, etc) as opposed to those which markup appearence (latex).\n\nPerhaps I'm stuck in the eighties when I did my thesis in LaTeX, but\nI was of the impression that what's considered good style in LaTeX *is*\ncontent-based markup. Sure, a LaTeXer may occasionally be forced to\nthrow in low-level stuff like a \\pagebreak to get nice looking results\n... but I fail to understand how this is different from the\noutput-oriented tweaking you do to the current Postgres docs.\n\n> I'll submit that the time I take tweaking output for hardcopy is no more\n> time that would be spent tweaking latex to get optimal appearance.\n\nExcept that the LaTeXer does it once. You have to do it over again from\nscratch, very laboriously, every time you want to generate good output.\nThis is a step forward?\n\nBottom line: I see very little reason to believe that SGML + available\ntools represents any real technical advance over TeX + its available\ntools. In fact, if one wants decent-looking output it seems to be a\nsubstantial regression. Perhaps it's only that TeX has more than a\nten-year lead in being developed into a usable tool, but from what I can\nsee from here, the SGML tools we are using are incredibly inferior to\nwhat's been available for a long time for TeX.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 01:57:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs? "
},
{
"msg_contents": "> Perhaps I'm stuck in the eighties when I did my thesis in LaTeX, but\n> I was of the impression that what's considered good style in LaTeX *is*\n> content-based markup. Sure, a LaTeXer may occasionally be forced to\n> throw in low-level stuff like a \\pagebreak to get nice looking results\n> ... but I fail to understand how this is different from the\n> output-oriented tweaking you do to the current Postgres docs.\n\nThat particular operation is the same, and done for the same reasons.\n\n> > I'll submit that the time I take tweaking output for hardcopy is no more\n> > time that would be spent tweaking latex to get optimal appearance.\n> \n> Except that the LaTeXer does it once. You have to do it over again from\n> scratch, very laboriously, every time you want to generate good output.\n> This is a step forward?\n\nNot true. If you embed pagebreak commands *in the source* then those\nbreaks *must* be reevaluated every time the docs change. If content is\nadded or removed, the appropriate place for a page break will likely\nchange, so things must be tweaked again. From source, not from something\nclose to final form.\n\n> Bottom line: I see very little reason to believe that SGML + available\n> tools represents any real technical advance over TeX + its available\n> tools. In fact, if one wants decent-looking output it seems to be a\n> substantial regression. Perhaps it's only that TeX has more than a\n> ten-year lead in being developed into a usable tool, but from what I can\n> see from here, the SGML tools we are using are incredibly inferior to\n> what's been available for a long time for TeX.\n\nNo argument that TeX is a wonderful tool. But it is trading one set of\nproblems for another, not fixing every criticism you have.\n\nAt the moment, my life will be easier without having to argue religion,\nso I can get back to preparing docs ;)\n\n - Thomas\n",
"msg_date": "Wed, 18 Apr 2001 13:14:23 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n>> This is a step forward?\n\n> Not true. If you embed pagebreak commands *in the source* then those\n> breaks *must* be reevaluated every time the docs change. If content is\n> added or removed, the appropriate place for a page break will likely\n> change, so things must be tweaked again.\n\nOf course, but my point is that you don't have to revisit such decisions\nin areas of the docs that haven't changed since last time. The\nimportance of this depends on the stability of the docs, naturally...\n\n> No argument that TeX is a wonderful tool. But it is trading one set of\n> problems for another, not fixing every criticism you have.\n\nAgreed --- but the toolchain we are currently using seems to have\nconsiderably more than its fair share of problems.\n\n> At the moment, my life will be easier without having to argue religion,\n> so I can get back to preparing docs ;)\n\nCertainly we aren't going to change toolchains at this point in the 7.1\ncycle. I'm just opining that it would make sense to take another look\nat an SGML-to-TeX-based process in the future --- especially if we have\nsomeone who is willing to put active effort into improving the docs\ntoolchain.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 10:06:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs? "
},
{
"msg_contents": "> 5) We have been working on translating docs into Japanese using\n> EUC_JP encoding. Converting to HTML is no problem, but we cannot\n> get correct results for sgml-> RTF conversion at all. The\n> translated docs are just not readable, showing random\n> characters. It seems that openjade supports multibyte encodings at\n> least according to their manuals, but I can not get it working. I\n> have asked to dssslist but I have not gotten useful help yet.\n> \n> A question comes to my mind: Is SGML really an appropriate doc\n> format for us? For me, LaTeX seems more handy. It can generate HTML\n> using latex2html, and of course can produce beautiful hard copies\n> AUTOMATICALLY for English and other languages including Japanese.\n\nTatsuo, when I added SGML reference pages to the back of my book, I took\nthe HTML-generated output from SGML and loaded that into LaTeX. I did\nhave to do a few things:\n\n\tconvert SGML to HTML\n\thtml2latex\n\tadd * to \\subsection* ?\n\tremove \\newline\n\tremove \\backslash\n\tremove \\begin_inset Figure { ? } to ?\n\tremove trailing space from Description\n\tno table conversion\n\tchange $$ to $ $\n\tno SQL query conversion, all on one line , program listing and synopsis\n\tspace-period and space-comma\n\nIt actually was pretty quick. The fixes were more cleaning up strange\nconversion from HTML to LaTeX.\n\nAs far as I can see, SGML gives us rich content tags, so we can do\nthings like pull the SGML ref manual pages headings right into pgsql's\nhelp system. However, what it doesn't give you is much control over\nappearance except how to map the tags to appearance. You can't tweak\nappearance in SGML unless you make special tags for certain appearances.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 15:34:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > However, what it doesn't give you is much control over\n> > appearance except how to map the tags to appearance. You can't tweak\n> > appearance in SGML unless you make special tags for certain appearances.\n> \n> How do you derive this conclusion? SGML gives you a boatload of ways to\n> tweak appearance through style sheets. No need to make new tags either\n> (although it sometimes doesn't hurt).\n\nYou can control the appearance of tags, but can you make certain tags\nappear differently from other tags of the same type? I assume you are\nsaying style sheets do that. Do you have to do the style sheet for each\ntype of output? I would assume you do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 16:16:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?y"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> However, what it doesn't give you is much control over\n> appearance except how to map the tags to appearance. You can't tweak\n> appearance in SGML unless you make special tags for certain appearances.\n\nHow do you derive this conclusion? SGML gives you a boatload of ways to\ntweak appearance through style sheets. No need to make new tags either\n(although it sometimes doesn't hurt).\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 18 Apr 2001 22:25:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "> Tatsuo, when I added SGML reference pages to the back of my book, I took\n> the HTML-generated output from SGML and loaded that into LaTeX. I did\n> have to do a few things:\n> \n> \tconvert SGML to HTML\n> \thtml2latex\n> \tadd * to \\subsection* ?\n> \tremove \\newline\n> \tremove \\backslash\n> \tremove \\begin_inset Figure { ? } to ?\n> \tremove trailing space from Description\n> \tno table conversion\n> \tchange $$ to $ $\n> \tno SQL query conversion, all on one line , program listing and synopsis\n> \tspace-period and space-comma\n> \n> It actually was pretty quick. The fixes were more cleaning up strange\n> conversion from HTML to LaTeX.\n\nLooks nice, but I'm afraid I have to do all the work above for 489\nHTML files:-)\n\nWhat I'm doing now is trying to fix openjade. It is written in C++,\nand I hate C++, no way...\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 19 Apr 2001 10:40:28 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "> > It actually was pretty quick. The fixes were more cleaning up strange\n> > conversion from HTML to LaTeX.\n> \n> Looks nice, but I'm afraid I have to do all the work above for 489\n> HTML files:-)\n> \n> What I'm doing now is trying to fix openjade. It is written in C++,\n> and I hate C++, no way...\n\nI cat'ed them all together, pulled them up in an editor with macros, and\nwent to town for a few hours.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 21:42:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?y"
},
{
"msg_contents": "> > 5) We have been working for translating docs into Japanese using\n> > EUC_JP encoding. Converting to HTML is no problem, but we cannot\n> > get correct results for sgml-> RTF conversion at all. The\n> > translated docs are just not be able to read, showing random\n> > characters. It seems that openjade supports multibyte encodings at\n> > least according to their manuals, but I can not get it working. I\n> > have asked to dssslist but I have not gotten usefull helps yet.\n> \n> Sorry you are seeing trouble. I missed seeing your traffic on the dsssl\n> list to which I am subscribed; which one are you using?\n\ndssslist@lists.mulberrytech.com\n\n> > BTW, I see some odd results from the 7.1 HTML docs. For example, in\n> > parser-stage.html,\n> > \"Figure \\ref{parsetree} shows the parse tree...\"\n> > What is the \"\\ref{parsetree}\"?\n> \n> Looks sort of like latex, eh? :)\n> \n> They are residual markup for graphics from Stephan's Master's Thesis\n> which were never transcribed from the originals (gifs?) to a usable\n> format.\n> \n> Through disk crashes, system upgrades, and a failed backup device I\n> *may* no longer have his original tarball. Does anyone else?\n\nThat's too bad. Did it posted to one of our mailing list?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 19 Apr 2001 11:51:54 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> They are residual markup for graphics from Stephan's Master's Thesis\n>> which were never transcribed from the originals (gifs?) to a usable\n>> format.\n>> \n>> Through disk crashes, system upgrades, and a failed backup device I\n>> *may* no longer have his original tarball. Does anyone else?\n\n> That's too bad. Did it posted to one of our mailing list?\n\nBefore you get too excited about resurrecting the still-commented-out\nportions of Stephan's documentation, bear in mind that it was written\nfor 6.3 or thereabouts, and large parts of it are now obsolete. In\nparticular, almost none of his union/intersect/except implementation\nsurvives in 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 00:55:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs? "
},
{
"msg_contents": "On Thu, 19 Apr 2001, Tatsuo Ishii wrote:\n\n> > It actually was pretty quick. The fixes were more cleaning up strange\n> > conversion from HTML to LaTeX.\n>\n> Looks nice, but I'm afraid I have to do all the work above for 489\n> HTML files:-)\n\nIt's not all that bad. There's really only 486, the other three are gifs.\n\n:)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 19 Apr 2001 05:49:45 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > Sorry you are seeing trouble. I missed seeing your traffic on the dsssl\n> > list to which I am subscribed; which one are you using?\n>\n> dssslist@lists.mulberrytech.com\n\nThe mailing list you should be on is docbook-apps@lists.oasis-open.org\n(see http://lists.oasis-open.org), which is more about docbook processing\nand less about dsssl programming (since that won't help you).\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 19 Apr 2001 17:09:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "> > > Sorry you are seeing trouble. I missed seeing your traffic on the dsssl\n> > > list to which I am subscribed; which one are you using?\n> > dssslist@lists.mulberrytech.com\n> The mailing list you should be on is docbook-apps@lists.oasis-open.org\n> (see http://lists.oasis-open.org), which is more about docbook processing\n> and less about dsssl programming (since that won't help you).\n\nAh, thanks for the tip. I've been on the dsssl list forever, and it *is*\nrelated to the toolset we are using.\n\nbtw, I've poked a bit at the rtf generated for the reference pages, and\nhave found that the rtf emits gratuitous \"\\keepn\" flags (keep the\nparagraph with the *following* paragraph).\n\nIf I use sed to change these to \"\\keep\" flags (keep the paragraph itself\ntogether) then the pages look *much* better. Will experiment a bit more,\nbut I'm on the road to an easier way to generate the reference pages.\n\n - Thomas\n",
"msg_date": "Thu, 19 Apr 2001 15:41:54 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
},
{
"msg_contents": "> > > Sorry you are seeing trouble. I missed seeing your traffic on the dsssl\n> > > list to which I am subscribed; which one are you using?\n> >\n> > dssslist@lists.mulberrytech.com\n> \n> The mailing list you should be on is docbook-apps@lists.oasis-open.org\n> (see http://lists.oasis-open.org), which is more about docbook processing\n> and less about dsssl programming (since that won't help you).\n\nThanks. Seems I subscribed wrong list...\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 20 Apr 2001 10:35:37 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: No printable 7.1 docs?"
}
] |
[
{
"msg_contents": "There was a discussion once about using 64 bit long long compiler support to\nincrease the size of the transaction ids to solve the wrap around problem. I\nunderstand that there is a different solution for this now.\n\nHowever, my question is: Are we to the point where int64's can be used in\nmainstream code yet, or are there supported platforms that this will not work\nwith? And if not, when (if ever) will such capability be standardized?\n\nThe reason why I ask is I would like to experiment with a variable length\nbase-(2^32) numeric type that I hope might be accepted someday, and\nbase-(2^32) operations need long long support to implement in a\nstraightforward fashion.\n\n- Mark Butler\n",
"msg_date": "Sun, 15 Apr 2001 18:38:52 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Int64 (long long) Supporting Compiler Requirement Status?"
},
{
"msg_contents": "Mark Butler <butlerm@middle.net> writes:\n> However, my question is: Are we to the point where int64's can be used in\n> mainstream code yet, or are there supported platforms that this will not work\n> with? And if not, when (if ever) will such capability be standardized?\n\nI don't foresee ever being willing to *require* int64 support. It'll\nalways be optional.\n\n> The reason why I ask is I would like to experiment with a variable length\n> base-(2^32) numeric type that I hope might be accepted someday, and\n> base-(2^32) operations need long long support to implement in a\n> straightforward fashion.\n\nI really doubt that base 2^32 would be enough faster than base 10000 to\nbe worth taking any portability risks for.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Apr 2001 23:27:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Int64 (long long) Supporting Compiler Requirement Status? "
}
] |
[
{
"msg_contents": "What does this error mean - and how can I avoid it in the future?\n\npostmaster: StreamConnection: accept: Too many open files in system\n\nAny help would be much appreciated!\n\n\nRyan Mahoney\n",
"msg_date": "Mon, 16 Apr 2001 03:43:50 GMT",
"msg_from": "ryan@paymentalliance.net",
"msg_from_op": true,
"msg_subject": "Too many open files in system"
},
{
"msg_contents": "ryan@paymentalliance.net wrote:\n> \n> What does this error mean - and how can I avoid it in the future?\n> \n> postmaster: StreamConnection: accept: Too many open files in system\n> \n> Any help would be much appreciated!\n> \n> Ryan Mahoney\n\nIt would be helpful if you could give more information, but if you are running\nLinux, you may need to bump up the maximum number of open files. \n\ncat /proc/sys/fs/file-nr\n\nWill spit out three numbers:\n\naaaa bbbb cccc\n\naaaa is the maximum number of files you have opened.\nbbbb is the number of files which are currently open\ncccc is the system configured maximum number of files.\n\nif aaaa is very close to cccc than you should adjust the maximum number of\nfiles your system can use\n\necho 16384 > /proc/sys/fs/file-max \n\nwhere 16384 is the desired number.\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Mon, 16 Apr 2001 08:22:44 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Too many open files in system"
}
] |
[
{
"msg_contents": "\nOK, looks like we have a Tru64 problem with 7.1 too. Can you tell us\nhow to get a NAN value. Zero is not it. I see the following mentions\nof NAN in the code. Does NAN exist in one of your /usr/include files?\n\n\n\ninclude/port/qnx4.h:18:#ifndef NAN\ninclude/port/qnx4.h:24:#define NAN (*(const double *) __nan)\ninclude/port/qnx4.h:25:#endif /* NAN */\ninclude/port/solaris.h:43:#ifndef NAN\ninclude/port/solaris.h:51:#define NAN \\\ninclude/port/solaris.h:58:#define NAN (0.0/0.0)\ninclude/port/solaris.h:62:#endif /* not NAN */\ninclude/utils/timestamp.h:64:#ifdef NAN\ninclude/utils/timestamp.h:65:#define DT_INVALID (NAN)\ninclude/utils/timestamp.h:80:#ifdef NAN\ninclude/utils/timestamp.h:104:#ifdef NAN\nbackend/port/qnx4/isnan.c:22: return !memcmp(&dsrc, &NAN, sizeof(double));\nbackend/utils/adt/float.c:106:#ifndef NAN\nbackend/utils/adt/float.c:107:#define NAN (0.0/0.0)\nbackend/utils/adt/float.c:251: val = NAN;\nbackend/utils/adt/numeric.c:45:#ifndef NAN\nbackend/utils/adt/numeric.c:46:#define NAN (0.0/0.0)\nbackend/utils/adt/numeric.c:1718: PG_RETURN_FLOAT8(NAN);\nbackend/utils/adt/numeric.c:1763: PG_RETURN_FLOAT4((float4) NAN);\nbackend/utils/adt/numeric.c:2269: var->sign = NUMERIC_POS; /* anything but NAN... */\n\n\n\n> It _doesn't_ compile on Tru64:\n> gmake -C adt SUBSYS.o\n> gmake[4]: Entering directory `/usr/users/dcarmich/postgresql-7.1/src/\n> backend/utils/adt'\n> cc -std -O4 -Olimit 2000 -I../../../../src/include -c -o float.o float.c\n> cc: Error: float.c, line 251: In this statement, the libraries on this \n> platform do not yet support compile-time evaluation of the constant \n> expression \"0.0/0.0\". 
(constfoldns)\n> val = NAN;\n> ------------------------------^\n> gmake[4]: *** [float.o] Error 1\n> gmake[4]: Leaving directory `/usr/users/dcarmich/postgresql-7.1/src/backend\n> /utils/adt'\n> gmake[3]: *** [adt-recursive] Error 2\n> gmake[3]: Leaving directory `/usr/users/dcarmich/postgresql-7.1/src/backend\n> /utils'\n> gmake[2]: *** [utils-recursive] Error 2\n> gmake[2]: Leaving directory `/usr/users/dcarmich/postgresql-7.1/src/\n> backend'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/usr/users/dcarmich/postgresql-7.1/src'\n> gmake: *** [all] Error 2\n> \n> (Sorry about the quoting, don't know how to remove it in VMS EDT.)\n> \n> >\n> >Please try 7.1. It should work fine.\n> >\n> >> ============================================================================\n> >> POSTGRESQL BUG REPORT TEMPLATE\n> >> ============================================================================\n> >>\n> >>\n> >> Your name\t\t:\tDouglas Carmichael\n> >> Your email address\t:\tdcarmich@ourservers.net\n> >>\n> >>\n> >> System Configuration\n> >> ---------------------\n> >> Architecture (example: Intel Pentium) \t: DEC/Compaq Alpha\n> >>\n> >> Operating System (example: Linux 2.0.26 ELF) \t: Compaq Tru64 UNIX v5.0A rev 1094\n> >>\n> >> PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL-7.0.3\n> >>\n> >> Compiler used (example: gcc 2.8.0)\t\t: Compaq C T6.4-212 (dtk)\n> >>\n> >>\n> >> Please enter a FULL description of your problem:\n> >> ------------------------------------------------\n> >>\n> >> I have patches to src/backend/utils/adt/float.c and\n> >> src/backend/utils/adt/numeric.c to get PostgreSQL 7.0.3 to compile on Tru64\n> >> v5.0A with Compaq C T6.4-212 (dtk).\n> >>\n> >> Please describe a way to repeat the problem. 
Please try to provide a\n> >> concise reproducible example, if at all possible:\n> >> ----------------------------------------------------------------------\n> >>\n> >>\n> >> N/A\n> >>\n> >>\n> >> If you know how this problem might be fixed, list the solution below:\n> >> ---------------------------------------------------------------------\n> >>\n> >>\n> >> diff -cr postgresql-7.0.3_old/src/backend/utils/adt/float.c postgresql-7.0.3/src/backend/utils/adt/float.c\n> >> *** postgresql-7.0.3_old/src/backend/utils/adt/float.c\tWed Apr 12 12:15:49 2000\n> >> --- postgresql-7.0.3/src/backend/utils/adt/float.c\tFri Apr 13 22:13:10 2001\n> >> ***************\n> >> *** 68,74 ****\n> >> #include \"utils/builtins.h\"\n> >>\n> >> #ifndef NAN\n> >> ! #define NAN\t\t(0.0/0.0)\n> >> #endif\n> >>\n> >> #ifndef SHRT_MAX\n> >> --- 68,74 ----\n> >> #include \"utils/builtins.h\"\n> >>\n> >> #ifndef NAN\n> >> ! #define NAN\t\t0\n> >> #endif\n> >>\n> >> #ifndef SHRT_MAX\n> >> diff -cr postgresql-7.0.3_old/src/backend/utils/adt/numeric.c postgresql-7.0.3/src/backend/utils/adt/numeric.c\n> >> *** postgresql-7.0.3_old/src/backend/utils/adt/numeric.c\tWed Apr 12 12:15:50 2000\n> >> --- postgresql-7.0.3/src/backend/utils/adt/numeric.c\tFri Apr 13 22:15:55 2001\n> >> ***************\n> >> *** 41,47 ****\n> >> #endif\n> >>\n> >> #ifndef NAN\n> >> ! #define NAN\t\t(0.0/0.0)\n> >> #endif\n> >>\n> >>\n> >> --- 41,47 ----\n> >> #endif\n> >>\n> >> #ifndef NAN\n> >> ! #define NAN\t 0\n> >> #endif\n> >>\n> >> ---------------------------(end of broadcast)---------------------------\n> >> TIP 2: you can get off all lists at once with the unregister command\n> >> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >>\n> >\n> >\n> >--\n> >Bruce Momjian | http://candle.pha.pa.us\n> >pgman@candle.pha.pa.us | (610) 853-3000\n> >+ If your life is a hard drive, | 830 Blythe Avenue\n> >+ Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Apr 2001 00:32:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX \n\tv5.0A with Compaq C T6.4-212 (dtk)"
},
{
"msg_contents": "> OK, looks like we have a Tru64 problem with 7.1 too. Can you tell us\n> how to get a NAN value. Zero is not it. I see the following mentions\n> of NAN in the code. Does NAN exist in one of your /usr/include files?\n\nWe had at least three reports of successful compilation on Tru64 4.0[dg]\nand 5.0. So perhaps there is something else going on, or some other\noptional packages in an include path?\n\n - Thomas\n",
"msg_date": "Mon, 16 Apr 2001 04:42:00 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX\n\tv5.0A with Compaq C T6.4-212 (dtk)"
},
{
"msg_contents": "Thomas Lockhart wrote:\n\n> We had at least three reports of successful compilation on Tru64 4.0[dg]\n\nI can add up my experience of building on Tru64 4.0f (Compaq DS20E)\nwithout problems, using Digital's cc\n\n./configure --with-includes=/usr/local/include\n--with-libraries=/usr/local/lib --with-maxbackends=128 --with-perl\n--localstatedir=/var/run/pgsql --prefix=/usr/local/pgsql\n--mandir=/usr/local/man --docdir=/data/www/library\n\n version\n--------------------------------------------------------------\n PostgreSQL 7.1 on alphaev67-dec-osf4.0f, compiled by cc -std\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 17 Apr 2001 14:14:03 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX\n\tv5.0A"
},
{
"msg_contents": "Alessio Bragadini <alessio@albourne.com> writes:\n> Thomas Lockhart wrote:\n>> We had at least three reports of successful compilation on Tru64 4.0[dg]\n\n> I can add up my experience of building on Tru64 4.0f (Compaq DS20E)\n> without problems, using Digital's cc\n\nWould you check whether things still work on your platform if CC becomes\n\"cc -std -ieee\" rather than just \"cc -std\"? (Best way to check is to\nalter src/template/osf, then do the full configure/build cycle.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 10:39:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX\n\tv5.0A"
},
{
"msg_contents": "Tom Lane wrote:\n\n> > I can add up my experience of building on Tru64 4.0f (Compaq DS20E)\n> > without problems, using Digital's cc\n> \n> Would you check whether things still work on your platform if CC becomes\n> \"cc -std -ieee\" rather than just \"cc -std\"? (Best way to check is to\n> alter src/template/osf, then do the full configure/build cycle.)\n\nHere we go:\nAll of PostgreSQL successfully made. Ready to install.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 17 Apr 2001 19:46:57 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on\n\tTru64 UNIX v5.0A"
},
{
"msg_contents": "Alessio Bragadini <alessio@albourne.com> writes:\n> Tom Lane wrote:\n> I can add up my experience of building on Tru64 4.0f (Compaq DS20E)\n> without problems, using Digital's cc\n>> \n>> Would you check whether things still work on your platform if CC becomes\n>> \"cc -std -ieee\" rather than just \"cc -std\"? (Best way to check is to\n>> alter src/template/osf, then do the full configure/build cycle.)\n\n> Here we go:\n> All of PostgreSQL successfully made. Ready to install.\n\nSounds good; could you check the regress tests too?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 12:57:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX\n\tv5.0A"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Sounds good; could you check the regress tests too?\n\nMmmmhhh... Failed the int8 test, but seems more a difference in the text\nof the error message. The others 75 were successful.\n\ndiff attached\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925",
"msg_date": "Tue, 17 Apr 2001 20:14:29 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on\n\tTru64 UNIX v5.0A"
},
{
"msg_contents": "> Mmmmhhh... Failed the int8 test\n\nSorry, float8\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 17 Apr 2001 20:16:13 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on\n\tTru64 UNIX v5.0A"
},
{
"msg_contents": "Alessio Bragadini <alessio@albourne.com> writes:\n> Tom Lane wrote:\n>> Sounds good; could you check the regress tests too?\n\n> *** ./expected/float8-fp-exception.out\tThu Mar 30 10:46:00 2000\n> --- ./results/float8.out\tTue Apr 17 20:09:17 2001\n> ***************\n> *** 214,220 ****\n> SET f1 = FLOAT8_TBL.f1 * '-1'\n> WHERE FLOAT8_TBL.f1 > '0.0';\n> SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;\n> ! ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n> SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> ERROR: pow() result is out of range\n> SELECT '' AS bad, ln(f.f1) from FLOAT8_TBL f where f.f1 = '0.0' ;\n> --- 214,220 ----\n> SET f1 = FLOAT8_TBL.f1 * '-1'\n> WHERE FLOAT8_TBL.f1 > '0.0';\n> SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;\n> ! ERROR: Bad float8 input format -- overflow\n> SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> ERROR: pow() result is out of range\n> SELECT '' AS bad, ln(f.f1) from FLOAT8_TBL f where f.f1 = '0.0' ;\n\nThat's fairly strange. It doesn't seem to have a problem with the\nconstant '1e200' as such --- notice that the next query gets the\nexpected result. But why would we get \"Bad float8 input format\"\nfor a calculation-result overflow? Ideas anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 13:51:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX\n\tv5.0A"
}
] |
[
{
"msg_contents": "\nNo, those don't do it. We need an actual NaN value. These are just\nflags, I think.\n\n\n\n> There are two things I found from fp_class.h, FP_SNAN (a signaling NaN), \n> and FP_QNAN (a quiet NaN). Don't know which you want:\n> alphapc.ourservers.net> grep FP_SNAN /usr/include/*\n> /usr/include/fp_class.h:#define FP_SNAN 0\n> alphapc.ourservers.net> grep FP_QNAN /usr/include/*\n> /usr/include/fp_class.h:#define FP_QNAN 1\n> \n> >> There is an isnan() function on v5.0a.\n> >> I'll be sending you the manpage.\n> >\n> >The problem is that we have to assign NAN to variables sometimes. That\n> >is why were were doing 0.0/0.0, to generate an NAN that could be used\n> >later in the code.\n> >\n> >I understand your compiler is saying it can't compute that at compile\n> >time. Maybe a computation that is preformed to set the value somehow.\n> >\n> >>\n> >> >\n> >> >OK, looks like we have a Tru64 problem with 7.1 too. Can you tell us\n> >> >how to get a NAN value. Zero is not it. I see the following mentions\n> >> >of NAN in the code. 
Does NAN exist in one of your /usr/include files?\n> >> >\n> >> >\n> >> >\n> >> >include/port/qnx4.h:18:#ifndef NAN\n> >> >include/port/qnx4.h:24:#define NAN (*(const double *) __nan)\n> >> >include/port/qnx4.h:25:#endif /* NAN */\n> >> >include/port/solaris.h:43:#ifndef NAN\n> >> >include/port/solaris.h:51:#define NAN \\\n> >> >include/port/solaris.h:58:#define NAN (0.0/0.0)\n> >> >include/port/solaris.h:62:#endif /* not NAN */\n> >> >include/utils/timestamp.h:64:#ifdef NAN\n> >> >include/utils/timestamp.h:65:#define DT_INVALID (NAN)\n> >> >include/utils/timestamp.h:80:#ifdef NAN\n> >> >include/utils/timestamp.h:104:#ifdef NAN\n> >> >backend/port/qnx4/isnan.c:22: return !memcmp(&dsrc, &NAN, sizeof(double));\n> >> >backend/utils/adt/float.c:106:#ifndef NAN\n> >> >backend/utils/adt/float.c:107:#define NAN (0.0/0.0)\n> >> >backend/utils/adt/float.c:251: val = NAN;\n> >> >backend/utils/adt/numeric.c:45:#ifndef NAN\n> >> >backend/utils/adt/numeric.c:46:#define NAN (0.0/0.0)\n> >> >backend/utils/adt/numeric.c:1718: PG_RETURN_FLOAT8(NAN);\n> >> >backend/utils/adt/numeric.c:1763: PG_RETURN_FLOAT4((float4) NAN);\n> >> >backend/utils/adt/numeric.c:2269: var->sign = NUMERIC_POS; /* anything but NAN... */\n> >> >\n> >> >\n> >> >\n> >> >> It _doesn't_ compile on Tru64:\n> >> >> gmake -C adt SUBSYS.o\n> >> >> gmake[4]: Entering directory `/usr/users/dcarmich/postgresql-7.1/src/\n> >> >> backend/utils/adt'\n> >> >> cc -std -O4 -Olimit 2000 -I../../../../src/include -c -o float.o float.c\n> >> >> cc: Error: float.c, line 251: In this statement, the libraries on this\n> >> >> platform do not yet support compile-time evaluation of the constant\n> >> >> expression \"0.0/0.0\". 
(constfoldns)\n> >> >> val = NAN;\n> >> >> ------------------------------^\n> >> >> gmake[4]: *** [float.o] Error 1\n> >> >> gmake[4]: Leaving directory `/usr/users/dcarmich/postgresql-7.1/src/backend\n> >> >> /utils/adt'\n> >> >> gmake[3]: *** [adt-recursive] Error 2\n> >> >> gmake[3]: Leaving directory `/usr/users/dcarmich/postgresql-7.1/src/backend\n> >> >> /utils'\n> >> >> gmake[2]: *** [utils-recursive] Error 2\n> >> >> gmake[2]: Leaving directory `/usr/users/dcarmich/postgresql-7.1/src/\n> >> >> backend'\n> >> >> gmake[1]: *** [all] Error 2\n> >> >> gmake[1]: Leaving directory `/usr/users/dcarmich/postgresql-7.1/src'\n> >> >> gmake: *** [all] Error 2\n> >> >>\n> >> >> (Sorry about the quoting, don't know how to remove it in VMS EDT.)\n> >> >>\n> >> >> >\n> >> >> >Please try 7.1. It should work fine.\n> >> >> >\n> >> >> >> ============================================================================\n> >> >> >> POSTGRESQL BUG REPORT TEMPLATE\n> >> >> >> ============================================================================\n> >> >> >>\n> >> >> >>\n> >> >> >> Your name\t\t:\tDouglas Carmichael\n> >> >> >> Your email address\t:\tdcarmich@ourservers.net\n> >> >> >>\n> >> >> >>\n> >> >> >> System Configuration\n> >> >> >> ---------------------\n> >> >> >> Architecture (example: Intel Pentium) \t: DEC/Compaq Alpha\n> >> >> >>\n> >> >> >> Operating System (example: Linux 2.0.26 ELF) \t: Compaq Tru64 UNIX v5.0A rev 1094\n> >> >> >>\n> >> >> >> PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL-7.0.3\n> >> >> >>\n> >> >> >> Compiler used (example: gcc 2.8.0)\t\t: Compaq C T6.4-212 (dtk)\n> >> >> >>\n> >> >> >>\n> >> >> >> Please enter a FULL description of your problem:\n> >> >> >> ------------------------------------------------\n> >> >> >>\n> >> >> >> I have patches to src/backend/utils/adt/float.c and\n> >> >> >> src/backend/utils/adt/numeric.c to get PostgreSQL 7.0.3 to compile on Tru64\n> >> >> >> v5.0A with Compaq C T6.4-212 (dtk).\n> >> >> >>\n> >> 
>> >> Please describe a way to repeat the problem. Please try to provide a\n> >> >> >> concise reproducible example, if at all possible:\n> >> >> >> ----------------------------------------------------------------------\n> >> >> >>\n> >> >> >>\n> >> >> >> N/A\n> >> >> >>\n> >> >> >>\n> >> >> >> If you know how this problem might be fixed, list the solution below:\n> >> >> >> ---------------------------------------------------------------------\n> >> >> >>\n> >> >> >>\n> >> >> >> diff -cr postgresql-7.0.3_old/src/backend/utils/adt/float.c postgresql-7.0.3/src/backend/utils/adt/float.c\n> >> >> >> *** postgresql-7.0.3_old/src/backend/utils/adt/float.c\tWed Apr 12 12:15:49 2000\n> >> >> >> --- postgresql-7.0.3/src/backend/utils/adt/float.c\tFri Apr 13 22:13:10 2001\n> >> >> >> ***************\n> >> >> >> *** 68,74 ****\n> >> >> >> #include \"utils/builtins.h\"\n> >> >> >>\n> >> >> >> #ifndef NAN\n> >> >> >> ! #define NAN\t\t(0.0/0.0)\n> >> >> >> #endif\n> >> >> >>\n> >> >> >> #ifndef SHRT_MAX\n> >> >> >> --- 68,74 ----\n> >> >> >> #include \"utils/builtins.h\"\n> >> >> >>\n> >> >> >> #ifndef NAN\n> >> >> >> ! #define NAN\t\t0\n> >> >> >> #endif\n> >> >> >>\n> >> >> >> #ifndef SHRT_MAX\n> >> >> >> diff -cr postgresql-7.0.3_old/src/backend/utils/adt/numeric.c postgresql-7.0.3/src/backend/utils/adt/numeric.c\n> >> >> >> *** postgresql-7.0.3_old/src/backend/utils/adt/numeric.c\tWed Apr 12 12:15:50 2000\n> >> >> >> --- postgresql-7.0.3/src/backend/utils/adt/numeric.c\tFri Apr 13 22:15:55 2001\n> >> >> >> ***************\n> >> >> >> *** 41,47 ****\n> >> >> >> #endif\n> >> >> >>\n> >> >> >> #ifndef NAN\n> >> >> >> ! #define NAN\t\t(0.0/0.0)\n> >> >> >> #endif\n> >> >> >>\n> >> >> >>\n> >> >> >> --- 41,47 ----\n> >> >> >> #endif\n> >> >> >>\n> >> >> >> #ifndef NAN\n> >> >> >> ! 
#define NAN\t 0\n> >> >> >> #endif\n> >> >> >>\n> >> >> >> ---------------------------(end of broadcast)---------------------------\n> >> >> >> TIP 2: you can get off all lists at once with the unregister command\n> >> >> >> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >> >> >>\n> >> >> >\n> >> >> >\n> >> >> >--\n> >> >> >Bruce Momjian | http://candle.pha.pa.us\n> >> >> >pgman@candle.pha.pa.us | (610) 853-3000\n> >> >> >+ If your life is a hard drive, | 830 Blythe Avenue\n> >> >> >+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >> >>\n> >> >\n> >> >\n> >> >--\n> >> >Bruce Momjian | http://candle.pha.pa.us\n> >> >pgman@candle.pha.pa.us | (610) 853-3000\n> >> >+ If your life is a hard drive, | 830 Blythe Avenue\n> >> >+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >>\n> >\n> >\n> >--\n> >Bruce Momjian | http://candle.pha.pa.us\n> >pgman@candle.pha.pa.us | (610) 853-3000\n> >+ If your life is a hard drive, | 830 Blythe Avenue\n> >+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Apr 2001 01:22:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX \n\tv5.0A with Compaq C T6.4-212 (dtk)"
},
{
"msg_contents": "> No, those don't do it. We need an actual NaN value. These are just\n> flags, I think.\n> > >> >> gmake -C adt SUBSYS.o\n> > >> >> gmake[4]: Entering directory `/usr/users/dcarmich/postgresql-7.1/src/\n> > >> >> backend/utils/adt'\n> > >> >> cc -std -O4 -Olimit 2000 -I../../../../src/include -c -o float.o float.c\n> > >> >> cc: Error: float.c, line 251: In this statement, the libraries on this\n> > >> >> platform do not yet support compile-time evaluation of the constant\n> > >> >> expression \"0.0/0.0\". (constfoldns)\n> > >> >> val = NAN;\n> > >> >> ------------------------------^\n\nWhere does the \"-O4\" come from? That level of optimization probably is\nforcing the compile-time constant folding, which is causing trouble.\n\nTry backing off to \"-O2\" or turn off optimization all together and I'll\nbet it will compile. Another possibility is to go into the adt/\nsubdirectory, compile float.o by cutting and pasting the line above\n(substituting -O0 for -O4) and then go back up and resume the make from\nthe top.\n\n - Thomas\n",
"msg_date": "Mon, 16 Apr 2001 15:23:47 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX\n\tv5.0A with Compaq C T6.4-212 (dtk)"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> cc -std -O4 -Olimit 2000 -I../../../../src/include -c -o float.o float.c\n> cc: Error: float.c, line 251: In this statement, the libraries on this\n> platform do not yet support compile-time evaluation of the constant\n> expression \"0.0/0.0\". (constfoldns)\n\n> Where does the \"-O4\" come from? That level of optimization probably is\n> forcing the compile-time constant folding, which is causing trouble.\n\nLooks like it's coming from src/template/osf. If Douglas can confirm\nthat a lower -O level makes the compiler complaint go away, then we need\nto change that template.\n\nBTW, the other arm of the osf template looks pretty bogus too: isn't it\nforcing no optimizations for gcc? I'd have expected CFLAGS=-O2 for gcc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Apr 2001 12:47:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Patch for PostgreSQL 7.0.3 to compile on Tru64 UNIX\n\tv5.0A with Compaq C T6.4-212 (dtk)"
}
] |
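The patch in the thread above substitutes `0` for `NAN`, and the follow-up notes that this is wrong: the backend needs an actual NaN value, not a flag. A short illustration of what the `#define NAN 0` stand-in loses, sketched in Python rather than the thread's C (the bit pattern used is the standard IEEE 754 quiet NaN, not anything PostgreSQL-specific):

```python
import math
import struct

# IEEE 754 NaN is unordered: it compares unequal even to itself. A stand-in
# value of 0 (as in the proposed `#define NAN 0` patch) loses this property.
def is_nan(x: float) -> bool:
    """Detect NaN without math.isnan, using the self-inequality rule."""
    return x != x

# Building a NaN at runtime sidesteps the compile-time constant folding of
# 0.0/0.0 that the Tru64 compiler rejected; here we decode the canonical
# quiet-NaN bit pattern instead of dividing.
runtime_nan = struct.unpack(">d", bytes.fromhex("7ff8000000000000"))[0]

assert is_nan(runtime_nan)
assert math.isnan(runtime_nan)
assert not is_nan(0.0)   # the patched stand-in behaves like any number
```

This is also why Thomas's suggestion of lowering the optimization level is the better fix: it leaves the real `0.0/0.0` expression in place and merely stops the compiler from trying to fold it at compile time.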
[
{
"msg_contents": "May I please ask you guys a question about Postgres.pm. Right now I'm\nworking on a Red Hat 6.2, Apache 1.3.9 and Perl5. I've finally got\nApache setup which was no easy task even with a $80.00 Mohawk GUI\nadministration front end. But now I get the following from my perl/cgi\nprogram error:\n\nCan't locate loadable object for module Postgres in @INC (@INC contains:\n\n/usr/lib/perl5/5.00503/i386-linux\n/usr/lib/perl5/5.00503\n/usr/lib/perl5/site_perl/5.005/i386-linux\n/usr/lib/perl5/site_perl/5.005\n\nAnd the script_errors.log says:\nCan't locate Postgres.pm in @INC.....\n\nI moved a Postgres.pm from Solaris to all 4 of this directories in @INC\nand it is still not \"found.\" I did a word search at www.postgresql.org\nfor Postgres.pm and found nothing. I did find in 1998 someone had\nexactly\nthis problem with Postgres95 at:\n\nhttp://www.postgresql.org/mhonarc/pgsql-admin/1998-05/msg00063.html\n\nI loaded postgres using the download postgresql-7.0.3.tar.gz.\n\nCan anyone point me in the right direction?\n\nInfinite Thanks for your help and a super database product :o)\n\nAllan in Belgium\n\n\n\n",
"msg_date": "Mon, 16 Apr 2001 16:48:07 +0200",
"msg_from": "\"Allan C Huffman\" <huffman@ppc.pims.org>",
"msg_from_op": true,
"msg_subject": "RedHat/Postgres.pm Error"
},
{
"msg_contents": "Allan C Huffman wrote:\n\n> May I please ask you guys a question about Postgres.pm.\n\nWhat is Postgres.pm? Talking about DBD::Pg or something completely\ndifferent?\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 17 Apr 2001 14:17:14 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: RedHat/Postgres.pm Error"
}
] |
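Perl's `@INC` and Python's `sys.path` play the same role: an ordered list of directories searched for a module file. A module that is "not found" is either in no listed directory, or present but unloadable -- a compiled extension (the "loadable object" in Allan's error) built for another platform, as with a Solaris `Postgres.pm` copied onto Linux, cannot work and generally needs to be rebuilt locally. A minimal Python sketch of the search-path mechanism (the module name here is invented for the demo):

```python
import sys
import tempfile
from pathlib import Path

# Create a throwaway directory containing a module, add it to the search
# path, and import it -- analogous to `use lib '...'` before `use Postgres;`
# in Perl. Without the path entry, the import would fail exactly like the
# "Can't locate ... in @INC" error in the report.
with tempfile.TemporaryDirectory() as d:
    Path(d, "toymod.py").write_text("VALUE = 42\n")
    sys.path.insert(0, d)
    import toymod
    assert toymod.VALUE == 42
    sys.path.remove(d)
```

For pure-Python (or pure-Perl) modules this is the whole story; for modules with a compiled component, the shared object must also be built for the running platform.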
[
{
"msg_contents": "I have looked and I have looked, it is not immediately clear to me how integer\narrays are passed to C function.\n\ncreate table fubar (vars integer[]) ;\n\nselect c_function(vars) from fubar;\n\ninsert into fubar (vars) values ('{1,2,3,4,5,6}');\n\n\n........\n\nextern \"C\" c_function (varlena var)\n{\n\tint * pn = (int *)VARDATA(var);\n}\n\nNow, what should \"pn\" have in it? I don't see my values until later on in the\narray. I guess I am asking is what is the format of this type, and more\nimportantly, where is it documented. I looked in catalog and pg_types but it\nwasn't clear it was defined there.\n\n\n-- \n\nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Mon, 16 Apr 2001 15:18:55 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "integer arrays"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I have looked and I have looked, it is not immediately clear to me how\n> integer arrays are passed to C function.\n\nSee src/include/utils/array.h, also src/backend/utils/adt/arrayfuncs.c\nand src/backend/utils/adt/arrayutils.c. Beware: this code is pretty\nmessy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Apr 2001 15:49:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: integer arrays "
}
] |
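The reason mlw "doesn't see the values until later" is that an array datum carries header fields before the element data, so `VARDATA()` does not point straight at the integers. PostgreSQL's actual layout (per `src/include/utils/array.h`) includes the total size, number of dimensions, and per-dimension sizes and lower bounds; the toy model below only illustrates the general length-prefixed pattern, and its exact field layout is invented for the demo:

```python
import struct

# Toy model of a length-prefixed array datum: a total-size word, then two
# header ints (item count and lower bound), then the elements. The point is
# that element data starts at a fixed offset past the headers.
def pack_int_array(values):
    lbound = 1
    body = struct.pack(f"<ii{len(values)}i", len(values), lbound, *values)
    header = struct.pack("<i", 4 + len(body))   # total size incl. this word
    return header + body

def unpack_int_array(datum):
    total, = struct.unpack_from("<i", datum, 0)
    assert total == len(datum)
    nitems, lbound = struct.unpack_from("<ii", datum, 4)
    return list(struct.unpack_from(f"<{nitems}i", datum, 12))

datum = pack_int_array([1, 2, 3, 4, 5, 6])
assert unpack_int_array(datum) == [1, 2, 3, 4, 5, 6]
```

In the C function from the question, the fix is to interpret the header fields (via the macros in `array.h`) before treating the rest as an `int` array.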
[
{
"msg_contents": "The ability to place database objects into a logical partitioning of \ndata. For example, in Oracle, each user creates tables, views, \nsequences, synonyms, and snapshots in their own schema. So if I were \nto create a table called 'Employees', I could query it as:\n\nSELECT * FROM employees;\n\nBut another user would have to query it as:\n\nSELECT * FROM mascarm.employees;\n\nA common case for this is to logically divide schema by departments. \nYou could do that now in PostgreSQL in the form of multiple \ndatabases, but you couldn't query across them. For example, you might \nhave an \"Accounting\" schema, and an \"Inventory\" schema. \nOccassionally, the accountants need to join tables from accounting \nw/inventory. The inventory people (or the dba) would then grant \nappropriate privileges for the accountants to do that, but the \naccounts would have to fully qualify their queries:\n\nSELECT * FROM inventory.orders;\n\nSo, if you want a logical division that also contain some shared \ntables, views, or sequences (and hopefully snapshots, some day), in \nOracle, you can create public synonyms for the shared objects:\n\nCREATE PUBLIC SYNONYM employees FOR mascarm.employees;\n\nNow, anyone can query this table as:\n\nSELECT * FROM employees;\n\nIts a namespace thing, basically.\n\nHope that helps,\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tChristopher Kings-Lynne [SMTP:chriskl@familyhealth.com.au]\nSent:\tMonday, April 16, 2001 10:17 PM\nTo:\tpgsql-hackers@postgresql.org\nSubject:\tRE: [HACKERS] Truncation of object names\n\nCall me thick as two planks, but when you guys constantly refer to \n'schema\nsupport' in PostgreSQL, what exactly are you referring to?\n\nChris\n\n",
"msg_date": "Mon, 16 Apr 2001 22:51:42 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": true,
"msg_subject": "RE: Truncation of object names "
}
] |
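The resolution rules Mike describes can be sketched as a small lookup model: an unqualified name is tried in the caller's own schema first, then among public synonyms, while a qualified name goes straight to the named schema. This models the Oracle behaviour in the message, not any PostgreSQL release of the time (which, as the thread says, lacked schemas):

```python
# Toy namespace-resolution model; schema contents are placeholder strings.
schemas = {
    "mascarm":   {"employees": "mascarm's employee table"},
    "inventory": {"orders": "inventory's order table"},
}
public_synonyms = {"employees": ("mascarm", "employees")}

def resolve(name, current_user):
    if "." in name:                      # qualified: schema.object
        schema, obj = name.split(".", 1)
        return schemas[schema][obj]
    if name in schemas.get(current_user, {}):
        return schemas[current_user][name]   # own schema wins
    if name in public_synonyms:          # CREATE PUBLIC SYNONYM ... FOR ...
        schema, obj = public_synonyms[name]
        return schemas[schema][obj]
    raise LookupError(name)

assert resolve("employees", "mascarm") == "mascarm's employee table"
assert resolve("employees", "accountant") == "mascarm's employee table"
assert resolve("inventory.orders", "accountant") == "inventory's order table"
```

The public synonym is what lets a user other than `mascarm` write the unqualified `SELECT * FROM employees` from the message.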
[
{
"msg_contents": "Hi,\n\nWhile doing some testing with Postgresql 7.1, I noticed some perculiar \nbehaviour with the JDBC driver. Selecting a single record from a table is \n5-10 times slower than doing an insert (even if the table only contains a \nsingle record, and the select query does not contain any join). On Oracle \ndatabase, an insert is always slower than a select when running the same \ntest.\n\nAt the moment, I've narrowed down the time consuming section of the code to \nExecSQL() method in the org.postgresql.Connection class. There seems to be a \n\"while\" loop over a \"switch\" statement which I yet need to figure out what \nit is doing. Anybody has any idea why this section of code is so slow?\n\nNote: I'm using Mandrake 7.2, IBM JDK 1.3 on 1Ghz Athlon with 256 RAM.\nThe Postgresql 7.1 is compiled with multibyte character turned on.\nThe Select statement itself takes 1-5 seconds which is almost 100 times \nslower than Oracle on the same PC. This is even without opening connection \nfactored in.\n\nThomas Hii\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Tue, 17 Apr 2001 12:13:05 +0800",
"msg_from": "\"Thomas Hii\" <duke_firehawk@hotmail.com>",
"msg_from_op": true,
"msg_subject": "JDBC Select Performance"
}
] |
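Thomas's approach of narrowing the slow SELECT down to one method (`ExecSQL()`) generalizes: time each stage of the round trip separately so the cost can be attributed to query building, protocol handling, or result decoding rather than to the query itself. A generic micro-benchmark harness (the stages here are placeholder stand-ins, not the JDBC driver's actual steps):

```python
import time

def time_stage(fn, repeats=1000):
    """Average per-call wall time of fn over `repeats` calls."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# Stand-in stages; in a real investigation these would wrap the driver's
# statement construction, socket read loop, and row decoding.
stages = {
    "build":  lambda: "SELECT * FROM t WHERE id = %d" % 42,
    "decode": lambda: [int(x) for x in b"1,2,3".split(b",")],
}
per_call = {name: time_stage(fn) for name, fn in stages.items()}
assert all(t > 0 for t in per_call.values())
```

Comparing per-stage averages against the end-to-end time makes a "5-10x slower than insert" claim diagnosable instead of mysterious.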
[
{
"msg_contents": "CREATE FUNCTION userHasAll (int4,int4) RETURNS boolean AS '\nDECLARE\n row RECORD;\n kirakorow kirakok%ROWTYPE;\n userID ALIAS FOR $1;\n kirakoID ALIAS FOR $2;\n megvan int4:=0;\n kepdarabok INTEGER:=0;\n query text;\nBEGIN\n SELECT * INTO kirakorow FROM kirakok WHERE kirako_id=kirakoID;\n-- this works\n\n IF NOT FOUND THEN\n RAISE EXCEPTION ''Invalid kirakoID'';\n RETURN ''f'';\n END IF;\n\n kepdarabok:=kirakorow.kepdarabokx*kirakorow.kepdaraboky;\n megvan:=0;\n\n FOR row IN EXECUTE ''SELECT count(*) AS hits FROM talalatok WHERE userid='''''' || userID || '''''' AND jatek='''''' || kirakoID || '''''';'' LOOP\n-- this works too but if you replace it with the following row :\n-- FOR row IN SELECT count(*) AS hits FROM talalatok WHERE userid=userID AND jatek=kirakoID LOOP\n-- this executes as if the following query was issued\n-- FOR row IN SELECT count(*) AS hits FROM talalatok WHERE jatek=kirakoID LOOP\n megvan:=row.hits;\n END LOOP;\n-- the same applies to inline queries too. if issued with execute\n-- everything is fine, but if the query has more than one arguments\n-- the compiler dismisses all, except the last one\n \n IF megvan<>kepdarabok THEN\n RETURN ''f'';\n END IF;\n \n RETURN ''t'';\nEND;\n' LANGUAGE 'plpgsql';\n\n\n",
"msg_date": "Tue, 17 Apr 2001 10:04:05 +0200",
"msg_from": "Lehel Gyuro <lehel@bin.hu>",
"msg_from_op": true,
"msg_subject": "plpgsql problem"
},
{
"msg_contents": "Lehel Gyuro <lehel@bin.hu> writes:\n> -- the same applies to inline queries too. if issued with execute\n> -- everything is fine, but if the query has more than one arguments\n> -- the compiler dismisses all, except the last one\n\nThis is more than slightly hard to believe. There are thousands of\npeople using plpgsql, and you're the first to notice that it loses all\nbut the last WHERE qualifier? Nyet. There's more to it than that,\nsurely.\n\nPerhaps you could provide a *complete* example? The text of the\nfunction is far from enough to let someone else try to reproduce\nyour problem. We need a script that creates all the referenced\ntables, and puts sample data in them, and creates and invokes the\nfunction with appropriate test data. And perhaps you could tell us\nwhat output you got and what you expected to get, and why that led\nyou to conclude that there is a failure of the above-claimed form.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 00:30:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql problem "
},
{
"msg_contents": "\nMy guess is that since userid and userID differ only in\ncase, it's probably not actually using the aliased version\nand instead is using only the column one. \n\nThe execute is different since you're effectively putting\nthe *value* of userID into the query as opposed to the word.\nI'd suggest renaiming the alias and seeing if that works.\n\nOn Tue, 17 Apr 2001, Lehel Gyuro wrote:\n\n> CREATE FUNCTION userHasAll (int4,int4) RETURNS boolean AS '\n> DECLARE\n> row RECORD;\n> kirakorow kirakok%ROWTYPE;\n> userID ALIAS FOR $1;\n> kirakoID ALIAS FOR $2;\n> megvan int4:=0;\n> kepdarabok INTEGER:=0;\n> query text;\n> BEGIN\n> SELECT * INTO kirakorow FROM kirakok WHERE kirako_id=kirakoID;\n> -- this works\n> \n> IF NOT FOUND THEN\n> RAISE EXCEPTION ''Invalid kirakoID'';\n> RETURN ''f'';\n> END IF;\n> \n> kepdarabok:=kirakorow.kepdarabokx*kirakorow.kepdaraboky;\n> megvan:=0;\n> \n> FOR row IN EXECUTE ''SELECT count(*) AS hits FROM talalatok WHERE userid='''''' || userID || '''''' AND jatek='''''' || kirakoID || '''''';'' LOOP\n> -- this works too but if you replace it with the following row :\n> -- FOR row IN SELECT count(*) AS hits FROM talalatok WHERE userid=userID AND jatek=kirakoID LOOP\n> -- this executes as if the following query was issued\n> -- FOR row IN SELECT count(*) AS hits FROM talalatok WHERE jatek=kirakoID LOOP\n> megvan:=row.hits;\n> END LOOP;\n> -- the same applies to inline queries too. if issued with execute\n> -- everything is fine, but if the query has more than one arguments\n> -- the compiler dismisses all, except the last one\n> \n> IF megvan<>kepdarabok THEN\n> RETURN ''f'';\n> END IF;\n> \n> RETURN ''t'';\n> END;\n> ' LANGUAGE 'plpgsql';\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n",
"msg_date": "Tue, 17 Apr 2001 22:54:42 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql problem"
}
] |
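Stephan's diagnosis is that plpgsql case-folds identifiers, so in `WHERE userid=userID` both sides resolve to the *column*, making the predicate trivially true -- exactly as if it had been dropped, which matches Lehel's observation. The EXECUTE form works because it interpolates the parameter's *value* into the query text. A toy model of the collision (plain Python dicts standing in for table rows):

```python
rows = [
    {"userid": 1, "jatek": 7},
    {"userid": 2, "jatek": 7},
]

def count_hits(table, predicate):
    return sum(1 for row in table if predicate(row))

wanted_user = 2
# Collided names: the "parameter" case-folds onto the column itself, so the
# comparison is column-to-column and every row matches.
collided = count_hits(rows, lambda row: row["userid"] == row["userid"])
# Distinct names: the parameter value actually constrains the rows.
distinct = count_hits(rows, lambda row: row["userid"] == wanted_user)

assert collided == 2   # every row matched, as the bug report observed
assert distinct == 1
```

Renaming the alias (say, `p_userid`) so it no longer collides with the column name restores the intended comparison without needing EXECUTE.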
[
{
"msg_contents": "\n> I was thinking SET because UPDATE does an auto-lock.\n\nOptimal would imho be a SET that gives a maximum amount of time in seconds \nthe client is willing to wait for any lock. But I liked the efficiency of Henryk's code.\n\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I can imagine some people wanting this. However, 7.1 has new deadlock\n> > > detection code, so I would you make a 7.1 version and send it over. We\n> > > can get it into 7.2.\n> > \n> > I object strongly to any such \"feature\" in the low-level form that\n> > Henryk proposes, because it would affect *ALL* locking. Do you really\n> > want all your other transactions to go belly-up if, say, someone vacuums\n> > pg_class?\n\nYes, if a non batch client already blocked for over x seconds. Of course a more\nsophisticated client can send querycancel() but that involves a more complicated\nprogram (threads, timer ...).\n\n> > \n> > A variant of LOCK TABLE that explicitly requests a timeout might make\n> > sense, though.\n\nI do not think that a solution for one particular lock is very helpful. If your dml then \nblocks on some unforseen lock (parse, plan ...) , the client is in exactly the situation \nit tried to avoid in the first place.\n\nAndreas\n",
"msg_date": "Tue, 17 Apr 2001 10:26:25 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: timeout on lock feature"
},
{
"msg_contents": "Added to TODO:\n\n\t* Add SET parameter to timeout if waiting for lock too long \n \n> \n> > I was thinking SET because UPDATE does an auto-lock.\n> \n> Optimal would imho be a SET that gives a maximum amount of time in seconds \n> the client is willing to wait for any lock. But I liked the efficiency of Henryk's code.\n> \n> > \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > I can imagine some people wanting this. However, 7.1 has new deadlock\n> > > > detection code, so I would you make a 7.1 version and send it over. We\n> > > > can get it into 7.2.\n> > > \n> > > I object strongly to any such \"feature\" in the low-level form that\n> > > Henryk proposes, because it would affect *ALL* locking. Do you really\n> > > want all your other transactions to go belly-up if, say, someone vacuums\n> > > pg_class?\n> \n> Yes, if a non batch client already blocked for over x seconds. Of course a more\n> sophisticated client can send querycancel() but that involves a more complicated\n> program (threads, timer ...).\n> \n> > > \n> > > A variant of LOCK TABLE that explicitly requests a timeout might make\n> > > sense, though.\n> \n> I do not think that a solution for one particular lock is very helpful. If your dml then \n> blocks on some unforseen lock (parse, plan ...) , the client is in exactly the situation \n> it tried to avoid in the first place.\n> \n> Andreas\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Apr 2001 10:14:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: timeout on lock feature"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Added to TODO:\n> \t* Add SET parameter to timeout if waiting for lock too long\n\nI repeat my strong objection to any global (ie, affecting all locks)\ntimeout. Such a \"feature\" will have unpleasant consequences.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 10:48:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: timeout on lock feature "
},
{
"msg_contents": "This option will be OPTIONAL.\n\nTom Lane wrote in message <23613.987518927@sss.pgh.pa.us>...\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Added to TODO:\n>> * Add SET parameter to timeout if waiting for lock too long\n>\n>I repeat my strong objection to any global (ie, affecting all locks)\n>timeout. Such a \"feature\" will have unpleasant consequences.\n>\n> regards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n",
"msg_date": "Wed, 18 Apr 2001 10:55:19 +0200",
"msg_from": "\"Henryk Szal\" <szal@doctorq.com.pl>",
"msg_from_op": false,
"msg_subject": "Re: AW: timeout on lock feature"
}
] |
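The alternative Andreas alludes to -- the client bounding its own wait instead of a server-enforced global lock timeout -- can be sketched with a lock that supports a bounded acquire. This is a generic concurrency illustration in Python, not PostgreSQL's lock manager:

```python
import threading
import time

resource = threading.Lock()
held = threading.Event()

def worker_holding_lock(hold_for):
    # Simulates another session holding a conflicting lock.
    with resource:
        held.set()
        time.sleep(hold_for)

holder = threading.Thread(target=worker_holding_lock, args=(0.3,))
holder.start()
held.wait()                                # holder definitely has the lock

# Bounded wait, in the spirit of the proposed SET TIMEOUT: give up after
# 0.1s and choose an alternate course of action instead of blocking.
acquired = resource.acquire(timeout=0.1)
if acquired:
    resource.release()
holder.join()

assert acquired is False   # the bounded wait gave up rather than block
```

Tom's counterpoint in the thread still applies to the real system: a per-lock bound is hard to pick well, and an interactive cancel is usually the better escape hatch.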
[
{
"msg_contents": "Hi guys,\n\nI've just come up with a hypothetical which, in my opinion, points to a\nflaw in the foreign key implementation in Postgres. All tests were\nconducted on 7.1beta4 -- not the most up to date, but I have seen no\nreference to this in the mailing list/todo (ie, in 'foreign' under\nTODO.detail).\n\nSee as follows:\n\ntest=# create table a (a int, primary key(a));\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'a_pkey' for\ntable\n'a'\nCREATE\ntest=# create table b (b int references a(a) match full, primary key(b));\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'b_pkey' for\ntable\n'b'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\ntest=# insert into a values(1);\nINSERT 1754732 1\ntest=# insert into a values(2);\nINSERT 1754733 1\ntest=# insert into a values(3);\nINSERT 1754734 1\ntest=# insert into b values(1);\nINSERT 1754735 1\ntest=# insert into b values(2);\nINSERT 1754736 1\ntest=# delete from a;\nERROR: <unnamed> referential integrity violation - key in a still\nreferenced from b\ntest=# select * from a;\n a\n---\n 1\n 2\n 3\n\n\n----\n\nNow, table a has more tuples than b. In my opinion, the integrity test\nrelates only to those records in a which are in b (since it is a foreign\nkey reference). Isn't then the query valid for those tuples which do not\nresult in a violation of the referential integrity test? Shouldn't those\ntuples in a be deleted?\n\nGavin\n\n\n",
"msg_date": "Tue, 17 Apr 2001 18:58:36 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Foreign key checks/referential integrity."
},
{
"msg_contents": "\nHi,\n\n...\n> key reference). Isn't then the query valid for those tuples which do not\n> result in a violation of the referential integrity test? Shouldn't those\n> tuples in a be deleted?\n\nThe \"all or nothing\" approach causes this. And _here_ **I think** its\ncorrect behaviour. (IMHO user and backend transactions are not the same\nthing).\n\n// Jarmo\n\n",
"msg_date": "Tue, 17 Apr 2001 21:08:06 +0200",
"msg_from": "\"Jarmo Paavilainen\" <netletter@comder.com>",
"msg_from_op": false,
"msg_subject": "SV: Foreign key checks/referential integrity."
}
] |
[
{
"msg_contents": "No, they shouldn't. If you want to delete only those tuples that aren't\nreferenced in b then you must explicitly say so:\n\ndelete from a where not exists (select * from b where b.b = a.a);\n\nThe query that you tried will explicitly delete all rows from a, thus\nviolating the constraint on b. If even one row fails, then the transaction\nfails, rolling back any other deletes that may have been successful.\n\nCheers...\n\n\nMikeA\n\n\n\n>> -----Original Message-----\n>> From: Gavin Sherry [mailto:swm@linuxworld.com.au]\n>> Sent: 17 April 2001 09:59\n>> To: pgsql-hackers@postgresql.org\n>> Subject: [HACKERS] Foreign key checks/referential integrity.\n>> \n>> \n>> Hi guys,\n>> \n>> I've just come up with a hypothetical which, in my opinion, \n>> points to a\n>> flaw in the foreign key implementation in Postgres. All tests were\n>> conducted on 7.1beta4 -- not the most up to date, but I have seen no\n>> reference to this in the mailing list/todo (ie, in 'foreign' under\n>> TODO.detail).\n>> \n>> See as follows:\n>> \n>> test=# create table a (a int, primary key(a));\n>> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index \n>> 'a_pkey' for\n>> table\n>> 'a'\n>> CREATE\n>> test=# create table b (b int references a(a) match full, \n>> primary key(b));\n>> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index \n>> 'b_pkey' for\n>> table\n>> 'b'\n>> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n>> check(s)\n>> CREATE\n>> test=# insert into a values(1);\n>> INSERT 1754732 1\n>> test=# insert into a values(2);\n>> INSERT 1754733 1\n>> test=# insert into a values(3);\n>> INSERT 1754734 1\n>> test=# insert into b values(1);\n>> INSERT 1754735 1\n>> test=# insert into b values(2);\n>> INSERT 1754736 1\n>> test=# delete from a;\n>> ERROR: <unnamed> referential integrity violation - key in a still\n>> referenced from b\n>> test=# select * from a;\n>> a\n>> ---\n>> 1\n>> 2\n>> 3\n>> \n>> \n>> ----\n>> \n>> Now, table a has 
more tuples than b. In my opinion, the \n>> integrity test\n>> relates only to those records in a which are in b (since it \n>> is a foreign\n>> key reference). Isn't then the query valid for those tuples \n>> which do not\n>> result in a violation of the referential integrity test? \n>> Shouldn't those\n>> tuples in a be deleted?\n>> \n>> Gavin\n>> \n>> \n>> \n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to \n>> majordomo@postgresql.org)\n>> \n\n\n_________________________________________________________________________\nThis e-mail and any attachments are confidential and may also be privileged and/or copyright \nmaterial of Intec Telecom Systems PLC (or its affiliated companies). If you are not an \nintended or authorised recipient of this e-mail or have received it in error, please delete \nit immediately and notify the sender by e-mail. In such a case, reading, reproducing, \nprinting or further dissemination of this e-mail is strictly prohibited and may be unlawful. \nIntec Telecom Systems PLC. does not represent or warrant that an attachment hereto is free \nfrom computer viruses or other defects. The opinions expressed in this e-mail and any \nattachments may be those of the author and are not necessarily those of Intec Telecom \nSystems PLC. \n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses. \n__________________________________________________________________________",
"msg_date": "Tue, 17 Apr 2001 11:34:24 +0100",
"msg_from": "Michael Ansley <Michael.Ansley@intec-telecom-systems.com>",
"msg_from_op": true,
"msg_subject": "RE: Foreign key checks/referential integrity."
}
] |
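The two behaviours discussed in this thread -- a DELETE that would orphan even one referencing row failing as a whole, and deleting only unreferenced rows via NOT EXISTS -- can be reproduced with the stdlib `sqlite3` module (not PostgreSQL, but the statement-atomicity and NOT EXISTS semantics at issue are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # sqlite3 requires opting in
conn.execute("CREATE TABLE a (a INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE b (b INTEGER PRIMARY KEY REFERENCES a(a))")
conn.executemany("INSERT INTO a VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO b VALUES (?)", [(1,), (2,)])

# Gavin's DELETE: rows 1 and 2 are still referenced, so the whole
# statement fails -- including the delete of the unreferenced row 3.
try:
    conn.execute("DELETE FROM a")
except sqlite3.IntegrityError:
    pass   # statement rolled back as a unit
after_failed = [r[0] for r in conn.execute("SELECT a FROM a ORDER BY a")]
assert after_failed == [1, 2, 3]

# Michael's suggested form: explicitly delete only what nothing references.
conn.execute("DELETE FROM a WHERE NOT EXISTS"
             " (SELECT 1 FROM b WHERE b.b = a.a)")
remaining = [r[0] for r in conn.execute("SELECT a FROM a ORDER BY a")]
assert remaining == [1, 2]
conn.close()
```

This is the "all or nothing" point from Jarmo's reply: the constraint check does not filter the statement down to the permissible rows; it vetoes the statement.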
[
{
"msg_contents": "Hi\n\nI have been volunteered to give a talk at a Linux conference \n(http://www.linuxafrica.co.za).\n\nThe context is the following\n\nA Comparative Analysis of Opensource and Proprietry Database Technologies\n\nSeveral options exist for selecting a database on Opensource operating\nsystems such as Linux and FreeBSD. This talk is geared towards allowing\na suitable choice of database to be made depending on the context of use.\nThe talk focusses on the technical differences amoung various database \ntechnologies.\n\nAny pointers to existing material and references would be much appreciated.\n\nPlease respond directly to theo@flame.co.za\n\nRegards\nTheo\n\nPS My choice is PostgreSql!\n",
"msg_date": "Tue, 17 Apr 2001 14:27:31 +0200 (SAST)",
"msg_from": "Theo Kramer <theo@flame.co.za>",
"msg_from_op": true,
"msg_subject": "Talk on Open Source vs Proprietry databases"
}
] |
[
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I envisioned:\n\n> \tSET TIMEOUT TO 10;\n> \tUPDATE tab SET col = 3;\n> \tRESET TIMEOUT\n\n> Can't we get that work work properly? Let the timeout only apply to the\n> 'tab' table and none of the others.\n\nAs Henryk has implemented it, it WON'T only apply to the 'tab' table;\nit'll affect all locks grabbed anywhere, including those that the system\nlocks internally. That scares the heck out of me, Andreas' objections\nnotwithstanding.\n\n> Can't we exclude system tables from being affected by the timeout?\n\nHow will you do that? The lock manager makes a point of not knowing the\nsemantics associated with any particular lock tag. It's even less\nlikely to know the difference between a \"system\" grab and a \"user\" grab\non what might be the very same lock (consider an \"UPDATE pg_class\"\ncommand).\n\n> Requiring a LOCK statement that matches\n> the UPDATE/DELETE and wrapping the whole thing in a transaction seems\n> needlessly complex to me.\n\nAs opposed to your three-step proposal above? That doesn't look\nvery much simpler to me...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 11:16:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: AW: timeout on lock feature "
}
] |
[
{
"msg_contents": "\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Added to TODO:\n> > > \t* Add SET parameter to timeout if waiting for lock too long\n> > \n> > I repeat my strong objection to any global (ie, affecting all locks)\n> > timeout. Such a \"feature\" will have unpleasant consequences.\n> \n> I envisioned:\n> \n> \tSET TIMEOUT TO 10;\n> \tUPDATE tab SET col = 3;\n> \tRESET TIMEOUT\n> \n> Can't we get that work work properly? Let the timeout only apply to the\n> 'tab' table and none of the others. Can't we exclude system tables from\n> being affected by the timeout?\n\nWhy exactly would you be willing to wait longer for an implicit system table lock?\nIf this was the case you should also be willing to wait for the row lock, no ?\nThe timeout will be useful to let the client or user decide on an alternate course \nof action other that killing his application (without the need for timers or threads in \nthe client program).\n\n> Requiring a LOCK statement that matches\n> the UPDATE/DELETE and wrapping the whole thing in a transaction seems\n> needlessly complex to me.\n\nAgreed.\n\nAndreas\n",
"msg_date": "Tue, 17 Apr 2001 17:20:25 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: timeout on lock feature"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> The timeout will be useful to let the client or user decide on an\n> alternate course of action other that killing his application (without\n> the need for timers or threads in the client program).\n\nThis assumes (without evidence) that the client has a good idea of what\nthe timeout limit ought to be. I think this \"feature\" has no real use\nother than encouraging application programmers to shoot themselves in\nthe foot. I see no reason that we should make it easy to misdesign\napplications.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 11:31:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: timeout on lock feature "
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> The timeout will be useful to let the client or user decide on an\n> alternate course of action other than killing his application (without\n> the need for timers or threads in the client program).\n\nOkay, let's take a close look at this assumption.\n\n1. Why is 10 seconds (or 1, or 30) a magic number? If you've waited\nthat long, why wouldn't you be willing to wait a little longer? How\nwill you know what value to pick?\n\n2. If you do want a timeout to support an interactive application, seems\nto me that you want to specify it as a total time for the whole query,\nnot the maximum delay to acquire any individual lock. Under normal\ncircumstances lock delays are likely to be just a small part of total\nquery time.\n\n3. Since we already have deadlock detection, there is no need for\ntimeouts as a defense against deadlock. A timeout would only be useful\nto defend against other client applications that are sitting idle or\nexecuting long-running operations while holding locks that conflict\nwith your real-time query. This scenario strikes me as a flaw in the\noverall application design, which should be fixed by fixing those other\nclients and/or the lock usage. A lock timeout is just a bandaid\nto cope (poorly) with broken application design.\n\n4. The correct way to deal with overly-long queries in an interactive\napplication is to let the user interactively cancel queries (which we\nalready support). This is much better than any application-specified\nfixed timeout, because the application is unlikely to be aware of\nextenuating circumstances --- say, the system is heavily overloaded at\nthe moment because of lots of activity. I can think of few things more\nannoying than an application-set timeout that kills my unfinished query\nwhenever the system is under load.\n\nIn short, I think lock timeout is a solution searching in vain for a\nproblem. 
If we implement it, we are just encouraging bad application\ndesign.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 12:56:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: timeout on lock feature "
},
{
"msg_contents": "On Tue, Apr 17, 2001 at 12:56:11PM -0400, Tom Lane wrote:\n> In short, I think lock timeout is a solution searching in vain for a\n> problem. If we implement it, we are just encouraging bad application\n> design.\n\nI agree with Tom completely here.\n\nIn any real-world application the database is the key component of a \nlarger system: the work it does is the most finicky, and any mistakes\n(either internally or, more commonly, from misuse) have the most \nfar-reaching consequences. The responsibility of the database is to \nprovide a reliable and easily described and understood mechanism to \nbuild on. \n\nTimeouts are a system-level mechanism that to be useful must refer to \nsystem-level events that are far above anything that PG knows about. \nThe only way PG could apply reasonable timeouts would be for the \napplication to dictate them, but the application can better implement \nthem itself.\n\nYou can think of this as another aspect of the \"end-to-end\" principle: \nany system-level construct duplicated in a lower-level system component \ncan only improve efficiency, not provide the corresponding high-level \nservice. If we have timeouts in the database, they should be there to\nenable the database to better implement its abstraction, and not pretend \nto be a substitute for system-level timeouts.\n\nThere's no upper limit on how complicated a database interface can\nbecome (cf. Oracle). The database serves its users best by having \nthe simplest interface that can possibly provide the needed service. \n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Tue, 17 Apr 2001 14:01:19 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "> Timeouts are a system-level mechanism that to be useful must refer to \n> system-level events that are far above anything that PG knows about. \n> The only way PG could apply reasonable timeouts would be for the \n> application to dictate them, but the application can better implement \n> them itself.\n\nOK we have the following scenario\n\n Session A Session B\n\n begin begin\n\n insert -- on unique constraint\n\n insert -- on same unique constraint\n\n -- Session A becomes idle\n\n : -- Session B becomes ...\n\n\nor we have (Informix Online)\n\n Session A Session B\n\n set lock mode to wait [seconds] set lock mode to wait [seconds]\n\n begin begin \n\n insert -- on unique constraint\n\n insert -- on same unique constraint\n\n * resource not available error *\n\n -- Session B carries on\n\nOracle 7 (OCI) has oopt() call to set wait options for requested\nresources. Oracle 8 OCI has the same behaviour as PG ie. oopt() \nis no longer available.\n\nI believe that the ability to switch the database to either not wait\nfor resources, or wait a specified period or wait forever \n(default) is essential especially for interactive applications.\n\nRegards\nTheo\n",
"msg_date": "Wed, 18 Apr 2001 06:34:54 +0200 (SAST)",
"msg_from": "Theo Kramer <theo@flame.co.za>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
}
] |
[
{
"msg_contents": "\n> > I envisioned:\n> \n> > \tSET TIMEOUT TO 10;\n> > \tUPDATE tab SET col = 3;\n> > \tRESET TIMEOUT\n> \n> > Can't we get that to work properly? Let the timeout only \n> apply to the\n> > 'tab' table and none of the others.\n> \n> As Henryk has implemented it, it WON'T only apply to the 'tab' table;\n> it'll affect all locks grabbed anywhere, including those that the system\n> locks internally. That scares the heck out of me, Andreas' objections\n> notwithstanding.\n\nWhat exactly scares you? Surely the deadlock resolution should\nhandle the above decision to ABORT in the same way it currently does.\nIf not we have something to fix, no?\n\nOf course this might rather be something to consider for 7.2 and not 7.1.1.\n\nAndreas\n",
"msg_date": "Tue, 17 Apr 2001 17:29:30 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: timeout on lock feature "
}
] |
[
{
"msg_contents": "\n> > Added to TODO:\n> > \t* Add SET parameter to timeout if waiting for lock too long\n> \n> I repeat my strong objection to any global (ie, affecting all locks)\n> timeout. Such a \"feature\" will have unpleasant consequences.\n\nExcept that other people like myself, see those consequences \nas a pleasant thing :-) And we are talking about something that has to be\nrequested by the client explicitly (at least in a default installation).\n\nIt simply does make sense for an interactive client to not block \nmore than ~ 30 seconds.\n\nAndreas\n",
"msg_date": "Tue, 17 Apr 2001 17:30:13 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: timeout on lock feature "
}
] |
[
{
"msg_contents": "Greetings. Which JDBC driver allows me to connect Matlab and PostgreSQL,\nand where can I find and install it?\nWould you have some simple example of the code that I should add to Matlab?\nThank you.\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Tue, 17 Apr 2001 15:32:30 ",
"msg_from": "\"Francisco Alvarez\" <faove@hotmail.com>",
"msg_from_op": true,
"msg_subject": "doubts"
}
] |
[
{
"msg_contents": "\n> > The timeout will be useful to let the client or user decide on an\n> > alternate course of action other than killing his application (without\n> > the need for timers or threads in the client program).\n> \n> This assumes (without evidence) that the client has a good idea of what\n> the timeout limit ought to be.\n\nYes, the application programmer would need to know how long his users\nare willing to wait before they start to \"hammer at their monitors, kick their pc's \nor throw the mouse across the room :-O\". It must be made clear that it is very \ncounterproductive to use timeouts that are too short. \n\nAndreas\n",
"msg_date": "Tue, 17 Apr 2001 17:37:47 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: timeout on lock feature "
}
] |
[
{
"msg_contents": "In our DB schema we have defined a class of tables containing important\ndata for which we'd like to keep an audit trail of any change. These\ntables have the following inheritance structure:\n\n +----> <table> (real, live table with constraints)\n<table>_type |\n +----> <table>_archive (archive without any constraints)\n\nThe parent <table>_type contains no data, is only used to define the\ncolumns common to <table> and <table>_archive.\n\nOn each UPDATE or DELETE to any <table> we would like to record the\nmodified/deleted row as is in the <table>_archive.\n\nHere is the trigger function that I'm working on:\n\n\tcreate function archive_row() returns opaque as '\n\tDECLARE\n\t\trec RECORD;\n\t\t/* initialise future query string\n\t\t */\n\t\tatt text := ''INSERT INTO '';\n\tBEGIN\n\t\t/* prepare the query, converting <table> to <table>_archive\n\t\t */\n\t\tatt := att || TG_RELNAME || ''_archive VALUES ('';\n\t\t/* get all column names for trigger <table> through PG system tables\n\t\t */\n\t\tFOR rec IN SELECT a.attname FROM pg_class c, pg_attribute a \n\t\t\t\tWHERE c.relname = TG_RELNAME AND a.attnum > 0 \n\t\t\t\tAND a.attrelid = c.oid ORDER BY a.attnum LOOP\n\t/*\t\tRAISE NOTICE ''column name for % is %'', TG_RELNAME, rec.attname;*/\n\t\t\tatt := att || ''OLD.'' || rec.attname || '','';\n\t\tEND LOOP;\n\t\t/* remove last comma, add closing paren\n\t\t */\n\t\tatt := rtrim(att,'','') || '')'';\n\t\tRAISE NOTICE ''query is %'', att;\n\t\tEXECUTE att;\n\t\tRETURN NEW;\n\tEND;\n\t' language 'plpgsql';\n\nThe EXECUTE gives the following error:\n\n\tpsql:archive.sql:40: ERROR: OLD used in non-rule query\n\nThe best solution would be to simply do:\n\n\tINSERT INTO table_archive SELECT OLD.*;\n\nbut it doesn't work.\n\nIs there a clean solution in pl/pgsql or should I directly try in C?\n\n-- \n THERAMENE: Prends soin après ma mort de ma chère Aricie.\n Cher ami, si mon père un jour désabusé\n Plaint le malheur d'un fils faussement accusé,\n Pour 
 apaiser mon sang et mon ombre plaintive,\n Dis-lui qu'avec douceur il traite sa captive,\n Qu'il lui rende... A ce mot ce héros expiré\n N'a laissé dans mes bras qu'un corps défiguré,\n Triste objet, où des Dieux triomphe la colère,\n Et que méconnaîtrait l'oeil même de son père.\n (Phèdre, J-B Racine, acte 5, scène 6)\n",
"msg_date": "Tue, 17 Apr 2001 18:00:05 +0200",
"msg_from": "Louis-David Mitterrand <vindex@apartia.ch>",
"msg_from_op": true,
"msg_subject": "row archiving trigger function"
}
] |
[
{
"msg_contents": "Michael Ansley <Michael.Ansley@intec-telecom-systems.com> writes:\n> Sorry for my forgetfulness (and a search through geocrawler didn't turn up\n> anything useful), but what was the problem with something like NOWAIT?\n> e.g.: SELECT * FROM a FOR UPDATE NOWAIT;\n> where, if the required lock could not be obtained immediately, this\n> statement would raise an error.\n\nI have no objection to that ... it does not cover anything except FOR\nUPDATE, though, which is probably less general than some of the other\nfolks want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 12:08:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: AW: timeout on lock feature "
},
{
"msg_contents": "Sorry for my forgetfulness (and a search through geocrawler didn't turn up\nanything useful), but what was the problem with something like NOWAIT?\n\ne.g.: SELECT * FROM a FOR UPDATE NOWAIT;\n\nwhere, if the required lock could not be obtained immediately, this\nstatement would raise an error.\n\nCheers...\n\n\nMikeA\n\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n>> Sent: 17 April 2001 15:49\n>> To: Bruce Momjian\n>> Cc: Zeugswetter Andreas SB; Henryk Szal; pgsql-hackers@postgresql.org\n>> Subject: Re: AW: [HACKERS] timeout on lock feature \n>> \n>> \n>> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> > Added to TODO:\n>> > \t* Add SET parameter to timeout if waiting for lock too long\n>> \n>> I repeat my strong objection to any global (ie, affecting all locks)\n>> timeout. Such a \"feature\" will have unpleasant consequences.\n>> \n>> \t\t\tregards, tom lane\n>> \n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: Have you checked our extensive FAQ?\n>> \n>> http://www.postgresql.org/users-lounge/docs/faq.html\n>> \n\n\n_________________________________________________________________________\nThis e-mail and any attachments are confidential and may also be privileged and/or copyright \nmaterial of Intec Telecom Systems PLC (or its affiliated companies). If you are not an \nintended or authorised recipient of this e-mail or have received it in error, please delete \nit immediately and notify the sender by e-mail. In such a case, reading, reproducing, \nprinting or further dissemination of this e-mail is strictly prohibited and may be unlawful. \nIntec Telecom Systems PLC. does not represent or warrant that an attachment hereto is free \nfrom computer viruses or other defects. The opinions expressed in this e-mail and any \nattachments may be those of the author and are not necessarily those of Intec Telecom \nSystems PLC. 
\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses. \n__________________________________________________________________________",
"msg_date": "Tue, 17 Apr 2001 17:09:55 +0100",
"msg_from": "Michael Ansley <Michael.Ansley@intec-telecom-systems.com>",
"msg_from_op": false,
"msg_subject": "RE: AW: timeout on lock feature "
}
] |
[
{
"msg_contents": "\nHi,\n\nI have just modified the jdbc 7.1rc4 source to let the PreparedStatement\nhandle null values in setXXX methods gracefully...\n\nAccording to JDBC a setXXX method should send a NULL if a null value is\nsupplied (and not raise an exception or other)\n\nHow can I contribute?\n\nJeroen\n",
"msg_date": "Tue, 17 Apr 2001 18:38:20 +0200",
"msg_from": "Jeroen Habets <jeroen.habets@framfab.nl>",
"msg_from_op": true,
"msg_subject": "Modified driver to better handle NULL values... "
},
{
"msg_contents": "Send over a context diff and we can get it into 7.2. You may want to\nshoot it to the JDBC list too.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> Hi,\n> \n> I have just modified the jdbc 7.1rc4 source to let the PreparedStatement\n> handle null values in setXXX methods gracefully...\n> \n> According to JDBC a setXXX method should send a NULL if a null value is\n> supplied (and not raise an exception or other)\n> \n> How can I contribute?\n> \n> Jeroen\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 15:44:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Modified driver to better handle NULL values...y"
}
] |
[
{
"msg_contents": "I just read on -general that it is envisioned to have a SET command to\ntemporarily change the effective user id (for superusers only), so that\npg_dump generated scripts won't have to use \\connect and be forced to run\nunder excessively loose permissions.\n\nThis isn't hard to do, in fact we probably only need a command to call an\nalready existing function. I dug around in SQL for a name and the closest\nthing was\n\nSET SESSION AUTHORIZATION <value specification> (clause 18.2)\n\nTerminology note: In SQL 'real user' == SESSION_USER, 'effective user' ==\nCURRENT_USER. So this command doesn't do it. But the logical choice\nwould obviously be\n\nSET CURRENT AUTHORIZATION <value specification>\n\nThis is nice, but the other end of the plan doesn't actually want to play\nalong. In clause 11.1 SR 2b) it is described that the owner of a new\nschema defaults to the *session* user. (Note that at the end of the day\ntables and other lowly objects won't have an owner anymore. In any case\nthey should currently behave in that aspect as schemas would.)\n\nI say we ignore this requirement, since it's not consistent with Unix\nanyway (Files created by setuid programs are owned by the euid.) and it\nwould destroy our nice plan. ;-)\n\nAnother restriction would be that a current user change cannot survive the\nend of a transaction. This is implied by the semantics of \"suid\"\nfunctions and the way we handle exceptions (elog). It could probably be\nhelped by saving the state of the \"authorization stack\" at the start of a\ntransaction. But I'm not sure whether this would be a desirable feature\nto have in the first place. Most schema commands are rollbackable now, so\nmaybe this won't be a large restriction for pg_dump's purposes.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Apr 2001 18:48:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Real/effective user"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Terminology note: In SQL 'real user' == SESSION_USER, 'effective user' ==\n> CURRENT_USER.\n\nNot sure about that. I suspect that we actually need three values:\n\n1. \"real user\" = what you originally authenticated to the postmaster.\n\n2. \"session user\" = what you can SET if your real identity is a superuser.\n\n3. \"current user\" = effective userid for permission checks.\n\ncurrent user is the value that would be pushed and popped during calls\nto setuid functions. The big reason for distinguishing current and\nsession user is that session user is what current user needs to revert to\nafter an elog.\n\nWhether SQL's SESSION_USER corresponds to the first or second of these\nconcepts remains to be determined.\n\n> This is nice, but the other end of the plan doesn't actually want to play\n> along. In clause 11.1 SR 2b) it is described that the owner of a new\n> schema defaults to the *session* user.\n\nI think we could still accept that, if we distinguish session and\ncurrent user as above. (I have not yet read the spec to see if it\nagrees though ;-))\n\nWhether this is a good idea is another question; if a setuid function\ndoes a CREATE, shouldn't the created object be owned by the setuid user?\nI'm not sure that I *want* to accept the SQL spec on this point.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 14:15:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Real/effective user "
},
{
"msg_contents": "Tom Lane writes:\n\n> 1. \"real user\" = what you originally authenticated to the postmaster.\n>\n> 2. \"session user\" = what you can SET if your real identity is a superuser.\n>\n> 3. \"current user\" = effective userid for permission checks.\n\nWe could have a Boolean variable \"authenticated user is superuser\" which\nwould serve as the permission to execute SET SESSION AUTHENTICATION, while\nwe would not actually be making the identity of the real/authenticated\nuser available (so as to not confuse things unnecessarily).\n\n> if a setuid function\n> does a CREATE, shouldn't the created object be owned by the setuid user?\n> I'm not sure that I *want* to accept the SQL spec on this point.\n\nMe neither.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 18 Apr 2001 21:36:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Real/effective user "
},
{
"msg_contents": "I proclaimed:\n\n> Tom Lane writes:\n>\n> > 1. \"real user\" = what you originally authenticated to the postmaster.\n> >\n> > 2. \"session user\" = what you can SET if your real identity is a superuser.\n> >\n> > 3. \"current user\" = effective userid for permission checks.\n>\n> We could have a Boolean variable \"authenticated user is superuser\" which\n> would serve as the permission to execute SET SESSION AUTHENTICATION, while\n> we would not actually be making the identity of the real/authenticated\n> user available (so as to not confuse things unnecessarily).\n\nI have implemented this; it seems to do what we need:\n\n$ ~/pg-install/bin/psql -U peter\n\npeter=# set session authorization 'joeblow';\nSET VARIABLE\npeter=# create table foo (a int);\nCREATE\npeter=# \\dt\n List of relations\n Name | Type | Owner\n-------+-------+---------\n foo | table | joeblow\n test | table | peter\n test2 | table | peter\n(3 rows)\n\nLibpq's PQuser() can no longer be trusted for up to date information, so\npsql's prompt, if set up that way, may be wrong, but I'm not sure whether\nthis is worth worrying about.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 21 Apr 2001 17:43:02 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "SET SESSION AUTHORIZATION (was Re: Real/effective user)"
},
{
"msg_contents": "On Sat, Apr 21, 2001 at 05:43:02PM +0200, Peter Eisentraut wrote:\n\n> I have implemented this; it seems to do what we need:\n> \n> $ ~/pg-install/bin/psql -U peter\n> \n> peter=# set session authorization 'joeblow';\n> SET VARIABLE\n> peter=# create table foo (a int);\n> CREATE\n> peter=# \\dt\n> List of relations\n> Name | Type | Owner\n> -------+-------+---------\n> foo | table | joeblow\n> test | table | peter\n> test2 | table | peter\n> (3 rows)\n\n\n Great! With this feature it is possible to use a persistent connection and \nchange the actual user on the fly, right? It's very useful, for example, for a\nweb application that checks user privileges via the SQL layer.\n \n\nI have a question: what happens with this code:\n\n(connected as superuser)\n\n set session authorization 'userA';\n set session authorization 'userB';\n\nIMHO it must be disabled; the right way must be something like:\n\n set session authorization 'userA';\n unset session authorization;\t\t<-- switch back to superuser \n set session authorization 'userB';\n\n...like on Linux:\n\n# su - zakkr\n$ id -u\n1000\n$ su - jmarek\nPassword:\nsu: Authentication failure\nSorry.\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 23 Apr 2001 11:54:41 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: SET SESSION AUTHORIZATION (was Re: Real/effective user)"
},
{
"msg_contents": "Karel Zak writes:\n\n> Great! With this feature it is possible to use a persistent connection and\n> change the actual user on the fly, right? It's very useful, for example,\n> for a web application that checks user privileges via the SQL layer.\n\nA real persistent connection solution would require real session\nmanagement, especially the ability to reset configuration options made\nduring the previous session.\n\n> (connected as superuser)\n>\n> set session authorization 'userA';\n> set session authorization 'userB';\n>\n> IMHO it must be disabled; the right way must be something like:\n>\n> set session authorization 'userA';\n> unset session authorization;\t\t<-- switch back to superuser\n> set session authorization 'userB';\n\nYou can't \"unset\" the session user, there must always be one because there\nis nothing below it.\n\n> ...like on Linux:\n>\n> # su - zakkr\n> $ id -u\n> 1000\n> $ su - jmarek\n> Password:\n> su: Authentication failure\n> Sorry.\n\nThe difference here is that 'su' also starts a new session but set session\nauthorization changes the state of the current session. So 'su' is\nsimilar to\n\nSTART SESSION; -- Don't know if this is the syntax.\nSET SESSION AUTHORIZATION 'xxx';\n\nall in one command. When and if we get real session management we will\nprobably have the ability to revert user identity changes in the way you\nimagine.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 23 Apr 2001 23:20:23 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: SET SESSION AUTHORIZATION (was Re: Real/effective\n user)"
}
] |
[
{
"msg_contents": "\n Hi:\n\n One of the most obvious things that a database should do is to check\nfor data integrity and values, but programming with PHP and other\nlanguages I see that we duplicate this task: once on the client side\n(with javascript), sometimes on the server side (in PHP) and finally\non the database server.\n\n I think the server-side check could be made more easily if the database \nserver could return error messages in a way we can handle. For \nexample, giving an error id (also helpful for translation) and the column and \nrow affected.\n\n Could this be implemented in Postgres?\n\n-- \nVíctor R. Ruiz\nrvr@idecnet.com\n",
"msg_date": "Tue, 17 Apr 2001 17:52:34 +0100",
"msg_from": "\"=?ISO-8859-1?Q?V=EDctor?= R. Ruiz\" <rvr@infoastro.com>",
"msg_from_op": true,
"msg_subject": "Handling error messages"
}
] |
[
{
"msg_contents": "\nHi,\nWith PGSQL you can authenticate with KRB5 or a proprietary /etc/passwd\nIs it planned to use other methods of authenticating?\nLDAP?\nPAM?\nSASL?\n\nI'm in the process of centralizing authentication of users for all our\nservices but it's not possible in the current state of PGSQL.\n\nThanks for any idea\n-jec\n\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \nJean-Eric Cuendet\nLinkvest SA\nAv des Baumettes 19, 1020 Renens Switzerland\nTel +41 21 632 9043 Fax +41 21 632 9090\nhttp://www.linkvest.com E-mail: jean-eric.cuendet@linkvest.com\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \n\n\n",
"msg_date": "Tue, 17 Apr 2001 18:56:45 +0200",
"msg_from": "Jean-Eric Cuendet <Jean-Eric.Cuendet@linkvest.com>",
"msg_from_op": true,
"msg_subject": "Authentication with PGSQL"
},
{
"msg_contents": "Jean-Eric Cuendet <Jean-Eric.Cuendet@linkvest.com> writes:\n> With PGSQL you can authenticate with KRB5 or a proprietary /etc/passwd\n> Is it planned to use other methods of authenticating?\n> LDAP?\n> PAM?\n> SASL?\n\nI seem to recall some discussion of PAM support. Want to do the work?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 15:35:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Authentication with PGSQL "
}
] |
[
{
"msg_contents": "> > Added to TODO:\n> > \t* Add SET parameter to timeout if waiting for lock too long\n> \n> I repeat my strong objection to any global (ie, affecting all locks)\n> timeout. Such a \"feature\" will have unpleasant consequences.\n\nBut LOCK TABLE T IN ROW EXCLUSIVE MODE WITH TIMEOUT X will not give\nrequired results not only due to parser/planner locks - what if\nUPDATE T will have to wait for other transactions commit/abort\n(same row update)? Lock on pseudo-table is acquired in this case...\n\nVadim\n",
"msg_date": "Tue, 17 Apr 2001 10:14:25 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: AW: timeout on lock feature "
}
] |
[
{
"msg_contents": "This one probably needs the 'iron hand and the velvet paw' touch. The\niron hand to pound some sense into the author, and the velvet paw to\nmake him like having sense pounded into him. Title of article is 'Open\nSource Databases Won't Fly' --\nhttp://www.dqindia.com/content/enterprise/datawatch/101041201.asp\n\nHe seems to be ok on some things, though.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 17 Apr 2001 13:31:43 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Another news story in need of 'enlightenment'"
},
{
"msg_contents": "On Tue, Apr 17, 2001 at 01:31:43PM -0400, Lamar Owen wrote:\n> This one probably needs the 'iron hand and the velvet paw' touch. The\n> iron hand to pound some sense into the author, and the velvet paw to\n> make him like having sense pounded into him. Title of article is 'Open\n> Source Databases Won't Fly' --\n> http://www.dqindia.com/content/enterprise/datawatch/101041201.asp\n\nThis one is best just ignored. \n\nIt's content-free, just his frightened opinions. The only thing \nthat will change his mind is the improvements planned for releases \n7.2 and 7.3, and lots of deployments. Few will read his rambling.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Tue, 17 Apr 2001 11:13:01 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Another news story in need of 'enlightenment'"
},
{
"msg_contents": "Thus spake Lamar Owen\n> This one probably needs the 'iron hand and the velvet paw' touch. The\n> iron hand to pound some sense into the author, and the velvet paw to\n> make him like having sense pounded into him. Title of article is 'Open\n> Source Databases Won't Fly' --\n> http://www.dqindia.com/content/enterprise/datawatch/101041201.asp\n\nI'm not sure there was even a point in there. The article was rambling\nand undirected. Completely apart from any content, this just seemed like\na badly written article. I'm not sure that it even merits consideration\nin this forum.\n\nI guess anyone can be published on the net.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 18 Apr 2001 05:18:23 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: Another news story in need of 'enlightenment'"
},
{
"msg_contents": "\nI can't seem to get at the original anymore, but we talked to Dr.\nSoparkar, and a 'followup' to the article has been posted at:\n\nhttp://linuxtoday.com/news_story.php3?ltsn=2001-04-16-009-21-PS-EL-HE-0038\n\nSince I can't seem to get to the original on dqindia.com, I can't comment\non what's changed ...\n\nOn Wed, 18 Apr 2001, D'Arcy J.M. Cain wrote:\n\n> Thus spake Lamar Owen\n> > This one probably needs the 'iron hand and the velvet paw' touch. The\n> > iron hand to pound some sense into the author, and the velvet paw to\n> > make him like having sense pounded into him. Title of article is 'Open\n> > Source Databases Won't Fly' --\n> > http://www.dqindia.com/content/enterprise/datawatch/101041201.asp\n>\n> I'm not sure there was even a point in there. The article was rambling\n> and undirected. Completely apart from any content, this just seemed like\n> a badly written article. I'm not sure that it even merits consideration\n> in this forum.\n>\n> I guess anyone can be published on the net.\n>\n> --\n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Wed, 18 Apr 2001 09:20:35 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Another news story in need of 'enlightenment'"
},
{
"msg_contents": "> On Tue, Apr 17, 2001 at 01:31:43PM -0400, Lamar Owen wrote:\n> > This one probably needs the 'iron hand and the velvet paw' touch. The\n> > iron hand to pound some sense into the author, and the velvet paw to\n> > make him like having sense pounded into him. Title of article is 'Open\n> > Source Databases Won't Fly' --\n> > http://www.dqindia.com/content/enterprise/datawatch/101041201.asp\n> \n> This one is best just ignored. \n> \n> It's content-free, just a his frightened opinions. The only thing \n> that will change his mind is the improvements planned for releases \n> 7.2 and 7.3, and lots of deployments. Few will read his rambling.\n\nMy head hurt after I read it. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 15:52:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another news story in need of 'enlightenment'"
}
] |
[
{
"msg_contents": "I noticed a quite strange behaviour of to_date() in 7.0 and 7.1. It treats \nabbreviated forms of a date completely wrong. Example:\n\n-- this one is ok\nmario=# select to_date('04.01.2001', 'dd.mm.yyyy');\n to_date\n------------\n 2001-01-04\n\n-- this is completely wrong, but NO error raised\nmario=# select to_date('4.01.2001', 'dd.mm.yyyy');\n to_date\n------------\n 0001-01-04\n\n-- completely wrong as well\nmario=# select to_date('4.1.2001', 'dd.mm.yyyy');\n to_date\n------------\n 0001-01-04\n\n\nIMO to_date() should either recognize the date, even if shorter than the mask \n(Oracle compatible), or raise an error. Currently it gives completely wrong \nresults, which is the worst option.\n\nI tried to fix this myself, but I'm lost within backend/utils/adt/formatting.c\n\n\n-- \n===================================================\n Mario Weilguni                 KPNQwest Austria GmbH\n Senior Engineer Web Solutions  Nikolaiplatz 4\n tel: +43-316-813824            8020 graz, austria\n fax: +43-316-813824-26         http://www.kpnqwest.at\n e-mail: mario.weilguni@kpnqwest.com\n===================================================\n",
"msg_date": "Tue, 17 Apr 2001 19:46:19 +0200",
"msg_from": "Mario Weilguni <mweilguni@sime.com>",
"msg_from_op": true,
"msg_subject": "Strange behaviour of to_date()"
},
{
"msg_contents": "On Tue, Apr 17, 2001 at 07:46:19PM +0200, Mario Weilguni wrote:\n> I noticed a quite strange behaviour of to_char() in 7.0 and 7.1. It treats \n> abbreveated forms of a date completely wrong. Example:\n> \n> -- this one is ok\n> mario=# select to_date('04.01.2001', 'dd.mm.yyyy');\n> to_date\n> ------------\n> 2001-01-04\n> \n> -- this is completly wrong, but NO error raised\n> mario=# select to_date('4.01.2001', 'dd.mm.yyyy');\n> to_date\n> ------------\n> 0001-01-04\n> \n> -- completly wrong as well\n> mario=# select to_date('4.1.2001', 'dd.mm.yyyy');\n> to_date\n> ------------\n> 0001-01-04\n \n\n Is it really a bug? Look at what you obtain from 'dd.mm.yyyy' in to_char():\n\ntest=# select to_char('04.01.2001'::date, 'dd.mm.yyyy');\n to_char\n------------\n 04.01.2001\n(1 row)\n\n\n '04.01.2001' and '4.1.2001' are *different* strings with *different*\nformat masks....\n\n\n See (and read the docs):\n\ntest=# select to_char('04.01.2001'::date, 'FMdd.FMmm.yyyy');\n to_char\n----------\n 4.1.2001\n(1 row)\n\ntest=# select to_date('4.1.2001', 'FMdd.FMmm.yyyy');\n to_date\n------------\n 2001-01-04\n(1 row)\n\n\n Yes, Oracle supports using a non-exact format mask, but Oracle's to_date\nis strictly date/time based and does not support other things:\n\nSVRMGR> select to_date('333.222.4.1.2001', '333.222.FMdd.FMmm.yyyy') from\ndual;\nTO_DATE('\n---------\nORA-01821: date format not recognized\n\n\ntest=# select to_date('333.222.4.1.2001', '333.222.FMdd.FMmm.yyyy');\n to_date\n------------\n 2001-01-04\n(1 row)\n\nor, nicely:\n\ntest=# select to_date('33304333.1.2001', '333dd333.FMmm.yyyy');\n to_date\n------------\n 2001-01-04\n(1 row)\n\n\n And primarily, Oracle's to_date() is designed for operations that in\nPG are solved via a timestamp/date cast. For example, in Oracle you can use\nto_date('4.1.2001') without a format mask, and it's the same thing\nas '4.1.2001'::date or cast('4.1.2001' as date) in PG.\n\n The to_char()/to_date() functions work as the docs say :-)\n\n\n Better support for non-exact masks is on my TODO for 7.2.\n\n\t\t\tKarel \n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 18 Apr 2001 10:47:04 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Strange behaviour of to_date()"
},
{
"msg_contents": "On Wednesday, 18 April 2001 at 10:47, you wrote:\n(...)\n>\n> Yes, Oracle support using not exact format mask, but Oracle's to_date\n> is very based on date/time and not support others things:\n>\n> SVRMGR> select to_date('333.222.4.1.2001', '333.222.FMdd.FMmm.yyyy') from\n> dual;\n> TO_DATE('\n> ---------\n> ORA-01821: date format not recognized\n>\n>\n> test=# select to_date('333.222.4.1.2001', '333.222.FMdd.FMmm.yyyy');\n> to_date\n> ------------\n> 2001-01-04\n> (1 row)\n>\n> or nice:\n>\n> test=# select to_date('33304333.1.2001', '333dd333.FMmm.yyyy');\n> to_date\n> ------------\n> 2001-01-04\n> (1 row)\n\nMaybe it's not designed for my needs, but that does not change the fact that \nit's a bug. When the mask is not exact, it should raise an error, and not \nsilently return WRONG values, which is really bad behaviour, and will result \nin \"lost\" data.\n\n\n-- \n===================================================\n Mario Weilguni                 KPNQwest Austria GmbH\n Senior Engineer Web Solutions  Nikolaiplatz 4\n tel: +43-316-813824            8020 graz, austria\n fax: +43-316-813824-26         http://www.kpnqwest.at\n e-mail: mario.weilguni@kpnqwest.com\n===================================================\n",
"msg_date": "Wed, 18 Apr 2001 18:12:50 +0200",
"msg_from": "Mario Weilguni <Mario.Weilguni@kpnqwest.com>",
"msg_from_op": false,
"msg_subject": "Re: Strange behaviour of to_date()"
}
] |
[
{
"msg_contents": "> > The timeout will be useful to let the client or user decide\n> > on an alternate course of action other that killing his\n> > application (without the need for timers or threads in the\n> > client program).\n> \n> This assumes (without evidence) that the client has a good\n> idea of what the timeout limit ought to be. I think this \"feature\"\n> has no real use other than encouraging application programmers to\n> shoot themselves in the foot. I see no reason that we should make\n> it easy to misdesign applications.\n\nAFAIR, the Big Boys have this feature. If its implementation is safe,\ni.e. it will not affect applications not using it, why not implement it?\n\nVadim\n",
"msg_date": "Tue, 17 Apr 2001 11:12:08 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: AW: AW: timeout on lock feature "
},
{
"msg_contents": "Hi,\n\nFor 10 years I have been developing DB applications using the 'timeout on\nlock' feature (Informix, Ingres, AdabasD, RDB, ...).\nI am thinking about migrating this application to PostgreSQL, and with this\nfeature I would not need to modify my ready-to-run code specially for\nPostgreSQL. This feature guards me against blocked terminals caused by long\nqueries\ninitialized by an operator (or administrator).\n\n\"Mikheev, Vadim\" wrote in message\n<8F4C99C66D04D4118F580090272A7A234D33AC@sectorbase1.sectorbase.com>...\n>> > The timeout will be useful to let the client or user decide\n>> > on an alternate course of action other that killing his\n>> > application (without the need for timers or threads in the\n>> > client program).\n>>\n>> This assumes (without evidence) that the client has a good\n>> idea of what the timeout limit ought to be. I think this \"feature\"\n>> has no real use other than encouraging application programmers to\n>> shoot themselves in the foot. I see no reason that we should make\n>> it easy to misdesign applications.\n>\n>AFAIR, Big Boys have this feature. If its implementation is safe,\n>ie will not affect applications not using it, why do not implement it?\n>\n>Vadim\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://www.postgresql.org/search.mpl\n\n\n",
"msg_date": "Wed, 18 Apr 2001 10:53:20 +0200",
"msg_from": "\"Henryk Szal\" <szal@doctorq.com.pl>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: timeout on lock feature"
},
{
"msg_contents": "OK, we have it on the TODO list, so it will hopefully be added soon, in\nsome fashion. I like the SET or the BEGIN TIMEOUT options.\n\n> Hi,\n> \n> for 10 years i develop DB application using 'timeout on lock' feature\n> (Informix,Ingres,AdabasD,RDB,...).\n> I think about migrate with this application to postgresql, and with this\n> feature i don't need to modify my ready to run code specially for\n> postgresql. This feature guard me against blocking terminals, because long\n> query\n> initialized by operator (or administrator).\n> \n> \"Mikheev, Vadim\" wrote in message\n> <8F4C99C66D04D4118F580090272A7A234D33AC@sectorbase1.sectorbase.com>...\n> >> > The timeout will be useful to let the client or user decide\n> >> > on an alternate course of action other that killing his\n> >> > application (without the need for timers or threads in the\n> >> > client program).\n> >>\n> >> This assumes (without evidence) that the client has a good\n> >> idea of what the timeout limit ought to be. I think this \"feature\"\n> >> has no real use other than encouraging application programmers to\n> >> shoot themselves in the foot. I see no reason that we should make\n> >> it easy to misdesign applications.\n> >\n> >AFAIR, Big Boys have this feature. If its implementation is safe,\n> >ie will not affect applications not using it, why do not implement it?\n> >\n> >Vadim\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 6: Have you searched our list archives?\n> >\n> >http://www.postgresql.org/search.mpl\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Apr 2001 11:21:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: AW: timeout on lock feature"
}
] |
[
{
"msg_contents": "Hello Dave & friends,\n\nI am working on the PgAdmin query loader project writing as much possible \ncode server-side in PL/pgSQL.\n\nFor the purpose of function 'compilation' (let's call it like that), I \ncreate two temporary tables: compiler_function which holds the list of \nPL/PgSQL functions to compile,\nand compiler_dependency which holds the list of dependencies. After \ncompilation of functions, these two tables are dropped.\n\nTo find function dependencies, I need to run this (problematic) query on \neach function:\n\nCREATE FUNCTION pgadmin_comp_dependency_init (int4, text)\nRETURNS int4\nAS '\n\tDECLARE\n\t\t/* $1 holds the function iod,\n\t\t $2 holds the function name.*/\n\n\t\trec\t\trecord;\n\t\tv_query1 varchar;\n\t\tv_query2 varchar;\n BEGIN\n\t\tSELECT INTO rec\n\t\tcompiler_function.function_oid\n\t\tFROM compiler_function\n WHERE function_source ilike %$2%; /* <----- $2 \nholds the name of the function on which is performed a dependency search. */\n\t\t\n\t\t\n\t\tIF FOUND THEN\n\n\t\t/* < --- The rest is OK : EXECUTE works perfectly when there is no issue \nin testing results*/\t\n\t\t\tv_query2 := ''INSERT INTO compiler_dependency (dependency_from, \ndependency_to ) SELECT compiler_function.function_oid, ''\n\t\t\t|| text($1) || '' FROM compiler_function WHERE function_source ilike \n''''%'' || $2 || ''%'''';'';\n\t\t\t\t\n\t\t\texecute (v_query2);\n\t\t\tRETURN 1;\n\t\tELSE\n\t\t\tRETURN 0;\n\t\tEND IF;\n\tEND ;\n'\nLANGUAGE 'plpgsql' ;\n\nMy problem is that \"ilike %$2%;\" (line 13) does not work.\nPL/PgSQL thinks % is the type of $2.\nI tried the EXECUTE variable alternative without results.\n\nAny idea to run the 'SELECT INTO rec xxx, xxxx, xxx, xxx WHERE YYYYYY ilike \n%$2%' ?\nIs there a workaround like using a server-side function similar to \nilike(varchar, varchar)->boolean ?\n\nGreeting from Jean-Michel POURE, Paris\n",
"msg_date": "Tue, 17 Apr 2001 22:11:31 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Foolish question about <<<< SELECT INTO rec xxx, xxxx, xxx,\n\txxx WHERE YYYYYY ilike %$2 >>>>"
},
{
"msg_contents": "I think you haven't counted your quotes correctly.\n\nquote_literal() would probably help you build a valid ILIKE pattern\nwith less pain.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 16:41:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Foolish question about <<<< SELECT INTO rec xxx, xxxx, xxx,\n\txxx WHERE YYYYYY ilike %$2 >>>>"
}
] |
[
{
"msg_contents": "I want to thank you for the excellent and fast responses I have received\nin the past, especially while troubleshooting the sfio problem. These\nproblems are VERY minor and easily worked around. Part of the reason I\nam posting them is just in case someone else runs across the same\nthings. I am running Sparc Solaris 7, GCC 2.95.3. Configure line was:\n./configure --prefix=/usr/local --with-perl --enable-odbc\n--enable-syslog.\n\nI just built the postgresql 7.1 final and the configure script is still\nchecking for sfio. Not a major big deal, but I need to remove the sfio\ncheck from configure.in, run autoconf, and then configure to fix it. If\nyou remember, sfio on Sparc Solaris 7 was causing a segfault when psql\nwould display its output.\n\nAlso, when I create a table and actually name the primary key using the\nfollowing:\ncreate table fcos (\n \"FeatureNumber\" integer NOT NULL,\n \"FeatureName\" varchar(32),\n \"Feature1\" integer DEFAULT 0,\n CONSTRAINT \"PK_fcos\" PRIMARY KEY (\"FeatureNumber\")\n);\nI get the following:\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'PK_fcos'\nfor table 'fcos'\nThis is really a nit-pick, but I am explicitly naming it, so why should I\nget a notice about it?\n\nAlso, during make install I am getting \"lack of permissions\" when it\ntries to install Perl. I have Perl 5.6.0 on this machine built from\nsource. It tells me to fix the problem and go into src/interfaces/perl5\nand do a make install. Without fixing any problem, I just go into\nsrc/interfaces/perl5 and do a make install and it works without\nreturning any errors. BTW, I am doing the make install as root and all\nof the directories (both source and destination) have reasonable\nprivileges.\n\n\n",
"msg_date": "Tue, 17 Apr 2001 18:18:29 -0400",
"msg_from": "David George <david@onyxsoft.com>",
"msg_from_op": true,
"msg_subject": "three VERY minor things with 7.1 final"
},
{
"msg_contents": "David George <david@onyxsoft.com> writes:\n> I just built the postgresql 7.1 final and the configure script is still\n> checking for sfio. Not a major big deal, but I need to remove the sfio\n> check from configure.in, run autoconf, and then configure to fix it. If\n> you remember sfio on Sparc Solaris 7 was causing a segfault when psql\n> would display its output.\n\nHmm. Digging in the CVS logs and pghackers archives, it seems that\nconfigure started checking for sfio in response to Michael Richards'\nunsubstantiated claim that FreeBSD 2.2.5 requires it to be used\n(http://www.postgresql.org/mhonarc/pgsql-hackers/1998-04/msg00363.html).\nCan anyone confirm or deny that? Marc had previously expressed some\ninterest as well (thread starting at\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1998-04/msg00219.html)\nbut the outcome of that thread didn't seem to suggest that it's a\nmust-have item.\n\nI'm tempted to rip out the configure check for sfio, but if it's only\nbroken on some systems and really is useful on others, then we have to\ntry to figure out how to tell if it's broken :-(\n\n\n> Also, during make install I am getting \"lack of permissions\" when it\n> tries to install Perl. I have Perl 5.6.0 on this machine built from\n> source. It tells me to fix the problem and go into src/interfaces/perl5\n> and do a make install. Without fixing any problem I just go into\n> src/interfaces/perl5 and do a make install and it works without\n> returning any errors.\n\nNo clue about this. The makefile is testing for writability of perl's\nINSTALLSITELIB, which ought to succeed if you are root. Besides which,\nif you go into that directory and do \"make install\" again, it ought to\nfail again in the same way. Unless maybe root's path is different, and\nyou are invoking make not gmake the second time?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 19:21:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: three VERY minor things with 7.1 final "
},
{
"msg_contents": "On Tue, 17 Apr 2001, Tom Lane wrote:\n\n> David George <david@onyxsoft.com> writes:\n> > I just built the postgresql 7.1 final and the configure script is still\n> > checking for sfio. Not a major big deal, but I need to remove the sfio\n> > check from configure.in, run autoconf, and then configure to fix it. If\n> > you remember sfio on Sparc Solaris 7 was causing a segfault when psql\n> > would display its output.\n>\n> Hmm. Digging in the CVS logs and pghackers archives, it seems that\n> configure started checking for sfio in response to Michael Richards'\n> unsubstantiated claim that FreeBSD 2.2.5 requires it to be used\n> (http://www.postgresql.org/mhonarc/pgsql-hackers/1998-04/msg00363.html).\n> Can anyone confirm or deny that? Marc had previously expressed some\n> interest as well (thread starting at\n> http://www.postgresql.org/mhonarc/pgsql-hackers/1998-04/msg00219.html)\n> but the outcome of that thread didn't seem to suggest that it's a\n> must-have item.\n\nDamn, now *that* is an old thread ... I can't see any reason why it would\nbe required for FreeBSD, as it's only a port for the OS, not part of it ...\n\n> I'm tempted to rip out the configure check for sfio, but if it's only\n> broken on some systems and really is useful on others, then we have to\n> try to figure out how to tell if it's broken :-(\n\nPull it, as far as I'm concerned ...\n\n\n",
"msg_date": "Tue, 17 Apr 2001 21:28:51 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: three VERY minor things with 7.1 final "
},
{
"msg_contents": "Tom Lane writes:\n\n> I'm tempted to rip out the configure check for sfio, but if it's only\n> broken on some systems and really is useful on others, then we have to\n> try to figure out how to tell if it's broken :-(\n\nI just installed sfio here and built PostgreSQL with it and didn't see any\ndifference. Which is not surprising because libsfio doesn't define any\nsymbols that PostgreSQL uses. (It's a completely separate interface, all\nthe functions are named sf*.) What you'd really need to use is libstdio\n(plus libsfio), which is a wrapper. So I removed the check in configure\nbecause it obviously had a net negative benefit.\n\nThose who want to use it can still add '-lstdio -lsfio' to LIBS.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 19 Apr 2001 22:48:11 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: three VERY minor things with 7.1 final "
}
] |
[
{
"msg_contents": "The postgresql interactive terminal will dump core on any script that is\nrun via the -f command line option if there exists a connect line without\na valid user. An example connect line is in one of the attached files.\nThe user that I have chosen is just testuser; you will see what I mean if\nyou just run that script on any database you have on your system, assuming\nthat you don't have a user called testuser. If you do, change the\nusername and see what happens. This bug has been in psql for the last\ncouple of versions that I have tested.\n\nAlso attached is a patch that seems to correct the issue. I admit that I\nhaven't studied the code long enough to determine if the fix is suitable,\nbut I feel that it will give you something to work from if it is not.\n\n-- \n//===================================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\===================================================================//",
"msg_date": "Tue, 17 Apr 2001 17:39:13 -0500 (CDT)",
"msg_from": "\"D. Hageman\" <dhageman@dracken.com>",
"msg_from_op": true,
"msg_subject": "Fix for psql core dumping on bad user"
},
{
"msg_contents": "\"D. Hageman\" <dhageman@dracken.com> writes:\n> The postgresql interactive terminal will dump core on any script that is\n> run via the -f command line option if their exists a connect line without\n> a valid user.\n\nCuriously, I see no core dump here:\n\n$ cat zscript\n\\connect - testuser\n$ psql -f zscript regression\npsql:zscript:1: \\connect: FATAL 1: user \"testuser\" does not exist\n$\n\nNonetheless, the comment at the top of do_connect() says that it\n*should* terminate the program under these circumstances, so I'm not\nsure why it doesn't. Peter?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 19:32:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix for psql core dumping on bad user "
},
{
"msg_contents": "\nStrange. Maybe I haven't fully explored the problem then. I would be\nmore than happy to supply a core file if you would like to analyze it. I\nalso guess that I should have been more complete in my bug report. I am\ndoing this on a RedHat 6.2 (fully updated, Intel architecture) machine and\nI have seen this behavior in the past several versions of PostgreSQL, but\nhave just now gotten around to doing something about it. As for how I\ncompile it, I usually use the RPMs rolled from the SRPMs that you create, Tom.\n\nI think I will go ahead and try it out on some other platforms later on\ntoday ...\n\nOn Tue, 17 Apr 2001, Tom Lane wrote:\n\n> \"D. Hageman\" <dhageman@dracken.com> writes:\n> > The postgresql interactive terminal will dump core on any script that is\n> > run via the -f command line option if their exists a connect line without\n> > a valid user.\n>\n> Curiously, I see no core dump here:\n>\n> $ cat zscript\n> \\connect - testuser\n> $ psql -f zscript regression\n> psql:zscript:1: \\connect: FATAL 1: user \"testuser\" does not exist\n> $\n>\n> Nonetheless, the comment at the top of do_connect() says that it\n> *should* terminate the program under these circumstances, so I'm not\n> sure why it doesn't. Peter?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n-- \n//===================================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\===================================================================//\n\n",
"msg_date": "Wed, 18 Apr 2001 07:06:48 -0500 (CDT)",
"msg_from": "\"D. Hageman\" <dhageman@dracken.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix for psql core dumping on bad user "
},
{
"msg_contents": "D. Hageman writes:\n\n> Strange. Maybe I haven't fully explored the problem then. I would be\n> more then happy to supply a core file if you would like to analyze it.\n\nPlease compile with debug symbols, e.g.\n\nsrc/bin/psql$ make clean\nsrc/bin/psql$ make CFLAGS=-g all\n\nthen produce the core dump and run\n\ngdb location/bin/psql some/where/core\n\nand enter\n\nbt\n\nand show what it says.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 18 Apr 2001 18:25:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix for psql core dumping on bad user "
},
{
"msg_contents": "\nI just tried it on Alpha hardware running FreeBSD. Same results.\n\n[~/opt/bin]\ndhageman@marconi: ./psql -f test.sql test\npsql:test.sql:1: \\connect: FATAL 1: user \"testuser\" does not exist\nIllegal instruction (core dumped)\n\nAt any rate, I am convinced that I am not going crazy here with the\nresults I saw on my normal database system.\n\nOn Wed, 18 Apr 2001, D. Hageman wrote:\n\n>\n> Strange. Maybe I haven't fully explored the problem then. I would be\n> more then happy to supply a core file if you would like to analyze it. I\n> also guess that I should have been more complete in my bug report. I am\n> doing this on a RedHat 6.2 (Fully updated, Intel architecture) machine and\n> I have seen this behavior in the past several versions of PostgreSQL, but\n> just have now gotten around to doing something about it. As far as how I\n> compile it, I usually use the roll rpms from the srpms that you create Tom?\n>\n> I think I will go ahead and try it out on some other platforms later on\n> today ...\n>\n> On Tue, 17 Apr 2001, Tom Lane wrote:\n>\n> > \"D. Hageman\" <dhageman@dracken.com> writes:\n> > > The postgresql interactive terminal will dump core on any script that is\n> > > run via the -f command line option if their exists a connect line without\n> > > a valid user.\n> >\n> > Curiously, I see no core dump here:\n> >\n> > $ cat zscript\n> > \\connect - testuser\n> > $ psql -f zscript regression\n> > psql:zscript:1: \\connect: FATAL 1: user \"testuser\" does not exist\n> > $\n> >\n\n-- \n//===================================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\===================================================================//\n\n\n",
"msg_date": "Wed, 18 Apr 2001 11:40:15 -0500 (CDT)",
"msg_from": "\"D. Hageman\" <dhageman@dracken.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix for psql core dumping on bad user "
},
{
"msg_contents": "Tom Lane writes:\n\n> $ cat zscript\n> \\connect - testuser\n> $ psql -f zscript regression\n> psql:zscript:1: \\connect: FATAL 1: user \"testuser\" does not exist\n> $\n>\n> Nonetheless, the comment at the top of do_connect() says that it\n> *should* terminate the program under these circumstances, so I'm not\n> sure why it doesn't. Peter?\n\nThe comment is not correct. Failure in do_connect() in non-interactive\nmode terminates the script. In the case of -f the program terminates\nimplicitly, but in case of \\i you would return to the prompt (or the\ncontaining \\i).\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 18 Apr 2001 18:48:31 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix for psql core dumping on bad user "
},
{
"msg_contents": "D. Hageman writes:\n\n> The postgresql interactive terminal will dump core on any script that is\n> run via the -f command line option if their exists a connect line without\n> a valid user. An example connect line is in one of the attached files.\n\nOkay, I've found the problem. When the connection fails, psql momentarily\nruns without a valid database connection. When it does that, the\nmultibyte encoding has the invalid value -1. (You need to compile with\nmultibyte enabled to reproduce this.) With that value, PQmblen() has\ntrouble when it parses the next line. Perhaps PQmblen() should simply\nreturn 1 when it is passed an invalid encoding. In any case it should do\nbetter than dump core.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 18 Apr 2001 19:22:08 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Fix for psql core dumping on bad user"
},
{
"msg_contents": "> D. Hageman writes:\n> \n> > The postgresql interactive terminal will dump core on any script that is\n> > run via the -f command line option if their exists a connect line without\n> > a valid user. An example connect line is in one of the attached files.\n> \n> Okay, I've found the problem. When the connection fails, psql momentarily\n> runs without a valid database connection. When it does that, the\n> multibyte encoding has the invalid value -1. (You need to compile with\n> multibyte enabled to reproduce this.) With that value, PQmblen() has\n> trouble when it parses the next line. Perhaps PQmblen() should simply\n> return 1 when it is passed an invalid encoding. In any case it should do\n> better than dump core.\n\nWill fix. Also I will change the \"invalid\" encoding to a\ndefault i.e. SQL_ASCII, not -1.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 19 Apr 2001 10:40:07 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fix for psql core dumping on bad user"
},
{
"msg_contents": "Attached is the backtrace from gdb. I didn't find it very helpful when I\nfirst looked into this problem, but maybe you can see something that I\nmissed.\n\nI think tomorrow at work, I will take some time out to step through the\ncode and see exactly what is going on in this situation. I get the\nimpression from the responses that I received that my quick analysis is\nwrong and the problem exists elsewhere ... at any rate, time for bed.\n\nOn Wed, 18 Apr 2001, Peter Eisentraut wrote:\n\n> D. Hageman writes:\n>\n> > Strange. Maybe I haven't fully explored the problem then. I would be\n> > more then happy to supply a core file if you would like to analyze it.\n>\n> Please compile with debug symbols, e.g.\n>\n> src/bin/psql$ make clean\n> src/bin/psql$ make CFLAGS=-g all\n>\n> then produce the core dump and run\n>\n> gdb location/bin/psql some/where/core\n>\n> and enter\n>\n> bt\n>\n> and show what it says.\n>\n>\n\n-- \n//===================================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\===================================================================//",
"msg_date": "Thu, 19 Apr 2001 00:07:31 -0500 (CDT)",
"msg_from": "\"D. Hageman\" <dhageman@dracken.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix for psql core dumping on bad user "
},
{
"msg_contents": "> Attached is the backtrace from gdb. I didn't find it very helpful when I\n> first looked into this problem, but maybe you can see something that I\n> missed.\n> \n> I think tomorrow at work, I will take some time out to step through the\n> code and see exactly what is going on in this situation. I get the\n> impression from the responses that I received that my quick analysis is\n> wrong and the problem exists elsewhere ... at any rate, time for bed.\n\nI have committed a fix. Please grab the snapshot and try again.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 19 Apr 2001 14:21:07 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Fix for psql core dumping on bad user "
},
{
"msg_contents": "On Thu, 19 Apr 2001, Tatsuo Ishii wrote:\n>\n> I have commited a fix. Please grab the snapshot and try again.\n> --\n> Tatsuo Ishii\n\nThe results are much much better. No core dumping at all. Thank you for\nyour help with this. Not that it was a major bug, but I like to help make\nopen source projects better whenever I can.\n\n[dhageman@typhon psql]$ ./psql -f test.sql test\npsql:test.sql:1: \\connect: FATAL 1: user \"testuser\" does not exist\n[dhageman@typhon psql]$ echo $?\n2\n[dhageman@typhon psql]$\n\n-- \n//===================================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\===================================================================//\n\n",
"msg_date": "Thu, 19 Apr 2001 10:40:07 -0500 (CDT)",
"msg_from": "\"D. Hageman\" <dhageman@dracken.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix for psql core dumping on bad user "
}
] |
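The hardening Peter proposes and Tatsuo commits in the thread above — treat an invalid encoding id as single-byte rather than crashing — can be sketched in miniature. This is a hypothetical Python model of the idea, not PostgreSQL's actual PQmblen() code; the encoding table and the two-byte rule here are invented for illustration.

```python
# Illustrative sketch of the PQmblen() hardening discussed above:
# after a failed connect the encoding id is invalid (-1), so the
# length function falls back to treating input as single-byte
# instead of reading past a bogus table entry and crashing.
# The table below is a hypothetical stand-in, not PostgreSQL's real one.

MAX_BYTES_PER_ENCODING = {
    0: 1,   # SQL_ASCII-style: always single byte
    1: 3,   # a hypothetical multibyte encoding
}

def mblen(encoding: int, buf: bytes) -> int:
    """Return the length in bytes of the first character of buf."""
    if encoding not in MAX_BYTES_PER_ENCODING:
        return 1  # invalid encoding: degrade gracefully, never crash
    if encoding == 0:
        return 1
    # crude multibyte rule for the sketch: high bit set => 2 bytes
    return 2 if buf[:1] and buf[0] & 0x80 else 1

print(mblen(-1, b"abc"))  # invalid encoding id, as after a failed connect
```

The same effect is achieved in the committed fix by also resetting the "invalid" encoding to a sane default (SQL_ASCII) instead of leaving it at -1.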
[
{
"msg_contents": "In latest 7.1 (checked out 2 days ago from CVS), I see following\nbehaviour:\n\ncreate table foo(x int4);\ncreate function xx(foo) returns int4 as ' return 0;' language 'plpgsql';\ncreate view tv2 as select xx(foo) from foo;\n\nusers=# \\d tv2\nERROR: cache lookup of attribute 0 in relation 21747 failed\n\n(21747 is table oid for foo)\n\nHOWEVER, 'select * from tv2' succeeds (sometimes). Sometimes it fails with\nthe same error (cache lookup failed).\n\nI think the above should be enough to reproduce this bug. Any hints? \n\n-alex\n\n\n",
"msg_date": "Tue, 17 Apr 2001 23:36:00 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "[BUG] views and functions on relations"
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> users=# \\d tv2\n> ERROR: cache lookup of attribute 0 in relation 21747 failed\n\nConfirmed here. Too tired to chase it further tonight, though.\n\n> HOWEVER, 'select * from tv2' succeeds (sometimes). Sometimes it fails with\n> the same error (cache lookup failed).\n\nCouldn't reproduce this failure --- can you work out a sequence that\nmakes it happen?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 02:05:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] views and functions on relations "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> In latest 7.1 (checked out 2 days ago from CVS), I see following\n> behaviour:\n\n> create table foo(x int4);\n> create function xx(foo) returns int4 as ' return 0;' language 'plpgsql';\n> create view tv2 as select xx(foo) from foo;\n\n> users=# \\d tv2\n> ERROR: cache lookup of attribute 0 in relation 21747 failed\n\nOkay, this is a simple oversight in ruleutils.c: the rule dumper doesn't\nhave logic to handle whole-tuple function arguments, such as (foo) in\nthe above example. Will fix.\n\n> HOWEVER, 'select * from tv2' succeeds (sometimes). Sometimes it fails with\n> the same error (cache lookup failed).\n\nThe ruleutils.c bug cannot explain this however, since ruleutils won't\neven be invoked. Can you find a sequence to reproduce it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 11:25:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] views and functions on relations "
},
{
"msg_contents": "On Wed, 18 Apr 2001, Tom Lane wrote:\n\n> The ruleutils.c bug cannot explain this however, since ruleutils won't\n> even be invoked. Can you find a sequence to reproduce it?\nSorry, I was mistaken. The error I get for select is this:\nERROR: cache lookup for type 0 failed\n\nThis is a far harder to trigger bug, and actually, it doesn't happen in\nthis simple case (oops), and the only test case I have involves 2 tables\nand 3 stored procedures. It is not related to views at all, just doing the\nunderlying select causes the problem. Taking out _any_ stored procedure\nfrom the query removes the problem. \n\nFWIW, this is what I see in server error log:\n\nERROR: cache lookup for type 0 failed\nDEBUG: Last error occured while executing PL/pgSQL function cust_name\nDEBUG: while putting call arguments to local variables\n\nAnd this is the query:\nSELECT\ncust_name(a)\nFROM customers AS a, addresses AS b\nWHERE\nb.cust_id=a.cust_id\nand b.oid=get_billing_record(a.cust_id)\nand cust_balance(a.cust_id)>0\n\nRemoving either get_billing_record or cust_balance conditions or cust_name\nselection removes the problem. Unfortunately, each function is very long,\nand involves lots of tables and it'd make no sense to post this all to the\nlist, so I'm going to try to narrow down the problem more to get a good\nreproducible result, but if the above helps any with diagnosis, it'd be\ngreat ;)\n\n\n",
"msg_date": "Wed, 18 Apr 2001 13:11:09 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] views and functions on relations "
},
{
"msg_contents": "On Wed, 18 Apr 2001, Alex Pilosov wrote:\n\n> This is a far harder to trigger bug, and actually, it doesn't happen in\n> this simple case (oops), and the only test case I have involves 2 tables\n> and 3 stored procedures. It is not related to views at all, just doing the\n> underlying select causes the problem. Taking out _any_ stored procedure\n> from the query removes the problem. \nOh yes. One thing I forgot: It all worked in 7.0 and it only broke after\nupgrading to 7.1\n\n-alex\n\n",
"msg_date": "Wed, 18 Apr 2001 13:15:52 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] views and functions on relations "
},
{
"msg_contents": "Here's more info on the bug:\n\nbackground: function cust_name(customers) returns varchar;\nQuery in question:\n\nSELECT\ncust_name(a)\nFROM customers AS a, addresses AS b\nWHERE\nb.cust_id=a.cust_id\nand b.oid=get_billing_record(a.cust_id)\nand cust_balance(a.cust_id)>0\n\nFirst, my idea of what's happening:\n\nTuple in question contains the row from 'customers' table.\n\nSomething (when the query is evaluated, before cust_name function is\ncalled) sets the tupdesc->natts=0, however, everything else in that\ntupdesc is right (all the attrs are present and have correct values and\natttypes), and tuple->t_data->t_natts is correct (12).\n\nWhen SPI_getbinval is called, it checks tuple->t_data->t_natts, and works\nOK, but, however, when SPI_gettypeid is called, it checks\ntupledesc->nattrs, and returns 0. \n\nQuestion: Should SPI_gettypeid look at tuple->t_data->t_natts (to do that,\nit needs to be passed tuple along with tupdesc)? \nOr some other code should be fixed to properly set tupledesc->nattrs?\n\nNOTE: when I removed the check in SPI_gettypeid, it _also_ fixed the '\\d\nviewname' bug, so these two bugs are related (i.e. you cannot see \\d\nbecause nattrs is set incorrectly). 
You may have more luck tracing the\ncode which improperly sets nattrs than me...\n\nHoping for proper fix, \n\n\n-alex\n\ntraceback:\n#0 elog (lev=-1, fmt=0x45d4b340 \"cache lookup for type %u failed\")\n at elog.c:119\n#1 0x45d4693e in exec_cast_value (value=1791, valtype=0, reqtype=23,\n reqinput=0x82bfdb0, reqtypelem=0, reqtypmod=-1, isnull=0xbfffeb6f \"\")\n at pl_exec.c:2682\n#2 0x45d45f19 in exec_assign_value (estate=0xbfffec40, target=0x82cdd88,\n value=1791, valtype=0, isNull=0xbfffeb6f \"\") at pl_exec.c:2173\n#3 0x45d4687a in exec_move_row (estate=0xbfffec40, rec=0x0,\nrow=0x82bfcc8,\n tup=0x827a170, tupdesc=0x827a130) at pl_exec.c:2629\n#4 0x45d43e64 in plpgsql_exec_function (func=0x82b3188, fcinfo=0x828e364)\n at pl_exec.c:331\n#5 0x45d41f57 in plpgsql_call_handler (fcinfo=0x828e364) at\npl_handler.c:128\n#6 0x80b78ad in ExecMakeFunctionResult (fcache=0x828e350,\n arguments=0x826eb28, econtext=0x826fc98, isNull=0xbfffed37 \"\",\n isDone=0xbfffed68) at execQual.c:796\n#7 0x80b794e in ExecEvalFunc (funcClause=0x826ead8, econtext=0x826fc98,\n isNull=0xbfffed37 \"\", isDone=0xbfffed68) at execQual.c:890\n#8 0x80b7d1c in ExecEvalExpr (expression=0x826ead8, econtext=0x826fc98,\n isNull=0xbfffed37 \"\", isDone=0xbfffed68) at execQual.c:1215\n#9 0x80b7fbb in ExecTargetList (targetlist=0x826e6a0, nodomains=19,\n targettype=0x8284620, values=0x8285100, econtext=0x826fc98,\n isDone=0xbfffef08) at execQual.c:1536\n#10 0x80b8215 in ExecProject (projInfo=0x82850d8, isDone=0xbfffef08)\n at execQual.c:1764\n#11 0x80bcd9a in ExecNestLoop (node=0x826e5c0) at nodeNestloop.c:245\n#12 0x80b6b76 in ExecProcNode (node=0x826e5c0, parent=0x826e5c0)\n at execProcnode.c:297\n#13 0x80b5eee in ExecutePlan (estate=0x826f770, plan=0x826e5c0,\n operation=CMD_SELECT, numberTuples=0, direction=ForwardScanDirection,\n destfunc=0x8285de0) at execMain.c:973\n#14 0x80b5463 in ExecutorRun (queryDesc=0x826f758, estate=0x826f770,\n feature=3, count=0) at execMain.c:233\n#15 0x80f76b3 in 
ProcessQuery (parsetree=0x82433e8, plan=0x826e5c0,\n dest=Remote) at pquery.c:295\n#16 0x80f62bb in pg_exec_query_string (\n query_string=0x8243090 \"select * from outstanding_balances;\",\ndest=Remote,\n parse_context=0x8218730) at postgres.c:810\n#17 0x80f71e6 in PostgresMain (argc=4, argv=0xbffff1e0, real_argc=8,\n real_argv=0xbffffaf4, username=0x81cbf69 \"sw\") at postgres.c:1908\n#18 0x80e14c3 in DoBackend (port=0x81cbd00) at postmaster.c:2111\n#19 0x80e10ac in BackendStartup (port=0x81cbd00) at postmaster.c:1894\n#20 0x80e0436 in ServerLoop () at postmaster.c:992\n#21 0x80dfe63 in PostmasterMain (argc=8, argv=0xbffffaf4) at\npostmaster.c:682\n#22 0x80c4055 in main (argc=8, argv=0xbffffaf4) at main.c:151\n\n\n\n\n\n",
"msg_date": "Wed, 18 Apr 2001 21:36:52 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] views and functions on relations "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Something (when the query is evaluated, before cust_name function is\n> called) sets the tupdesc->natts=0,\n\nUgh. You verified the natts is wrong in the tupdesc?\n\n> Question: Should SPI_gettypeid look at tuple->t_data->t_natts (to do that,\n> it needs to be passed tuple along with tupdesc)? \n> Or some other code should be fixed to properly set tupledesc->nattrs?\n\nThe tupdesc natts *must* match the actual tuple, else all sorts of\nthings will go wrong. I don't think SPI_gettypeid is broken.\n\n> NOTE: when I removed the check in SPI_gettypeid, it _also_ fixed the '\\d\n> viewname' bug, so these two bugs are related (i.e. you cannot see \\d\n> because nattrs is set incorrectly).\n\nThat seems moderately unlikely, since \\d doesn't depend on SPI...\n\n> You may have more luck tracing the\n> code which improperly sets nattrs than me...\n\nHard to do without a working (failing ;-)) example to look at.\nHave you had any luck reducing your example? Alternatively,\nwould you be willing to give me telnet or ssh access to your\nmachine, and I'll look at the problem in situ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 21:44:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] views and functions on relations "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Something (when the query is evaluated, before cust_name function is\n> called) sets the tupdesc->natts=0,\n\nFWIW, I have just looked through all the code that sets natts fields,\nand I don't believe that any of it can set a tupdesc's natts field to\nzero. Therefore the zeroing must be an accidental stomp of some kind.\nSince natts is the first field in a tupdesc, it seems plausible that\nthis might happen if some bit of code misinterprets a tupdesc pointer\nas something else. However, that makes the odds of finding the problem\nby staring at code even lower. I really need to get after this with\na debugger...\n\nBTW, are you building with --enable-cassert? If not I strongly recommend\nit for chasing this sort of problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 22:02:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] views and functions on relations "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Here's more info on the bug:\n> background: function cust_name(customers) returns varchar;\n> Query in question:\n\n> SELECT\n> cust_name(a)\n> FROM customers AS a, addresses AS b\n> WHERE\n> b.cust_id=a.cust_id\n> and b.oid=get_billing_record(a.cust_id)\n> and cust_balance(a.cust_id)>0\n\n\nI think I see the problem. Is your query being executed via a mergejoin\nplan with an explicit sort on customers? Does the failure go away if\nyou force a nestloop join?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 22:47:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] views and functions on relations "
}
] |
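Tom's invariant in the thread above — the tuple descriptor's natts must match the tuple it describes — explains the symptom nicely, and can be shown with a toy model. All names below are illustrative stand-ins, not the real executor structures; only the shape of the failure is taken from the thread.

```python
# A toy model of the invariant Tom insists on above: the tuple
# descriptor's natts must match the tuple it describes.  If some code
# stomps natts to 0, an attribute-type lookup returns a zero type oid,
# which surfaces downstream as "cache lookup for type 0 failed".

from dataclasses import dataclass

@dataclass
class TupleDesc:
    natts: int
    atttypids: list  # type oid per attribute

def gettypeid(desc: TupleDesc, fnumber: int) -> int:
    # mirrors SPI_gettypeid's bounds check against desc.natts
    if fnumber < 1 or fnumber > desc.natts:
        return 0  # InvalidOid
    return desc.atttypids[fnumber - 1]

desc = TupleDesc(natts=2, atttypids=[23, 1043])  # int4, varchar
print(gettypeid(desc, 2))   # 1043
desc.natts = 0               # the accidental stomp Tom describes
print(gettypeid(desc, 2))   # 0 -> "cache lookup for type 0 failed"
```

This is also why removing the bounds check only hides the bug: the descriptor is still wrong, and the real fix is to stop whatever is clobbering natts.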
[
{
"msg_contents": "\n> > In short, I think lock timeout is a solution searching in vain for a\n> > problem. If we implement it, we are just encouraging bad application\n> > design.\n> \n> I agree with Tom completely here.\n> \n> In any real-world application the database is the key component of a \n> larger system: the work it does is the most finicky, and any mistakes\n> (either internally or, more commonly, from misuse) have the most \n> far-reaching consequences. The responsibility of the database is to \n> provide a reliable and easily described and understood mechanism to \n> build on.\n\nIt is not something that makes anything unreliable or less robust.\nIt is also simple: \"I (the client) request that you (the backend) don't wait for \nany lock longer than x seconds\"\n\n> Timeouts are a system-level mechanism that to be useful must refer to \n> system-level events that are far above anything that PG knows about.\n\nI think you are talking about different kinds of timeouts here. \n\n> The only way PG could apply reasonable timeouts would be for the \n> application to dictate them, \n\nThat is exactly what we are talking about here.\n\n> but the application can better implement them itself.\n\nIt can, but it makes the program more complicated (needs timers or threads, \nwhich violates your last statement about the \"simplest interface\").\n\n> \n> You can think of this as another aspect of the \"end-to-end\" principle: \n> any system-level construct duplicated in a lower-level system component \n> can only improve efficiency, not provide the corresponding high-level \n> service. If we have timeouts in the database, they should be there to\n> enable the database to better implement its abstraction, and not pretend \n> to be a substitute for system-level timeouts.\n\nThe mentioned functionality has nothing to do with the above statement, which\nI can fully support. \n\n> There's no upper limit on how complicated a database interface can\n> become (cf. Oracle). 
The database serves its users best by having \n> the simplest interface that can possibly provide the needed service.\n\nAgreed.\n\nAndreas\n",
"msg_date": "Wed, 18 Apr 2001 09:54:11 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: timeout on lock feature"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> It is not something that makes anything unrelyable or less robust.\n\nHow can you argue that? The presence of a lock timeout *will* make\noperations fail that otherwise would have succeeded; moreover that\nfailure will be pretty unpredictable (at least from the point of view\nof the application that issued the command). That qualifies as\n\"unreliable and not robust\" in my book. A persistent SET variable\nalso opens up the risk of completely unwanted failures in critical\noperations --- all you have to do is forget to reset the variable\nwhen the effect is no longer wanted. (Murphy's Law guarantees that\nyou won't find out such a mistake until the worst possible time.\nThat's even less robust.)\n\n>> The only way PG could apply reasonable timeouts would be for the \n>> application to dictate them, \n\n> That is exactly what we are talking about here.\n\nThe *real* problem is that the application cannot determine reasonable\ntimeouts either. Perhaps the app can decide how long it is willing to\nwait overall, but how can it translate that into the low-level notion of\nan appropriate lock timeout? It does not know how many locks might get\nlocked in the course of a query, nor which locks they are exactly, nor\nwhat the likely distribution of wait intervals is for those locks.\n\nGiven that, using a lock timeout \"feature\" is just a crapshoot. If you\nsay \"set lock timeout X\", you have no real idea how that translates to\napplication-visible performance nor how big a risk you are taking of\ninducing unwanted failures. You don't even get to average out the\nuncertainty across a whole query, because if any one of the lock waits\nexceeds X, your query blows up. 
Yet making X large destroys the\nusefulness of the feature entirely, so there will always be a strong\ntemptation to set it too low.\n\nThis is the real reason why I've been holding out for restricting the\nfeature to a specific LOCK TABLE statement: if it's designed that way,\nat least you know which lock you are applying the timeout to, and have\nsome chance of being able to estimate an appropriate timeout.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 10:36:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: timeout on lock feature "
},
{
"msg_contents": "On Wed, Apr 18, 2001 at 09:54:11AM +0200, Zeugswetter Andreas SB wrote:\n> > > In short, I think lock timeout is a solution searching in vain for a\n> > > problem. If we implement it, we are just encouraging bad application\n> > > design.\n> > \n> > I agree with Tom completely here.\n> > \n> > In any real-world application the database is the key component of a \n> > larger system: the work it does is the most finicky, and any mistakes\n> > (either internally or, more commonly, from misuse) have the most \n> > far-reaching consequences. The responsibility of the database is to \n> > provide a reliable and easily described and understood mechanism to \n> > build on.\n> \n> It is not something that makes anything unrelyable or less robust.\n> It is also simple: \"I (the client) request that you (the backend) \n> dont wait for any lock longer than x seconds\"\n\nMany things that are easy to say have complicated consequences.\n\n> > Timeouts are a system-level mechanism that to be useful must refer to \n> > system-level events that are far above anything that PG knows about.\n> \n> I think you are talking about different kinds of timeouts here. \n\nExactly. I'm talking about useful, meaningful timeouts, not random\ntimeouts attached to invisible events within the database.\n\n> > The only way PG could apply reasonable timeouts would be for the \n> > application to dictate them, \n> \n> That is exactly what we are talking about here.\n\nNo. You wrote elsewhere that the application sets \"30 seconds\" and\nleaves it. But that 30 seconds doesn't have any application-level\nmeaning -- an operation could take twelve hours without tripping your\n30-second timeout. 
For the application to dictate the timeouts\nreasonably, PG would have to expose all its lock events to the client\nand expect it to deduce how they affect overall behavior.\n\n> > but the application can better implement them itself.\n> \n> It can, but it makes the program more complicated (needs timers \n> or threads, which violates your last statement \"simplest interface\".\n\nIt is good for the program to be more complicated if it is doing a \nmore complicated thing -- if it means the database may remain simple. \nPeople building complex systems have an even greater need for simple\ncomponents than people building little ones.\n \nWhat might be a reasonable alternative would be a BEGIN timeout: report \nfailure as soon as possible after N seconds unless the timer is reset, \nsuch as by a commit. Such a timeout would be meaningful at the \ndatabase-interface level. It could serve as a useful building block \nfor application-level timeouts when the client environment has trouble \napplying timeouts on its own.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 18 Apr 2001 14:35:30 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "> What might be a reasonable alternative would be a BEGIN timeout: report \n> failure as soon as possible after N seconds unless the timer is reset, \n> such as by a commit. Such a timeout would be meaningful at the \n> database-interface level. It could serve as a useful building block \n> for application-level timeouts when the client environment has trouble \n> applying timeouts on its own.\n\nNow that is a nifty idea. Just put it on one command, BEGIN, and have\nit apply for the whole transaction. We could just set an alarm and do a\nlongjump out on timeout.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 19:33:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock featurey"
},
{
"msg_contents": "On Wed, Apr 18, 2001 at 07:33:24PM -0400, Bruce Momjian wrote:\n> > What might be a reasonable alternative would be a BEGIN timeout: report \n> > failure as soon as possible after N seconds unless the timer is reset, \n> > such as by a commit. Such a timeout would be meaningful at the \n> > database-interface level. It could serve as a useful building block \n> > for application-level timeouts when the client environment has trouble \n> > applying timeouts on its own.\n> \n> Now that is a nifty idea. Just put it on one command, BEGIN, and have\n> it apply for the whole transaction. We could just set an alarm and do a\n> longjump out on timeout.\n\nOf course, it begs the question why the client couldn't do that\nitself, and leave PG out of the picture. But that's what we've \nbeen talking about all along.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 18 Apr 2001 18:09:46 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "> On Wed, Apr 18, 2001 at 07:33:24PM -0400, Bruce Momjian wrote:\n> > > What might be a reasonable alternative would be a BEGIN timeout: report \n> > > failure as soon as possible after N seconds unless the timer is reset, \n> > > such as by a commit. Such a timeout would be meaningful at the \n> > > database-interface level. It could serve as a useful building block \n> > > for application-level timeouts when the client environment has trouble \n> > > applying timeouts on its own.\n> > \n> > Now that is a nifty idea. Just put it on one command, BEGIN, and have\n> > it apply for the whole transaction. We could just set an alarm and do a\n> > longjump out on timeout.\n> \n> Of course, it begs the question why the client couldn't do that\n> itself, and leave PG out of the picture. But that's what we've \n> been talking about all along.\n\nYes, they can, but of course, they could code the database in the\napplication too. It is much easier to put the timeout in a psql script\nthan to try and code it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 21:39:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "On Wed, Apr 18, 2001 at 09:39:39PM -0400, Bruce Momjian wrote:\n> > On Wed, Apr 18, 2001 at 07:33:24PM -0400, Bruce Momjian wrote:\n> > > > What might be a reasonable alternative would be a BEGIN timeout:\n> > > > report failure as soon as possible after N seconds unless the\n> > > > timer is reset, such as by a commit. Such a timeout would be\n> > > > meaningful at the database-interface level. It could serve as a\n> > > > useful building block for application-level timeouts when the\n> > > > client environment has trouble applying timeouts on its own.\n> > > \n> > > Now that is a nifty idea. Just put it on one command, BEGIN, and\n> > > have it apply for the whole transaction. We could just set an\n> > > alarm and do a longjump out on timeout.\n> > \n> > Of course, it begs the question why the client couldn't do that\n> > itself, and leave PG out of the picture. But that's what we've \n> > been talking about all along.\n> \n> Yes, they can, but of course, they could code the database in the\n> application too. It is much easier to put the timeout in a psql script\n> than to try and code it.\n\nGood: add a timeout feature to psql. \n\nThere's no limit to what features you might add to the database \ncore once you decide that new features need have nothing to do with \ndatabases. Why not (drum roll...) deliver e-mail?\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 18 Apr 2001 19:01:01 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
}
] |
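Nathan's point that "the application can better implement them itself", and Bruce's "set an alarm and longjump out" sketch for a BEGIN timeout, can be illustrated with a small client-side deadline wrapper. This is a hedged sketch of the application-level alternative discussed above, not anything PostgreSQL ships: it uses SIGALRM (so it is Unix-only), and the `work` callable is a stand-in for issuing queries over a real connection.

```python
# A minimal sketch of the client-side alternative argued for above:
# the application bounds the total elapsed time of a transaction
# itself, rather than asking the server for per-lock timeouts.

import signal
import time

class TxTimeout(Exception):
    pass

def run_with_deadline(work, seconds):
    """Run work(), aborting with TxTimeout after `seconds` of wall time."""
    def on_alarm(signum, frame):
        raise TxTimeout("transaction exceeded %.2fs" % seconds)
    old = signal.signal(signal.SIGALRM, on_alarm)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        return work()
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)  # cancel the timer
        signal.signal(signal.SIGALRM, old)

try:
    run_with_deadline(lambda: time.sleep(1.0), 0.1)
except TxTimeout as e:
    print("aborted:", e)
print(run_with_deadline(lambda: "done", 1.0))
```

Note this mirrors the overall-elapsed-time deadline Tom and Nathan favour; it deliberately cannot express the per-lock timeout that Tom criticizes as a crapshoot.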
[
{
"msg_contents": "\n> \"Henryk Szal\" <szal@doctorq.com.pl> writes:\n> > YES, this feature should affect ALL locks.\n> > 'Timeout on lock' parameter says to server \"I CAN'T WAIT WITH THIS\n> > TRANSACTION TOO LONG BECAUSE OF (ANY) LOCK\",\n> \n> It still seems to me that what such an application wants is not a lock\n> timeout at all, but an overall limit on the total elapsed time for the\n> query. If you can't afford to wait to get a lock, why is it OK to wait\n> (perhaps much longer) for I/O or computation?\n\nYes, that is a valid argument. The only thing I can counter is that (in OLTP) \nit is usually easy to predict the amount of work that needs to be done\nfor your own tx (we are typically talking about 1 - 200 ms here), but it is not easy \nto predict how long another session needs to complete its transaction \n(the other session might be OLAP, vacuum ...).\n\nAndreas\n",
"msg_date": "Wed, 18 Apr 2001 10:12:34 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: timeout on lock feature "
}
] |
[
{
"msg_contents": "\n> In latest 7.1 (checked out 2 days ago from CVS), I see following\n> behaviour:\n> \n> create table foo(x int4);\n> create function xx(foo) returns int4 as ' return 0;' language 'plpgsql';\n> create view tv2 as select xx(foo) from foo;\n\nregression=# create function xx(foo) returns int4 as ' return 0;' language 'plpgsql';\nCREATE\n\nregression=# \\d tv2\nERROR: cache lookup of attribute 0 in relation 145121 failed\n\nAbove function does not compile:\nregression=# select * from tv2;\nNOTICE: plpgsql: ERROR during compile of xx near line 1\nERROR: parse error at or near \"return\"\n\nTry to see whether the problem persists with a valid function.\n\nAndreas\n",
"msg_date": "Wed, 18 Apr 2001 10:42:04 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: [BUG] views and functions on relations"
}
] |
[
{
"msg_contents": "I am in the middle of a rather nasty experience that I hope someone out\nthere can help solve.\n \n My hard disk partition with the postgres data directory got full. I\ntried to shut down postgres so I could clear some space, but nothing\nhappened. So I did a reboot. On restart (after clearing some\npg_sorttemp.XX files), I discovered that all my tables appear empty!\nWhen I check in the data directories of the databases, I see that the\nfiles for each table have data (they are still the same size as before). \n \n I've been running some experiments on another machine and notice that\nif I remove the pg_log file, databases seem to disappear (or data to\nbecome invisible). So I am guessing that postgres is looking in one\nplace and deciding there is no data. Now I need to get my data of\ncourse! Any solutions?? My programming skills are generally very good so\nif it involves some code I'd have no problem. How do I get a dump of the\nraw data (say, COPY-style output) from the table files? Please help! \n\n Thanks\n\nPaul Bagyenda\n",
"msg_date": "Wed, 18 Apr 2001 15:28:05 +0300",
"msg_from": "\"P. A. Bagyenda\" <bagyenda@dsmagic.com>",
"msg_from_op": true,
"msg_subject": "Corrupt database log??"
},
{
"msg_contents": "Oh, forgot to point out that this is v7.0.3 I am running, on linux\nkernel 2.0.5\n\nP.\n",
"msg_date": "Wed, 18 Apr 2001 15:35:44 +0300",
"msg_from": "\"P. A. Bagyenda\" <bagyenda@dsmagic.com>",
"msg_from_op": true,
"msg_subject": "Re: Corrupt database log??"
}
] |
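Paul's observation above — remove pg_log and every table scans as empty even though the heap files still hold data — follows from how transaction-status logging gates row visibility. The following is a deliberately tiny illustrative model of that idea, not PostgreSQL's actual visibility code; the structures and names are invented.

```python
# Toy model of why losing pg_log makes data "disappear": a heap row
# is visible only if the transaction that inserted it (its xmin) is
# recorded as committed in the commit log.  Rows whose inserting txn
# the (now missing/empty) log knows nothing about are treated as not
# committed, so tables scan as empty even though the heap files still
# contain all the bytes.

COMMITTED, ABORTED, IN_PROGRESS = "committed", "aborted", "in progress"

def visible_rows(heap, clog):
    """heap: list of (xmin, value); clog: {xmin: status}."""
    return [value for (xmin, value) in heap
            if clog.get(xmin) == COMMITTED]

heap = [(100, "row A"), (101, "row B")]
print(visible_rows(heap, {100: COMMITTED, 101: COMMITTED}))
print(visible_rows(heap, {}))  # commit log lost: table looks empty
```

This is also why the raw data is, in principle, still recoverable from the table files: only the bookkeeping that proves the rows committed has been lost.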
[
{
"msg_contents": "Where would I find a postgres DBA for the Toronto area? Are there\nany web-based user groups for job hunters?\n\nThank you,\nDanielle\n\n\n\n",
"msg_date": "Wed, 18 Apr 2001 15:57:35 GMT",
"msg_from": "\"Opensoft.ca\" <info@osft.com>",
"msg_from_op": true,
"msg_subject": "Ethical HH"
}
] |
[
{
"msg_contents": "\n> > It is not something that makes anything unreliable or less robust.\n> \n> How can you argue that? The presence of a lock timeout *will* make\n> operations fail that otherwise would have succeeded; moreover that\n> failure will be pretty unpredictable (at least from the point of view\n> of the application that issued the command). That qualifies as\n> \"unreliable and not robust\" in my book.\n> A persistent SET variable\n> also opens up the risk of completely unwanted failures in critical\n> operations --- all you have to do is forget to reset the variable\n\n?????? So what, when you e.g. forget to commit you are also in trouble,\nI do not see anything special here.\n\n> when the effect is no longer wanted. (Murphy's Law guarantees that\n> you won't find out such a mistake until the worst possible time.\n> That's even less robust.)\n\nMy OLTP clients set it to 30 sec right after connect and leave it at that. \n\n> \n> >> The only way PG could apply reasonable timeouts would be for the \n> >> application to dictate them, \n> \n> > That is exactly what we are talking about here.\n> \n> The *real* problem is that the application cannot determine reasonable\n> timeouts either. Perhaps the app can decide how long it is willing to\n> wait overall,\n\nYes, that is it. As I tried to explain earlier, the amount of work that needs to be \ndone for your own tx (in OLTP) is pretty well predictable, but the work of other \nclients is not.\n\n> but how can it translate that into the low-level notion of\n> an appropriate lock timeout? It does not know how many locks might get\n> locked in the course of a query, nor which locks they are exactly, nor\n> what the likely distribution of wait intervals is for those locks.\n\nThe above would imho be a wrong approach to determining the timeout.\n\n> Given that, using a lock timeout \"feature\" is just a crapshoot. 
If you\n> say \"set lock timeout X\", you have no real idea how that translates to\n> application-visible performance nor how big a risk you are taking of\n> inducing unwanted failures. You don't even get to average out the\n> uncertainty across a whole query, because if any one of the lock waits\n> exceeds X, your query blows up. Yet making X large destroys the\n> usefulness of the feature entirely, so there will always be a strong\n> temptation to set it too low.\n> \n> This is the real reason why I've been holding out for restricting the\n> feature to a specific LOCK TABLE statement: if it's designed that way,\n> at least you know which lock you are applying the timeout to, and have\n> some chance of being able to estimate an appropriate timeout.\n\nI do not agree, but such is life :-)\n\nBTW: for distributed txns you need a lock timeout feature anyway, because \ndetecting remote deadlocks between two or more different servers would be \nvery complicated. And I do think PostgreSQL will need remote db access a la long.\n\nAndreas\n",
"msg_date": "Wed, 18 Apr 2001 18:09:37 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: timeout on lock feature "
},
{
"msg_contents": ">>>>> \"A Z\" == Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n\nPS: where can I find more on the distributed txn plans for PostgreSQL? Thanks.\n\n A Z> BTW: for distributed txns you need a lock timeout feature\n A Z> anyway, because detecting remote deadlocks between two or\n A Z> more different servers would be very complicated. And I do\n A Z> think PostgreSQL will need remote db access a la long.\n\nA typical distributed transaction management system would consist of\n\n- a transaction manager for that particular transaction\n- an associated log manager\n- an optional associated lock manager (depending on transaction type:\n pessimistic, optimistic, etc)\n- one or more resource managers (usually storage)\n\nand thus the lock manager would have a single point of reference for\nthe existence of a transaction or not. The \"distributed\" part in that\nscenario would be that one {log|lock|resource} manager could be client\nto several transaction managers simultaneously. The problem of\ndeadlock detection is that of cyclic dependency-detection among\nseveral transaction managers. Pessimistic Transaction Managers do use\nlocks and understand their semantics, thus they can communicate with\ntheir peers that are accessing the shared pool of locks.\n\nI would agree that the simplest solution for deadlock detection is a\ntimeout, but it certainly is not the only one.\n\nMost desirable would be a measure to choose which transaction to\nabort, which simultaneously avoids starvation (no more cycles, ever\nfor txn X), upper limits (txns beyond X objects / locks / cycles /\n... cannot happen), etc.. A timeout mechanism is not going to\napproach this measure, but an analysis of the dependency matrix with\nthe associated information on resource usage of each transaction might\nget close.\n\nso long,\n\nOliver\n",
"msg_date": "18 Apr 2001 18:57:22 +0200",
"msg_from": "Oliver Seidel <seidel@in-medias-res.com>",
"msg_from_op": false,
"msg_subject": "theory of distributed transactions / timeouts"
}
] |
[
{
"msg_contents": "> This is the real reason why I've been holding out for restricting the\n> feature to a specific LOCK TABLE statement: if it's designed that way,\n> at least you know which lock you are applying the timeout to, and have\n> some chance of being able to estimate an appropriate timeout.\n\nAs I pointed before - it's half useless.\n\nAnd I totally do not understand why to object feature\n\n1. that affects users *only when explicitly requested*;\n2. whose implementation costs nothing - ie has no drawbacks\n for overall system.\n\nIt was general practice in project so far: if user want some\nfeature and it doesn't affect others - let's do it.\nWhat's changed?\n\nVadim\n",
"msg_date": "Wed, 18 Apr 2001 09:59:04 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: AW: timeout on lock feature "
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > This is the real reason why I've been holding out for restricting the\n> > feature to a specific LOCK TABLE statement: if it's designed that way,\n> > at least you know which lock you are applying the timeout to, and have\n> > some chance of being able to estimate an appropriate timeout.\n> \n> As I pointed before - it's half useless.\n> \n> And I totally do not understand why to object feature\n> \n> 1. that affects users *only when explicitly requested*;\n> 2. whose implementation costs nothing - ie has no drawbacks\n> for overall system.\n> \n> It was general practice in project so far: if user want some\n> feature and it doesn't affect others - let's do it.\n> What's changed?\n\nThis is another reason to make it be SET TIMEOUT ... because then we\ndon't have to have this NOWAIT tacked on to every command. It keeps the\nparser and manual pages cleaner, and it is a non-standard extension.\n\nOne idea Tom had was to make it only active in a transaction, so you do:\n\n\tBEGIN WORK;\n\tSET TIMEOUT TO 10;\n\tUPDATE tab SET col = 3;\n\tCOMMIT\n\nTom is concerned people will do the SET and forget to RESET it, causing\nall queries to be affected by the timeout.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 14:58:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: timeout on lock feature"
}
] |
[
{
"msg_contents": "\ntgconstrrelid (in pg_trigger) holds table references in a RI trigger.\nThe value in this field is not successfully recreated after a\ndump/restore.\n\n---\n\nIf I create a simple relationship:\n\n create table p (id int primary key);\n create table c (pid int references p);\n\nand query the system table for the RI triggers:\n\n select tgrelid, tgname, tgconstrrelid from pg_trigger \n where tgisconstraint;\n\nI get (as expected) the trigger information:\n\n tgrelid | tgname | tgconstrrelid\n ---------+----------------------------+---------------\n 29122 | RI_ConstraintTrigger_29135 | 29096\n 29096 | RI_ConstraintTrigger_29137 | 29122\n 29096 | RI_ConstraintTrigger_29139 | 29122\n (3 rows)\n\nHowever, if I dump this database:\n\n[joel@olympus joel]$ pg_dump -sN test1 | grep -v - -- > test1\n\n\n CREATE TABLE \"p\" (\n \"id\" integer NOT NULL,\n Constraint \"p_pkey\" Primary Key (\"id\")\n );\n\n\n CREATE TABLE \"c\" (\n \"id\" integer NOT NULL\n );\n\n\n CREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER INSERT OR UPDATE ON\n \"c\" NOT DEFERRABLE INITIALLY\n IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE\n \"RI_FKey_check_ins\" ('<unnamed>',\n 'c', 'p', 'UNSPECIFIED', 'id', 'id');\n\n\n CREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER DELETE ON \"p\" NOT\n DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_del\" ('<unnamed>',\n 'c', 'p', 'UNSPECIFIED', 'id', 'id');\n\n\n CREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER UPDATE ON \"p\" NOT\n DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_upd\" ('<unnamed>', \n 'c', 'p', 'UNSPECIFIED', 'id', 'id');\n\n\nIf I drop the database and recreate from the dump:\n\n drop database test1;\n create database test1 with template=template0;\n \\c test1\n \\i test1\n\nand re-run the query on the pg_trigger table:\n\n select tgrelid, tgname, tgconstrrelid from pg_trigger \n where tgisconstraint;\n\nPG has lost the information on which table was being referred 
to\n(tgconstrrelid):\n\n tgrelid | tgname | tgconstrrelid\n ---------+----------------------------+---------------\n 29155 | RI_ConstraintTrigger_29168 | 0\n 29142 | RI_ConstraintTrigger_29170 | 0\n 29142 | RI_ConstraintTrigger_29172 | 0\n (3 rows)\n\nThe referential integrity still *works* though --\n\n test1=# insert into p values (1);\n INSERT 29174 1\n\n test1=# insert into c values (1);\n INSERT 29175 1\n\n test1=# insert into c values (2);\n ERROR: <unnamed> referential integrity violation - key referenced from\n c not found in p\n\n test1=# update p set id=2;\n ERROR: <unnamed> referential integrity violation - key in p still\n referenced from c\n\n test1=# delete from p;\n ERROR: <unnamed> referential integrity violation - key in p still \n referenced from c\n\nThe problem is that I've used tools that examine tgconstrrelid to\nreverse-engineer which relationships exist.\n\n\nIs this a bug? Am I misunderstanding a feature?\n\n(This was run with 7.1RC4; it's possible that this bug doesn't exist in\nthe release 7.1. I haven't been able to get the CVS server to work for\nabout 48 hours, so I haven't been able to upgrade.)\n\nThanks!\n\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Wed, 18 Apr 2001 13:08:06 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "[BUG?] tgconstrrelid doesn't survive a dump/restore"
},
{
"msg_contents": "Joel Burton <jburton@scw.org> writes:\n> tgconstrrelid (in pg_trigger) holds table references in a RI trigger.\n> The value in this field is not successfully recreated after a\n> dump/restore.\n\nYes, this problem was noted a couple months ago. AFAIK it was not fixed\nfor 7.1, but I concur that it should be fixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 14:38:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG?] tgconstrrelid doesn't survive a dump/restore "
},
{
"msg_contents": "On Wed, 18 Apr 2001, Tom Lane wrote:\n\n> Joel Burton <jburton@scw.org> writes:\n> > tgconstrrelid (in pg_trigger) holds table references in a RI trigger.\n> > The value in this field is not successfully recreated after a\n> > dump/restore.\n> \n> Yes, this problem was noted a couple months ago. AFAIK it was not fixed\n> for 7.1, but I concur that it should be fixed.\n\nJan/Philip/Tom --\n\nDo we know if the problem is in pg_dump, or is there no way\nto pass the tgconstrrelid value in the CREATE CONSTRAINT TRIGGER\nstatement?\n\n(I've read the dev docs on RI, but I haven't seen anyplace that\ndocuments what the arguments for the call are exactly, and a muddled\nwading through the source didn't help much.)\n\nIf there are no better suggestions for the before-the-real-fix fix, I\ncould make RI_pre_dump() and RI_post_dump() functions that would stick\nthis information into another table so that I won't lose that info. (Or,\ncan I always rely on digging it out of the preserved fields in pg_trig?)\n\nThanks!\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Wed, 18 Apr 2001 16:25:16 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "Re: [BUG?] tgconstrrelid doesn't survive a dump/restore "
},
{
"msg_contents": "Joel Burton <jburton@scw.org> writes:\n> Do we know if the problem is in pg_dump, or is there no way\n> to pass the tgconstrrelid value in the CREATE CONSTRAINT TRIGGER\n> statement?\n\nIIRC, pg_dump is just failing to transfer the value; it needs to emit\nan additional clause in the CREATE CONSTRAINT command to do so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Apr 2001 16:30:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG?] tgconstrrelid doesn't survive a dump/restore "
},
{
"msg_contents": "At 16:30 18/04/01 -0400, Tom Lane wrote:\n>\n>IIRC, pg_dump is just failing to transfer the value; it needs to emit\n>an additional clause in the CREATE CONSTRAINT command to do so.\n>\n\nFrom memory, this is one of the non-standard SQL things that pg_dump still\ndoes (ie. defining the constraint using rule definitions). I'll see if I\ncan find a way of constructing the FK constraint properly, but don't hold\nyour breath.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 19 Apr 2001 12:29:15 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [BUG?] tgconstrrelid doesn't survive a dump/restore "
},
{
"msg_contents": "At 16:25 18/04/01 -0400, Joel Burton wrote:\n>\n>Do we know if the problem is in pg_dump, or is there no way\n>to pass the tgconstrrelid value in the CREATE CONSTRAINT TRIGGER\n>statement?\n>\n\nIt's because pg_dump is not designed to dump these constraints *as*\nconstraints. We just need to make pg_dump clever enough to do that.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 19 Apr 2001 12:30:55 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a\n dump/restore"
},
{
"msg_contents": "Philip Warner wrote:\n> At 16:25 18/04/01 -0400, Joel Burton wrote:\n> >\n> >Do we know if the problem is in pg_dump, or is there no way\n> >to pass the tgconstrrelid value in the CREATE CONSTRAINT TRIGGER\n> >statement?\n> >\n>\n> It's because pg_dump is not designed to dump these constraints *as*\n> constraints. We just need to make pg_dump clever enough to do that.\n\n IMHO there's nothing fundamentally wrong with having pg_dump\n dumping the constraints as special triggers, because they are\n implemented in PostgreSQL as triggers. And the required\n feature to correctly restore the tgconstrrelid is already in\n the backend, so pg_dump should make use of it (right now,\n after a dump/restore, a DROP of a table involved in\n referential integrity wouldn't correctly remove the triggers\n from the referencing/referenced opposite table(s)).\n\n The advantage of having pg_dump output these constraints as\n proper ALTER TABLE commands would only be readability and\n easier portability (from PG to another RDBMS).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 19 Apr 2001 08:42:22 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a dump/restore"
},
{
"msg_contents": "At 08:42 19/04/01 -0500, Jan Wieck wrote:\n>>\n>> It's because pg_dump is not designed to dump these constraints *as*\n>> constraints. We just need to make pg_dump clever enough to do that.\n>\n> IMHO there's nothing fundamentally wrong with having pg_dump\n> dumping the constraints as special triggers, because they are\n> implemented in PostgreSQL as triggers. \n\nNot sure if it's fundamentally wrong, but ISTM that making pg_dump use the\nSQL standards whenever possible will make dump files portable across\nversions as well as other RDBMSs. It is also, as you say, more readable.\n\n\n> and the required\n> feature to correctly restore the tgconstrrelid is already in\n> the backend, so pg_dump should make use of it \n\nNo problem there - just tell me how...\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 20 Apr 2001 00:32:21 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a\n dump/restore"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> IMHO there's nothing fundamentally wrong with having pg_dump\n> dumping the constraints as special triggers, because they are\n> implemented in PostgreSQL as triggers. ...\n> The advantage of having pg_dump output these constraints as\n> proper ALTER TABLE commands would only be readability and\n> easier portability (from PG to another RDBMS).\n\nMore to the point, it would allow easier porting to future Postgres\nreleases that might implement constraints differently. So I agree with\nPhilip that it's important to have these constructs dumped symbolically\nwherever possible.\n\nHowever, if that's not likely to happen right away, I think a quick hack\nto restore tgconstrrelid in the context of the existing approach would\nbe a good idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 10:40:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a dump/restore "
},
{
"msg_contents": "On Thu, 19 Apr 2001, Tom Lane wrote:\n\n> Jan Wieck <JanWieck@yahoo.com> writes:\n> > IMHO there's nothing fundamentally wrong with having pg_dump\n> > dumping the constraints as special triggers, because they are\n> > implemented in PostgreSQL as triggers. ...\n> > The advantage of having pg_dump output these constraints as\n> > proper ALTER TABLE commands would only be readability and\n> > easier portability (from PG to another RDBMS).\n> \n> More to the point, it would allow easier porting to future Postgres\n> releases that might implement constraints differently. So I agree with\n> Philip that it's important to have these constructs dumped symbolically\n> wherever possible.\n> \n> However, if that's not likely to happen right away, I think a quick hack\n> to restore tgconstrrelid in the context of the existing approach would\n> be a good idea.\n\nNot having the right value was stopping me in a project, so I put together\na rather fragile hack:\n\nFirst, a view that shows info about relationships:\n\n\nCREATE VIEW dev_ri_detect AS\nSELECT t.oid AS trigoid, \n c.relname AS trig_tbl,\n t.tgrelid,\n rtrunc(text(f.proname), 3) AS trigfunc, \n t.tgconstrname, c2.relname \nFROM pg_trigger t \nJOIN pg_class c ON (t.tgrelid = c.oid) \nJOIN pg_proc f ON (t.tgfoid = f.oid)\nLEFT JOIN pg_class c2 ON (t.tgconstrrelid = c2.oid) \nWHERE t.tgisconstraint;\n\n\nThen, the new part, a function that iterates over RI sets (grouped by\nname*). It stores the 'other' table in tgconstrrelid, knowing that the\n'_ins' action is for the child, and that '_del' and '_upd' are for the\nparent.\n\n* - It requires that your referential integrity constraints have unique\nnames (not a bad idea anyway). 
eg: CREATE TABLE child (pid INT CONSTRAINT\nchild__ref_pid REFERENCES parent)\n\n* - it completely relies on how RI is handled as of Pg7.1, including the\nexact names of the RI functions.\n\nAfter a dump/restore cycle, just select dev_ri_fix(); It does seem to\nwork, but do try it on a backup copy of your database, please!\n\n\ncreate function dev_ri_fix() returns int as '\ndeclare \n count_fixed\n int := 0; \n rec_ins record; \n rec_del record; \n upd_oid oid; \nbegin \n for rec_ins in select trigoid, \n tgrelid, \n tgconstrname \n from dev_ri_detect\n where rtrunc(trigfunc,3)='ins' \n loop \n select trigoid,\n tgrelid \n into rec_del from dev_ri_detect \n where tgconstrname=rec_ins.tgconstrname \n and rtrunc(trigfunc,3)='del'; \n\n if not found then\n raise notice 'No Match: % %', rec_ins.tgconstrname, rec_ins.trigoid;\n else\n upd_oid := trigoid \n from dev_ri_detect \n where tgconstrname=rec_ins.tgconstrname \n and rtrunc(trigfunc,3)='upd'; \n update pg_trigger \n set tgconstrrelid=rec_del.tgrelid \n where oid=rec_ins.trigoid; \n update pg_trigger \n set tgconstrrelid=rec_ins.tgrelid \n where oid=rec_del.trigoid;\n update pg_trigger \n set tgconstrrelid=rec_ins.tgrelid \n where oid=upd_oid; \n count_fixed :=count_fixed + 1; \n end if; \n end loop; \n return count_fixed; \nend;\n' language 'plpgsql';\n\n(it's not terribly optimized--I normally work w/databases <=300 tables)\n\n\nAlso helpful: sometimes, after dropping, rebuilding and tinkering with a\nschema, I find that I'm left w/half of my referential integrity: (the\nparent has upd/del rules, but the child has no ins, or vice versa). The\nfollowing query helps find these:\n\nSELECT tgconstrname,\n comma(trigfunc) as funcs,\n count(*) as count\nFROM dev_ri_detect\nGROUP BY tgconstrname\nHAVING count(*) < 3;\n\nIt also requires that you have named constraints.\n\nIt uses a function, comma(), that just aggregates a resultset into a\ncomma-separated list. 
This function (which I find generally useful) is in\nRoberto Mello's Cookbook, via techdocs.postgresql.org.\n\n\nAnyway, here's hoping that someone fixes the dumping problem (emitting as\nreal constraints would be *much* nicer), but in the meantime, this stuff\nmay be useful.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Thu, 19 Apr 2001 13:10:03 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a dump/restore"
},
{
"msg_contents": "On Thu, 19 Apr 2001, Tom Lane wrote:\n\n> Jan Wieck <JanWieck@yahoo.com> writes:\n> > IMHO there's nothing fundamentally wrong with having pg_dump\n> > dumping the constraints as special triggers, because they are\n> > implemented in PostgreSQL as triggers. ...\n> > The advantage of having pg_dump output these constraints as\n> > proper ALTER TABLE commands would only be readability and\n> > easier portability (from PG to another RDBMS).\n> \n> More to the point, it would allow easier porting to future Postgres\n> releases that might implement constraints differently. So I agree with\n> Philip that it's important to have these constructs dumped symbolically\n> wherever possible.\n> \n> However, if that's not likely to happen right away, I think a quick hack\n> to restore tgconstrrelid in the context of the existing approach would\n> be a good idea.\n\nA while ago, I wrote up a small tutorial example about using RI\nw/Postgres. There wasn't much response to a RFC, but it might be helpful\nfor people trying to learn what's in pg_trigger. It includes a discussion\nabout how to disable RI, change an action, etc.\n\nIt's at\n \nhttp://www.ca.postgresql.org/mhonarc/pgsql-docs/archive/pgsql-docs.200012\n\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Thu, 19 Apr 2001 13:12:42 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a dump/restore"
}
] |
[
{
"msg_contents": "Hello Group!\nDoes anybody know how I can get the difference between two dates (timestamps)\nin seconds, or convert a timestamp to an integer or float type (in any acceptable\nsense)?\n\nThank you in advance,\nSergiy.\n\n\n\n",
"msg_date": "Wed, 18 Apr 2001 20:43:38 +0300",
"msg_from": "\"Sergiy Ovcharuk\" <Sergey.Ovcharuk@tessart.kiev.ua>",
"msg_from_op": true,
"msg_subject": "get difference between two timestamp value in second?"
},
{
"msg_contents": "> Does anybody knows how I can get the difference between two date (timestamp)\n> is secondes or convert timestamp to integer or float type (in any acceptable\n> sense).\n\nlockhart=# select date_part('epoch', timestamp 'today' - timestamp\n'yesterday');\n date_part \n-----------\n 86400\n",
"msg_date": "Thu, 19 Apr 2001 14:54:20 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: get difference between two timestamp value in second?"
}
] |
[
{
"msg_contents": "I am in the middle of a rather nasty experience that I hope someone\nout\nthere can help solve.\n \n My hard disk partition with the postgres data directory got full. I\ntried to shut down postgres so I could clear some space, nothing\nhappened. So I did a reboot. On restart (after clearing some\npg_sorttemp.XX files), I discovered that all my tables appear empty!\nWhen I check in the data directories of the databases, I see that the\nfiles for each table have data (they are still of the size as before).\n \n I've been running some experiments on another machine and notice that\nif I remove the pg_log file, databases seem to disappear (or data to\nbecome invisible). So I am guessing that postgres is looking in one\nplace and deciding there is no data. Now I need to get my data of\ncourse! Any solutions?? My programming skills are generally very good\nso\nif it involves some code I'd have no problem. How do I get a dump of\nthe\nraw data (say, copy-style output) from the table files? Please help! \n\n Thanks\n\nPaul Bagyenda\n",
"msg_date": "Wed, 18 Apr 2001 21:01:57 +0300",
"msg_from": "\"P. A. Bagyenda\" <bagyenda@dsmagic.com>",
"msg_from_op": true,
"msg_subject": "problems with corrupted db on v7.0.3 on linux 2.2.x"
}
] |
[
{
"msg_contents": "I just checked the CRN PostgreSQL article at:\n\n http://www.crn.com/Sections/Fast_Forward/fast_forward.asp?ArticleID=25670\n\nI see no changes to the article, even though Vince, our webmaster; Geoff\nDavidson of PostgreSQL, Inc.; and Dave Mele of Great Bridge have\nrequested it be fixed. Not sure what we can do now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 14:22:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "CRN article not updated"
},
{
"msg_contents": "On Wed, Apr 18, 2001 at 02:22:48PM -0400, Bruce Momjian wrote:\n> I just checked the CRN PostgreSQL article at:\n> \n> http://www.crn.com/Sections/Fast_Forward/fast_forward.asp?ArticleID=25670\n> \n> I see no changes to the article, even though Vince our webmaster, Geoff\n> Davidson of PostgreSQL, Inc, and Dave Mele of Great Bridge have\n> requested it be fixed. \n\nIf _you_ had been deluged with that kind of vitriol, what kind of favors \nwould you feel like doing?\n\n> Not sure what we can do now.\n\nIt's too late. \"We\" screwed it up. (Thanks again, guys.)\nThe responses have done far more lasting damage than any article \ncould ever have done. The horse is dead. \n\nThe best we can do is to plan for the future. \n\n1. What happens the next time a slightly inaccurate article is published? \n2. What happens when an openly hostile article is published?\n\nWill our posse ride off again with guns blazing, making more enemies? \nWill they make us all look to potential users like a bunch of hotheaded, \nchildish nobodies?\n\nOr will we have somebody appointed, already, to write a measured,\nrational, mature clarification? Will we have articles already written,\nand handed to more responsible reporters, so that an isolated badly-done \narticle can do little damage?\n\nWe're not even on Oracle's radar yet. When PG begins to threaten their \nincome, their marketing department will go on the offensive. Oracle \nmarketing is very, very skillful, and very, very nasty. If they find \nthat by seeding the press with reasonable-sounding criticisms of PG, \nthey can prod the PG community into making itself look like idiots, \nthey will go to town on it.\n\nNathan Myers\nncm@zembu.com\n\n",
"msg_date": "Wed, 18 Apr 2001 13:49:43 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRN article not updated"
},
{
"msg_contents": "> If _you_ had been deluged with that kind of vitriol, what kind of favors\n> would you feel like doing?\n\n Well, one person's opinion on the article that was perhaps expressed a\nlittle harshly shouldn't cause the company to cover their ears and hum when\ntheir article is in need of multiple corrections (as pointed out by many people,\nsome involved professionally with PostgreSQL and some not).. I did go through\nand read some other articles written by this author -- they're all pretty bad\nand filled with half-researched statements.\n\n The article also made it seem as if Great Bridge owned and developed\nPostgreSQL, which is of course totally false. That stepped on some fingers I'm\n*sure*.\n\n> It's too late. \"We\" screwed it up. (Thanks again, guys.)\n> The responses have done far more lasting damage than any article\n> could ever have done. The horse is dead.\n\n There isn't really a \"we\", is there? The PostgreSQL community isn't any kind\nof entity that can be governed.. Great Bridge and PostgreSQL INC are such\nentities and I'm sure they both handled the situation differently than the people\nthat sent in their personal opinions..\n\n> The best we can do is to plan for the future.\n>\n> 1. What happens the next time a slightly inaccurate article is published?\n\n The author and publisher will probably get flamed by angry PostgreSQL users,\ndemanding the article be corrected :-)\n\n Regardless of it being right or good for the (commercial) success of\nPostgreSQL, people will get pissed and express some pretty harsh opinions when\nsomething they know and love is being insulted or otherwise attacked.\n\n> 2. What happens when an openly hostile article is published?\n\n See above, take result of above and cube the intensity factor. :-)\n\n> Will our posse ride off again with guns blazing, making more enemies?\n> Will they make us all look to potential users like a bunch of hotheaded,\n> childish nobodies?\n\n Who is \"we\"? Even if we (\"we\" being you and I) come up with something we\nthink is best we can't force others to do whatever that may be. Result? Someone\nis always going to get angry and let the person(s) attacking know what's on\ntheir mind.\n\n> Or will we have somebody appointed, already, to write a measured,\n> rational, mature clarification? Will we have articles already written,\n> and handed to more responsible reporters, so that an isolated badly-done\n> article can do little damage?\n\n Great Bridge and their marketing people will do this...... Maybe? After all,\nthe PostgreSQL users/developers don't have to market their product, they're not\nselling anything! (I do know some of them now work for Great Bridge, though)\n\n> We're not even on Oracle's radar yet. When PG begins to threaten their\n> income, their marketing department will go on the offensive. Oracle\n> marketing is very, very skillful, and very, very nasty. If they find\n> that by seeding the press with reasonable-sounding criticisms of PG,\n> they can prod the PG community into making itself look like idiots,\n> they will go to town on it.\n\n This is something for companies, like Great Bridge, to deal with and just\nisn't an issue for the PostgreSQL development/user community as we're not\nmarketing anything :-)\n\n After saying all that, let me say this.. I use PostgreSQL and have been\nusing it for several years now, I think it's the best RDBMS out there for me and\nI have recommended (and used) it for every database-driven project I've done to\ndate. I haven't had any trouble convincing clients to use PostgreSQL over Oracle\n(and everyone that wants some software written always wants to use Oracle!). I\npresent the facts of PostgreSQL and in every one of the cases I've been involved\nin, PostgreSQL is simply the best choice, everyone ends up happy..\n\n To all involved in the development of PostgreSQL : THANKS!\n\n-Mitch\nLife is good. Be happy.\n\n\n\n",
"msg_date": "Wed, 18 Apr 2001 18:32:07 -0400",
"msg_from": "\"Mitch Vincent\" <mitch@venux.net>",
"msg_from_op": false,
"msg_subject": "Re: CRN article not updated"
}
] |
[
{
"msg_contents": "> One idea Tom had was to make it only active in a transaction, \n> so you do:\n> \n> \tBEGIN WORK;\n> \tSET TIMEOUT TO 10;\n> \tUPDATE tab SET col = 3;\n> \tCOMMIT\n> \n> Tom is concerned people will do the SET and forget to RESET \n> it, causing all queries to be affected by the timeout.\n\nAnd what? Queries would be just aborted. It's not critical event\nto take responsibility from user.\n\nVadim\n",
"msg_date": "Wed, 18 Apr 2001 12:24:09 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: AW: timeout on lock feature"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > One idea Tom had was to make it only active in a transaction, \n> > so you do:\n> > \n> > \tBEGIN WORK;\n> > \tSET TIMEOUT TO 10;\n> > \tUPDATE tab SET col = 3;\n> > \tCOMMIT\n> > \n> > Tom is concerned people will do the SET and forget to RESET \n> > it, causing all queries to be affected by the timeout.\n> \n> And what? Queries would be just aborted. It's not critical event\n> to take responsibility from user.\n\nHey, I agree. If the users wants the TIMEOUT, give it to them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Apr 2001 16:13:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: timeout on lock featurey"
}
] |
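The semantics Vadim and Bruce agree on above — a lock wait that gives up after a timeout and simply aborts the statement, rather than blocking forever — can be sketched in plain Python. This is a toy illustration, not PostgreSQL internals; all names (`LockTimeout`, `guarded_update`) are invented for the example.

```python
import threading

# Toy sketch of the debated feature: a statement waiting on a lock is
# "just aborted" once a per-transaction timeout expires, instead of
# blocking indefinitely. Names here are illustrative only.

class LockTimeout(Exception):
    """The lock could not be obtained within the timeout."""

def guarded_update(lock, timeout):
    """Try to take `lock` within `timeout` seconds; abort on failure."""
    if not lock.acquire(timeout=timeout):
        raise LockTimeout("canceling statement: lock wait timed out")
    try:
        pass  # ... perform the UPDATE while holding the lock ...
    finally:
        lock.release()

row_lock = threading.Lock()
row_lock.acquire()            # some other "backend" already holds the lock

aborted = False
try:
    guarded_update(row_lock, timeout=0.1)   # SET TIMEOUT TO 10, scaled down
except LockTimeout:
    aborted = True            # only this statement fails; nothing worse

print(aborted)  # True
```

As the thread notes, the aborted query is not a critical event: the client sees an ordinary error and can retry or move on.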
[
{
"msg_contents": "\n> > > The only way PG could apply reasonable timeouts would be for the \n> > > application to dictate them, \n> > \n> > That is exactly what we are talking about here.\n> \n> No. You wrote elsewhere that the application sets \"30 seconds\" and\n> leaves it. But that 30 seconds doesn't have any application-level\n> meaning\n\nIn interactive OLTP it does.\n\n> -- an operation could take twelve hours without tripping your\n> 30-second timeout.\n\nNot in OLTP. Using the feature for a batch client with a low timeout\nwould be plain wrong.\n\n> What might be a reasonable alternative would be a BEGIN timeout: report \n> failure as soon as possible after N seconds unless the timer is reset, \n> such as by a commit. Such a timeout would be meaningful at the \n> database-interface level. It could serve as a useful building block \n> for application-level timeouts when the client environment has trouble \n> applying timeouts on its own.\n\nI like that, but I do not think it is feasible. \n\nI do not think that you can guarantee an answer within x seconds,\nbe it positive or negative. But that is what this feature would imply.\nIf the client needs to react within x sec there is no way around implementing this in \nthe client (there could be all kinds of trouble between client and backend). \n\nOn a very busy server (in OLTP that is the only real reason your query takes too \nlong other than waiting for a lock) you will produce still more work with this feature.\nThat is also partly why I think that a lock timeout feature really makes sense\nfor interactive OLTP clients. It is not a perfect solution, but it usually serves\nthe purpose very well and keeps the client simple. And I do not agree, that it\nis an objective to keep the db code simple at the cost of making a standard client\nmore complex. \n\nAndreas\n",
"msg_date": "Thu, 19 Apr 2001 11:24:29 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: timeout on lock feature"
}
] |
[
{
"msg_contents": "\nI tried to do a 'kill <pid>' like I would have in v7.0.3, doesn't affect\nit ... so, how to get rid of idle process that have been sitting around\nfor a long time, without having to shutdown the database itself?\n\npgsql 64484 0.0 1.0 15352 10172 p4- I Sat08PM 0:00.15 postmaster: hordemgr horde 216.126.85.170 idle (postgres)\npgsql 64760 0.0 1.0 15352 10420 p4- I Sat08PM 0:00.21 postmaster: hordemgr horde 216.126.85.170 idle (postgres)\npgsql 94615 0.0 0.8 15336 8488 p4- I Sun12AM 0:00.09 postmaster: hordemgr horde 216.126.85.86 idle (postgres)\npgsql 99711 0.0 0.9 15336 9688 p4- I Sun01AM 0:00.14 postmaster: hordemgr horde 216.126.85.188 idle (postgres)\npgsql 99810 0.0 0.9 15352 9124 p4- I Sun01AM 0:00.12 postmaster: hordemgr horde 216.126.85.188 idle (postgres)\npgsql 1768 0.0 0.8 15336 8112 p4- I Sun01AM 0:00.09 postmaster: hordemgr horde 216.126.85.86 idle (postgres)\npgsql 8781 0.0 1.1 15368 11016 p4- I Sun09AM 0:00.23 postmaster: hordemgr horde 216.126.85.30 idle (postgres)\npgsql 29503 0.0 1.1 15352 11052 p4- I Sun10AM 0:00.23 postmaster: hordemgr horde 216.126.85.30 idle (postgres)\npgsql 29930 0.0 1.0 15352 10556 p4- I Sun11PM 0:00.15 postmaster: hordemgr horde 216.126.85.100 idle (postgres)\npgsql 29985 0.0 1.0 15352 10192 p4- I Sun11PM 0:00.14 postmaster: hordemgr horde 216.126.85.100 idle (postgres)\npgsql 57446 0.0 0.6 15336 6784 p4- I Mon09AM 0:00.06 postmaster: hordemgr horde 216.126.85.243 idle (postgres)\npgsql 57961 0.0 0.7 15336 6884 p4- I Mon09AM 0:00.06 postmaster: hordemgr horde 216.126.85.243 idle (postgres)\npgsql 98541 0.0 0.8 15352 8756 p4- I Mon11AM 0:00.10 postmaster: hordemgr horde 216.126.85.13 idle (postgres)\npgsql 98601 0.0 0.7 15336 6796 p4- I Mon11AM 0:00.06 postmaster: hordemgr horde 216.126.85.13 idle (postgres)\npgsql 16510 0.0 0.7 15352 7624 p4- I Mon01PM 0:00.07 postmaster: hordemgr horde 216.126.85.196 idle (postgres)\npgsql 17052 0.0 0.7 15352 6936 p4- I Mon01PM 0:00.06 postmaster: hordemgr horde 
216.126.85.196 idle (postgres)\npgsql 86741 0.0 0.6 15336 6660 p4- I Mon10PM 0:00.06 postmaster: hordemgr horde 216.126.85.119 idle (postgres)\npgsql 8388 0.0 0.6 15336 6484 p4- I Tue12AM 0:00.05 postmaster: hordemgr horde 216.126.85.95 idle (postgres)\npgsql 27186 0.0 1.0 15368 10880 p4- I Tue09AM 0:00.24 postmaster: hordemgr horde 216.126.85.184 idle (postgres)\npgsql 27512 0.0 0.9 15352 9616 p4- I Tue09AM 0:00.15 postmaster: hordemgr horde 216.126.85.184 idle (postgres)\npgsql 91723 0.0 1.0 15368 10364 p4- I Tue12PM 0:00.16 postmaster: hordemgr horde 216.126.85.194 idle (postgres)\npgsql 45351 0.0 0.6 15336 5952 p4- I 10:03AM 0:00.04 postmaster: hordemgr horde 216.126.85.186 idle (postgres)\npgsql 35967 0.0 0.5 15336 4960 ?? I 8:53AM 0:00.03 postmaster: hordemgr horde 216.126.84.1 idle (postgres)\npgsql 36678 0.0 0.0 1008 312 p4 R+ 8:55AM 0:00.00 grep hordemgr\npgsql 45200 0.0 0.7 15356 6828 p4- I 10:03AM 0:00.06 postmaster: hordemgr horde 216.126.85.186 idle (postgres)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 19 Apr 2001 09:57:30 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "idle processes in v7.1 ... not killable?"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> I tried to do a 'kill <pid>' like I would have in v7.0.3, doesn't affect\n> it ...\n\nHuh? Doesn't kill default to -TERM on your machine? That works fine\nfor me ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 14:23:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: idle processes in v7.1 ... not killable? "
},
{
"msg_contents": "\nOkay, I *swear* I tried both 'kill <pid>' and 'kill -TERM <pid>' this\nmorning before I sent this out .. just tried it again and it worked :(\n\n*shrug*\n\nOn Thu, 19 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > I tried to do a 'kill <pid>' like I would have in v7.0.3, doesn't affect\n> > it ...\n>\n> Huh? Doesn't kill default to -TERM on your machine? That works fine\n> for me ...\n>\n> \t\t\tregards, tom lane\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 19 Apr 2001 15:31:46 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: idle processes in v7.1 ... not killable? "
}
] |
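Tom's point — that plain `kill` already sends SIGTERM — is general Unix behavior, demonstrable with a throwaway child process (not a real postmaster backend):

```python
import os
import signal
import subprocess
import sys

# Why plain `kill <pid>` works on an idle backend: with no signal number
# given, kill sends SIGTERM, and a process that has not trapped the
# signal simply exits. Shown here with a disposable sleeping child.

child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
os.kill(child.pid, signal.SIGTERM)   # equivalent to `kill <pid>` from a shell
child.wait()

print(child.returncode)  # -15 on POSIX: terminated by signal 15 (SIGTERM)
```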
[
{
"msg_contents": "Oldtimers might recall the last thread about enhancements of the access\nprivilege system. See\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2000-05/msg01220.html\n\nto catch up.\n\nIt was more or less agreed that privilege descriptors should be split out\ninto a separate table for better flexibility and ease of processing. The\ndispute was that the old proposal wanted to store only one privilege per\nrow. I have devised something more efficient:\n\npg_privilege (\n priobj oid,\t\t\t-- oid of table, column, function, etc.\n prigrantor oid,\t\t-- user who granted the privilege\n prigrantee oid,\t\t-- user who owns the privilege\n\n priselect char,\t\t-- specific privileges follow...\n prihierarchy char,\n priinsert char,\n priupdate char,\n pridelete char,\n prireferences char,\n priunder char,\n pritrigger char,\n prirule char\n /* obvious extension mechanism... */\n)\n\nThe various \"char\" fields would be NULL for not granted, some character\nfor granted, and some other character for granted with grant option (a\npoor man's enum, if you will). Votes on the particular characters are\nbeing taken. ;-) Since NULLs are stored specially, sparse pg_privilege\nrows wouldn't take extra space.\n\n\"Usage\" privileges on types and other non-table objects could probably be\nlumped under \"priselect\" (purely for internal purposes).\n\nFor access we define system caches on these indexes:\n\nindex ( priobj, prigrantee, priselect )\nindex ( priobj, prigrantee, prihierarchy )\nindex ( priobj, prigrantee, priinsert )\nindex ( priobj, prigrantee, priupdate )\nindex ( priobj, prigrantee, pridelete )\n\nThese are the privileges you usually need quickly during query processing,\nthe others are only needed during table creation. 
These indexes are not\nunique (more than one grantor can grant the same privilege), but AFAICS\nthe syscache interface should work okay with this, since in normal\noperation we don't care who granted the privilege, only whether you have\nat least one.\n\nHow does that look?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 19 Apr 2001 17:58:12 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "System catalog representation of access privileges"
},
{
"msg_contents": "So, this will remove the relacl field from pg_class, making pg_class\na fixed tuple-length table: that might actually speed access: there\nare shortcircuits in place to speed pointer math when this is true.\n\nThe implementation looks fine to me, as well. How are group privileges\ngoing to be handled with this system?\n\nRoss\n\nOn Thu, Apr 19, 2001 at 05:58:12PM +0200, Peter Eisentraut wrote:\n> Oldtimers might recall the last thread about enhancements of the access\n> privilege system. See\n> \n> http://www.postgresql.org/mhonarc/pgsql-hackers/2000-05/msg01220.html\n> \n> to catch up.\n> \n> It was more or less agreed that privilege descriptors should be split out\n> into a separate table for better flexibility and ease of processing. The\n> dispute was that the old proposal wanted to store only one privilege per\n> row. I have devised something more efficient:\n> \n> pg_privilege (\n\n<snip>\n\n",
"msg_date": "Thu, 19 Apr 2001 14:37:48 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: System catalog representation of access privileges"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> I have devised something more efficient:\n> \n> pg_privilege (\n> priobj oid, -- oid of table, column, etc.\n> prigrantor oid, -- user who granted the privilege\n> prigrantee oid, -- user who owns the privilege\n> \n> priselect char, -- specific privileges follow...\n> prihierarchy char,\n> priinsert char,\n> priupdate char,\n> pridelete char,\n> prireferences char,\n> priunder char,\n> pritrigger char,\n> prirule char\n> /* obvious extension mechanism... */\n> )\n>\n> \"Usage\" privileges on types and other non-table objects could probably be\n> lumped under \"priselect\" (purely for internal purposes).\n> \n\nThat looks quite nice. I do have 3 quick questions though. First, I\nassume that the prigrantee could also be a group id? Or would this\nsystem table represent the effective privileges granted to user via\ngroups? Second, one nice feature of Oracle is the ability to GRANT roles\n(our groups) to other roles. So I could do:\n\nCREATE ROLE clerk;\nGRANT SELECT on mascarm.deposits TO clerk;\nGRANT UPDATE (mascarm.deposits.amount) ON mascarm.deposits TO clerk;\n\nCREATE ROLE banker;\nGRANT clerk TO banker;\n\nWould any part of your design prohibit such functionality in the future?\n\nFinally, I'm wondering if \"Usage\" or \"System\" privileges should be\nanother system table. 
For example, one day I would like to (as in\nOracle):\n\nGRANT SELECT ANY TABLE TO foo WITH ADMIN;\nGRANT CREATE PUBLIC SYNONYM TO foo;\nGRANT DROP ANY TABLE TO foo;\n\nPresumably, in your design, the above would be represented by 3 records\nwith something like the following values:\n\nThis would be a \"SELECT ANY TABLE\" privilege (w/Admin):\n\nNULL, grantor_oid, grantee_oid, 'S', NULL, NULL, NULL, NULL, ...\n\nThis would be a \"CREATE PUBLIC SYNONYM\" privilege:\n\nNULL, grantor_oid, grantee_oid, 'c', NULL, NULL, NULL, NULL, ...\n\nThat means that the system would need an index as:\n\nindex ( prigrantee, priselect )\n\nWhile I'm not arguing it won't work, it just doesn't \"seem\" clean to\nshoe-horn the system privileges into the same table as the object\nprivileges.\n\nI've been wrong before though :-)\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Thu, 19 Apr 2001 16:17:52 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: System catalog representation of access privileges"
},
{
"msg_contents": "Mike Mascari writes:\n\n> That looks quite nice. I do have 3 quick questions though. First, I\n> assume that the prigrantee could also be a group id?\n\nYes. It was also suggested making two different grantee columns for users\nand groups, but I'm not yet convinced of that. It's an option though.\n\n> Second, one nice feature of Oracle is the ability to GRANT roles\n> (our groups) to other roles.\n\nRoles are not part of this deal, although I agree that they would be nice\nto have eventually. I'm not sure yet whether role grants would get a\ndifferent system table, but I'm leaning there.\n\n> Would any part of your design prohibit such functionality in the future?\n\nNot that I can see.\n\n> Finally, I'm wondering if \"Usage\" or \"System\" privileges should be\n> another system table. For example, one day I would like to (as in\n> Oracle):\n>\n> GRANT SELECT ANY TABLE TO foo WITH ADMIN;\n\nANY TABLE probably implies \"any table in this schema/database\", no? In\nthat case the grant record would refer to the oid of the schema/database.\nIs there any use distinguishing between ANY TABLE and ANY VIEW? That\nwould make it a bit trickier.\n\n> GRANT CREATE PUBLIC SYNONYM TO foo;\n\nI'm not familiar with that above command.\n\n> GRANT DROP ANY TABLE TO foo;\n\nI'm not sold on a DROP privilege, but a CREATE privilege would be another\ncolumn. I didn't include it here because it's not in SQL.\n\n> While I'm not arguing it won't work, it just doesn't \"seem\" clean to\n> shoe-horn the system privileges into the same table as the object\n> privileges.\n\nIt would make sense to split privileges on tables from privileges on\nschemas/databases from privileges on, say, functions, etc. 
E.g.,\n\npg_privtable\t-- like proposed\n\npg_privschema (\n priobj oid, prigrantor oid, prigrantee oid,\n char pritarget,\t-- 't' = any table, 'v' = any view, ...\n char priselect,\n char priupdate,\n /* etc */\n)\n\nBut this would mean that a check like \"can I select from this table\"\nwould possibly require lookups in two tables. Not sure how much of a\ntradeoff that is, but the \"shoehorn factor\" would be lower.\n\nComments on this?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 19 Apr 2001 23:24:51 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: System catalog representation of access privileges"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> pg_privilege (\n> priobj oid,\t\t\t-- oid of table, column, function, etc.\n> prigrantor oid,\t\t-- user who granted the privilege\n> prigrantee oid,\t\t-- user who owns the privilege\n\nWhat about groups? What about wildcards? We already allow\n\"grant <priv> to PUBLIC (all)\", and it would be nice to be able to do\nsomething like \"grant <on everything I own> to joeblow\"\n\n> Since NULLs are stored specially, sparse pg_privilege\n> rows wouldn't take extra space.\n\nUnless there get to be a very large number of privilege bits, it'd\nprobably be better to handle these columns as NOT NULL, so that a fixed\nC struct record could be mapped onto the tuples. You'll notice that\nmost of the other system tables are done that way.\n\nAlternatively, since you really only need two bits per privilege,\nperhaps a pair of BIT (VARYING?) fields would be a more effective\napproach. BIT VARYING would have the nice property that adding a new\nprivilege type doesn't force initdb.\n\n> For access we define system caches on these indexes:\n\n> index ( priobj, prigrantee, priselect )\n> index ( priobj, prigrantee, prihierarchy )\n> index ( priobj, prigrantee, priinsert )\n> index ( priobj, prigrantee, priupdate )\n> index ( priobj, prigrantee, pridelete )\n\nUsing the privilege bits as part of the index won't work if you intend\nto allow them to be null. Another objection is that this would end up\ncaching multiple copies of the same tuple. A third is that you can't\nreadily tell lack of an entry (implying you should use a default ACL\nsetting, which might allow the access) from presence of an entry denying\nthe access. A fourth is it doesn't work for groups or wildcards.\n\n> These indexes are not\n> unique (more than one grantor can grant the same privilege), but AFAICS\n> the syscache interface should work okay with this,\n\nUnfortunately not. 
The syscache stuff needs unique indexes, because it\ncan only return one tuple for any given request.\n\nI don't really believe this indexing scheme is workable. Need to think\nsome more. Possibly the syscache mechanism will not do, and we need a\nspecially indexed privilege cache instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 17:53:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog representation of access privileges "
},
{
"msg_contents": "First, let me say that just because Oracle does it this way doesn't make\nit better but...\n\nOracle divides privileges into 2 categories:\n\nObject privileges\nSystem privileges\n\nThe Object privileges are the ones you describe. And I agree\nfundamentally with your design. Although I would have (a) used a bitmask\nfor the privileges and (b) have an additional bitmask which determines\nwhether or not the Grantee could turn around and grant the same\npermission to someone else:\n\npg_objprivs {\n\tpriobj oid,\n\tprigrantor oid,\n\tprigrantee oid,\n\tpriprivileges int4,\n\tpriadmin int4\n};\n\nWhere priprivileges is a bitmask for:\n\n0 ALTER - tables, sequences\n1 DELETE - tables, views\t\n2 EXECUTE - procedures, functions\n3 INDEX - tables\n4 INSERT - tables, views\n5 REFERENCES - tables\n6 SELECT - tables, views, sequences\n7 UPDATE - tables, views\n8 HIERARCHY - tables\n9 UNDER - tables\n\nAnd the priadmin is a bitmask to determine whether or not the Grantee\ncould grant the same privilege to another user. Since these are Object\nprivileges, 32 bits should be enough (and also 640K RAM ;-)).\n\nThe System privileges are privileges granted to a user or role (a.k.a\ngroup) which are not associated with any particular object. This is one\narea where I think PostgreSQL needs a lot of work and thought,\nparticularly with schemas coming down the road. 
Some example Oracle\nSystem privileges are:\n\nTypical User Privileges:\n-----------------------\n\nCREATE SESSION - Allows the user to connect \nCREATE SEQUENCE - Allows the user to create sequences in his schema\nCREATE SYNONYM - Allows the user to create private synonyms\nCREATE TABLE - Allows the user to create a table in his schema\nCREATE TRIGGER - Allows the user to create triggers on tables in his\nschema\nCREATE VIEW - Allows the user to create views in his schema\n\nTypical Power-User Privileges:\n-----------------------------\n\nALTER ANY INDEX - Allows user to alter an index in *any* schema\nALTER ANY PROCEDURE - Allows user to alter a procedure in *any* schema\nALTER ANY TABLE - Allows user to alter a table in *any* schema\n...\nCREATE ANY TABLE - Allows user to create a table in *any* schema\nCOMMENT ANY TABLE - Allows user to document any table in *any* schema\n...\n\nTypical DBA-Only Privileges:\n---------------------------\n\nALTER USER - Allows user to change password, quotas, etc. for *any* user\nCREATE USER - Allows user to create a new user\nDROP USER - Allows user to drop a new user\nGRANT ANY PRIVILEGE - Allows user to grant any privilege to any user\nANALYZE ANY - Allows user to analyze any table in *any* schema\n\nThere are, in fact, many, many more System Privileges that Oracle\ndefines. You may want someone to connect to a database and query one\ntable and that's it. Or you may want someone to have no other abilities\nexcept to document the database design via the great COMMENT ON command\n;-), etc. \n\nSo for System Privileges, I would have something like:\n\npg_sysprivs {\n\tprigrantee oid,\n\tpriprivilege oid,\n\tprigroup bool,\n\tpriadmin bool\n};\n\nSo each System privilege granted to a user (or group) would be its own\nrecord. The priprivilege would be the OID of one of the many System\nprivileges defined in the same way types are defined, if prigroup is\nfalse. 
If prigroup is true, however, then priprivilege is not a System\nprivilege, but a group id. And then PostgreSQL will have to examine the\nprivileges recursively for that group. Of course, you might not want to\nallow for the GRANTing of group privileges to other groups initially,\nwhich simplifies the implementation tremendously. But its a neat (if not\ncomplicated) Oracle-ism.\n\nUnfortunately, this means that the permission might require > 2 lookups.\nBut these lookups are only if the previous lookup failed:\n\nSELECT * FROM employees.foo;\n\n1. Am I a member of the employees schema? Yes -> Done\n2. Have I been GRANTed the Object Privilege of:\n SELECT on employees.foo? Yes -> Done\n3. Have I been GRANTed the System Privilege of:\n SELECT ANY TABLE? Yes -> Done\n\nSo the number of lookups does potentially increase, but only for those\nusers that have been granted access through greater and greater layers\nof authority. \n\nI just think that each new feature added to PostgreSQL opens up a very\nlarge can of worms. Schemas are such a feature and the security system\nshould be prepared for it.\n\nFWIW,\n\nMike Mascari\nmascarm@mascari.com\n\n\nPeter Eisentraut wrote:\n> \n> \n> It would make sense to split privileges on tables from privileges on\n> schemas/databases from privileges on, say, functions, etc. E.g.,\n> \n> pg_privtable -- like proposed\n> \n> pg_privschema (\n> priobj oid, prigrantor oid, prigrantee oid,\n> char pritarget, -- 't' = any table, 'v' = any view, ...\n> char priselect,\n> char priupdate,\n> /* etc */\n> )\n> \n> But this would mean that a check like \"can I select from this table\"\n> would possibly require lookups in two tables. Not sure how much of a\n> tradeoff that is, but the \"shoehorn factor\" would be lower.\n> \n> Comments on this?\n> \n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n",
"msg_date": "Thu, 19 Apr 2001 18:34:11 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: System catalog representation of access privileges"
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > pg_privilege (\n> > priobj oid,\t\t\t-- oid of table, column, function, etc.\n> > prigrantor oid,\t\t-- user who granted the privilege\n> > prigrantee oid,\t\t-- user who owns the privilege\n>\n> What about groups?\n\nEither integrated into prigrantee or another column prigroupgrantee. One\nof these would always be zero or null, that's why I'm not sure if this\nisn't a waste of space.\n\n> What about wildcards? We already allow\n> \"grant <priv> to PUBLIC (all)\", and it would be nice to be able to do\n> something like \"grant <on everything I own> to joeblow\"\n\nPublic would be prigrantee == 0. About <everything I own>, how is this\ndefined? If it is \"everything I own and will ever own\" then I suppose\npriobj == 0. Although I admit I have never seen this kind of privilege\nbefore. It's probably better to set up a group for that.\n\n> Alternatively, since you really only need two bits per privilege,\n> perhaps a pair of BIT (VARYING?) fields would be a more effective\n> approach. BIT VARYING would have the nice property that adding a new\n> privilege type doesn't force initdb.\n\nThis would be tricky to index, I think.\n\n> I don't really believe this indexing scheme is workable. Need to think\n> some more. Possibly the syscache mechanism will not do, and we need a\n> specially indexed privilege cache instead.\n\nMaybe just an index on (object, grantee) and walk through that with an\nindex scan. This is done in some other places as well (triggers, I\nrecall), but the performance is probably not too exciting.\n\nHowever, last I looked at the syscache I figured that it would be\nperfectly capable of handling non-unique indexes if there only was an API\nto retrieve those values. Storing and finding the entries didn't seem to\nbe the problem. Need to look there, probably.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 20 Apr 2001 17:31:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: System catalog representation of access privileges "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Alternatively, since you really only need two bits per privilege,\n>> perhaps a pair of BIT (VARYING?) fields would be a more effective\n>> approach. BIT VARYING would have the nice property that adding a new\n>> privilege type doesn't force initdb.\n\n> This would be tricky to index, I think.\n\nTrue, but I don't believe that making the privilege value part of the\nindex is useful.\n\n> Maybe just an index on (object, grantee) and walk through that with an\n> index scan. This is done in some other places as well (triggers, I\n> recall), but the performance is probably not too exciting.\n\nI agree, that'd be slower than we'd like. It needs to be cached somehow.\n\nThe major problem is that you'd need multiple index scans: after failing\nto find anything for (table, currentuser) you'd also need to try\n(table, 0) for PUBLIC and (table, G) for every group G that contains the\ncurrent user. Not to mention the scan to find out which groups those are.\n\nIt gets rapidly worse if you want to allow any wildcarding on the object\n--- for example, if a privilege record attached to a schema can allow\naccess to the tables therein, which I think should be possible. You'd\nhave to repeat the above for each possible priobject that might relate\nto the target object.\n\nI think this might be tolerable for getting the info in the first place,\nbut the final results really need to be cached. That's why I was\nwondering about a special \"privilege cache\".\n\n> However, last I looked at the syscache I figured that it would be\n> perfectly capable of handling non-unique indexes if there only was an API\n> to retrieve those values.\n\nYes, it's an API problem more than anything else. Invent away, if that\nseems like a needed component.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Apr 2001 12:35:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog representation of access privileges "
},
{
"msg_contents": "I have added this to the TODO as:\n\n\t* Allow better control over user privileges [privileges] \n\nand added the thread to TODO.detail.\n\n\n\n> Oldtimers might recall the last thread about enhancements of the access\n> privilege system. See\n> \n> http://www.postgresql.org/mhonarc/pgsql-hackers/2000-05/msg01220.html\n> \n> to catch up.\n> \n> It was more or less agreed that privilege descriptors should be split out\n> into a separate table for better flexibility and ease of processing. The\n> dispute was that the old proposal wanted to store only one privilege per\n> row. I have devised something more efficient:\n> \n> pg_privilege (\n> priobj oid,\t\t\t-- oid of table, column, function, etc.\n> prigrantor oid,\t\t-- user who granted the privilege\n> prigrantee oid,\t\t-- user who owns the privilege\n> \n> priselect char,\t\t-- specific privileges follow...\n> prihierarchy char,\n> priinsert char,\n> priupdate char,\n> pridelete char,\n> prireferences char,\n> priunder char,\n> pritrigger char,\n> prirule char\n> /* obvious extension mechanism... */\n> )\n> \n> The various \"char\" fields would be NULL for not granted, some character\n> for granted, and some other character for granted with grant option (a\n> poor man's enum, if you will). Votes on the particular characters are\n> being taken. ;-) Since NULLs are stored specially, sparse pg_privilege\n> rows wouldn't take extra space.\n> \n> \"Usage\" privileges on types and other non-table objects could probably be\n> lumped under \"priselect\" (purely for internal purposes).\n> \n> For access we define system caches on these indexes:\n> \n> index ( priobj, prigrantee, priselect )\n> index ( priobj, prigrantee, prihierarchy )\n> index ( priobj, prigrantee, priinsert )\n> index ( priobj, prigrantee, priupdate )\n> index ( priobj, prigrantee, pridelete )\n> \n> These are the privileges you usually need quickly during query processing,\n> the others are only needed during table creation. 
These indexes are not\n> unique (more than one grantor can grant the same privilege), but AFAICS\n> the syscache interface should work okay with this, since in normal\n> operation we don't care who granted the privilege, only whether you have\n> at least one.\n> \n> How does that look?\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 May 2001 20:19:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog representation of access privileges"
}
] |
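Mike's bitmask layout for a hypothetical `pg_objprivs` — one integer of granted privileges plus a parallel integer of grant-option ("admin") bits — can be sketched directly. Bit positions follow his numbered list; the `grant`/`has_privilege` helpers are illustrative, not actual catalog code.

```python
# Sketch of the proposed pg_objprivs bitmask representation: priprivileges
# holds granted privileges, priadmin holds the matching grant-option bits.

(ALTER, DELETE, EXECUTE, INDEX, INSERT,
 REFERENCES, SELECT, UPDATE, HIERARCHY, UNDER) = (1 << n for n in range(10))

def grant(privs, admin, priv_bit, with_grant_option=False):
    """Apply a GRANT: return the updated (priprivileges, priadmin) pair."""
    privs |= priv_bit
    if with_grant_option:
        admin |= priv_bit
    return privs, admin

def has_privilege(privs, priv_bit):
    """Check a privilege mask for a single bit."""
    return privs & priv_bit != 0

# GRANT SELECT ... WITH GRANT OPTION, then a plain GRANT UPDATE:
privs, admin = grant(0, 0, SELECT, with_grant_option=True)
privs, admin = grant(privs, admin, UPDATE)

print(has_privilege(privs, SELECT))  # True
print(has_privilege(privs, UPDATE))  # True
print(has_privilege(admin, UPDATE))  # False: no grant option on UPDATE
```

Two 32-bit masks keep the row fixed-width (addressing Tom's preference for NOT NULL, struct-mappable columns), though they cap the scheme at 32 object privileges unless widened.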
[
{
"msg_contents": "What did you do to the CVS server? It takes hours to update a single\nfile, half a day to run cvs diff. This has been like that for about 48\nhours.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 19 Apr 2001 18:03:11 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "CVS server ailing?"
},
{
"msg_contents": "\ntry now?\n\nOn Thu, 19 Apr 2001, Peter Eisentraut wrote:\n\n> What did you do to the CVS server? It takes hours to update a single\n> file, half a day to run cvs diff. This has been like that for about 48\n> hours.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 19 Apr 2001 15:46:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: CVS server ailing?"
}
] |
[
{
"msg_contents": "I figured out the bug that Alex Pilosov was reporting about constructs\nlike\n\tcreate table customers (...)\n\tcreate function cust_name(customers) ...\n\tselect cust_name(a) from customers a, addresses b ...\n\nThe problem is that whole-tuple parameters to functions are represented\nas pointers to TupleTableSlot objects. This works as long as the\nfunction call is performed right away, ie, a simple table scan with no\njoin. But in a join, the TupleTableSlot pointer gets put into an output\ntuple of a scan node, and by the time it is extracted and used again,\nthe scan node may have decided to recycle its per-tuple temporary\nstorage. Which is where the TupleTableSlot was living.\n\nI have fixed this for the moment by allocating the TupleTableSlot\nobjects and their subsidiary tuples in TransactionCommandContext, rather\nthan the per-tuple workspace context. In other words, a query involving\nwhole-tuple parameters will leak memory until end of query. This is not\nany worse than the behavior of prior versions (which also leaked memory\nfor such queries) but it's pretty annoying now that we've fixed most of\nthe other query-duration leaks.\n\nI believe a good fix for this issue would involve allowing whole-tuple\nvalues to become ordinary varlena datums, so that they can be stuffed\ninto larger tuples (and even stored on disk as columns of tables).\n\nHowever the TupleTableSlot representation will not do for this, since\nit involves several pointers. If we design a new representation then we\nwill break existing C-coded user functions that accept tuples according\nto the documented way of doing that (cf src/tutorial/funcs.c). Possibly\nthis is no big problem, since the only thing that such functions are\nvery likely to do with tuples is pass them to the GetAttributeByName or\nGetAttributeByNum functions, and we can fix those to interpret the\npointer correctly. 
But it's clearly not a change to make in a patch\nrelease.\n\nAnother problem is that we'd need to de-TOAST any out-of-line toasted\nvalue in such a tuple before we'd dare store it on disk. However, it'd\nbe annoying to do that if the whole-tuple value were not destined to\nend up on disk, but only to be passed to a function that might or might\nnot ever touch the toasted column. I'm not sure how this could be\nhandled efficiently. Thoughts anyone?\n\nAnyway, a better fix is clearly a TODO item for some future release,\nnot something we can do for 7.1.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 13:03:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Some notes on whole-tuple function parameters"
}
] |
[
{
"msg_contents": "Dear Hackers,\n\ni'm using 7.1 in a production environment, porformace is very good, you've \nmade a vesy good job.\n\nBut there are problems, sometimes backend \"failed to start\":\n\n(mandrake 7.2, mod_perl 1.24, apache 1.3.14, Apache::DBI)\n(deadlock_timeout=2000\nmax_connections=300)\n\nDBI->connect(dbname=mydb) failed: Backend startup failed\nat /home/httpd/cgi-bin/e/lib/DBstuff.pm line 27.\n\nOur application is transactional, with a high degree of concurrency.\n\nI've already a silly monitor in perl that connects every minute to postgres \nto see if connection is ok, but sometimes it gives the error above.\n\nI'm asking to all of you the CORRECT sequence of actions to do a monitor \nthat restarts postgresql in the most safe possible way.\nThis for having a solution that gives a better overall uptime.\n\n\nFor example i must do (??):\n\n0. kill apache so that connection will terminate.\n1.killall -TERM postmaster\n2. wait for secs, or for what (?)\n3. if postgres alive (ps -ef | grep postmaster...) than:\n killall -9 postmaster\n4.restart postmaster\n5. see if it'ok\n6.restart apache\n\netc.\n\n\nthanks in advance for your answers.\n\nvater mazzola, italy.\n\n\n\n\n\n\n\n\n\n\n\n\nvalter mazzola\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Thu, 19 Apr 2001 20:52:30 +0200",
"msg_from": "\"V. M.\" <txian@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Postgresql, HA ?, Monitoring."
},
{
"msg_contents": "\"V. M.\" <txian@hotmail.com> writes:\n> DBI->connect(dbname=mydb) failed: Backend startup failed\n\nHave you looked in the postmaster log to see *why* this is happening?\nSome sort of resource-limitation problem is my bet.\n\n> I'm asking to all of you the CORRECT sequence of actions to do a monitor \n> that restarts postgresql in the most safe possible way.\n> This for having a solution that gives a better overall uptime.\n\nRestarting the postmaster because a backend start failed is certainly\nnot the way to improve uptime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 18:01:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql, HA ?, Monitoring. "
}
] |
[
{
"msg_contents": "Request for comments:\n\nOverview\n--------\n\nThe planner is currently badly handicapped by inadequate statistical\ninformation. The stats that VACUUM ANALYZE currently computes are:\n\nper-table:\n\tnumber of disk pages\n\tnumber of tuples\n\nper-column:\n\tdispersion\n\tminimum and maximum values (if datatype has a suitable '<' operator)\n\tmost common value\n\tfraction of entries that are the most common value\n\tfraction of entries that are NULL\n\nNot only is this an impoverished dataset, but the dispersion and\nmost-common-value are calculated in a very unreliable way, and cannot\nbe trusted.\n\nEven though the stats are meager and untrustworthy, they are expensive\nto get: the per-column data is updated only by VACUUM ANALYZE, which\nis quite unpleasant to do regularly on large tables.\n\nIt is possible to get better statistics with less work. Here is a\nproposal for revising the stats mechanisms for 7.2.\n\nI believe that it's still a good idea for the stats to be updated by an\nexplicit housekeeping command. Although we could think of updating them\non-the-fly, that seems expensive and slow. Another objection is that\nplanning results will become difficult to reproduce if the underlying\nstatistics change constantly. But if we stick to the explicit-command\napproach then we need to make the command much faster than VACUUM ANALYZE.\nWe can address that in two ways:\n\n(1) The statistics-gathering process should be available as a standalone\ncommand, ANALYZE [ tablename ], not only as part of VACUUM. (This was\nalready discussed and agreed to for 7.1, but it never got done.) Note\nthat a pure ANALYZE command needs only a read lock on the target table,\nnot an exclusive lock as VACUUM needs, so it's much more friendly to\nconcurrent transactions.\n\n(2) Statistics should be computed on the basis of a random sample of the\ntarget table, rather than a complete scan. 
According to the literature\nI've looked at, sampling a few thousand tuples is sufficient to give good\nstatistics even for extremely large tables; so it should be possible to\nrun ANALYZE in a short amount of time regardless of the table size.\n\nIf we do both of these then I think that ANALYZE will be painless enough\nto do frequently, so there's no need to try to fold stats-updating into\nregular operations.\n\n\nMore extensive statistics\n-------------------------\n\nThe min, max, and most common value are not enough data, particularly\nnot for tables with heavily skewed data distributions. Instead,\npg_statistic should store three small arrays for each column:\n\n1. The K most common values and their frequencies, for some (adjustable)\nparameter K. This will be omitted if the column appears unique (no\nduplicate values found).\n\n2. The M boundary values of an equi-depth histogram, ie, the values that\ndivide the data distribution into equal-population sets. For example, if\nM is 3 this would consist of the min, median, and max values.\n\nK and M might both be about 10 for a typical column. In principle at\nleast, these numbers could be user-settable parameters, to allow trading\noff estimation accuracy against amount of space used in pg_statistics.\n\nA further refinement is that the histogram should describe the\ndistribution of the data *after excluding the given most-common values*\n(that is, it's a \"compressed histogram\"). This allows more accurate\nrepresentation of the data distribution when there are just a few\nvery-common values. For a column with not many distinct values (consider\na boolean column), the most-common-value list might describe the complete\ndataset, in which case the histogram collapses to nothing. 
(We'd still\nhave it store the min and max, just so that it's not necessary to scan the\nmost-common-value array to determine those values.)\n\nI am also inclined to remove the attdispersion parameter in favor of\nstoring an estimate of the number of distinct values in the column.\nWe can compute more accurate selectivities using the most-common-value\nfrequencies and the distinct-values estimate than we could get from\ndispersion.\n\nAnother useful idea is to try to estimate the correlation between physical\ntable order and logical order of any given column --- this would let us\naccount for the effect of clustering when estimating indexscan costs.\nI don't yet know the appropriate formula to use, but this seems quite\ndoable.\n\nFinally, it would be a good idea to add an estimate of average width\nof varlena fields to pg_statistic. This would allow the estimated\ntuple width computed by the planner to have something to do with reality,\nwhich in turn would improve the cost estimation for hash joins (which\nneed to estimate the size of the in-memory hash table).\n\n\nComputing the statistics\n------------------------\n\nThe best way to obtain these stats seems to be:\n\n1. Scan the target table and obtain a random sample of R tuples. R is\nchosen in advance based on target histogram size (M) --- in practice R\nwill be about 3000 or so. If the table contains fewer than that many\ntuples then the \"sample\" will be the whole table. The sample can be\nobtained in one pass using Vitter's algorithm or similar. Note that for\nexpected values of R we shouldn't have any trouble storing all the sampled\ntuples in memory.\n\n2. For each column in the table, extract the column value from each\nsampled tuple, and sort these values into order. A simple scan of the\nvalues then suffices to find the most common values --- we only need to\ncount adjacent duplicates and remember the K highest counts. 
After we\nhave those, simple arithmetic will let us find the positions that contain\nthe histogram boundary elements. Again, all this can be done in-memory\nand so should be pretty fast.\n\nThe sort step will require detoasting any toasted values. To avoid risk\nof memory overflow, we may exclude extremely wide toasted values from the\nsort --- this shouldn't affect the stats much, since such values are\nunlikely to represent duplicates. Such an exclusion will also prevent\nwide values from taking up lots of space in pg_statistic, if they happened\nto be chosen as histogram entries.\n\nIssue: for datatypes that have no '<' operator, we of course cannot\ncompute a histogram --- but if there is an '=' then the most-common-value\nand number-of-distinct-values stats still make sense. Is there a way to\nderive these without O(R^2) brute-force comparisons? We could still do\na scan of the R sample values using something like the existing\ncomparison algorithm (but using S > K work slots); this would cost about\nS*R rather than R^2 comparisons.\n\nA different approach that's been discussed on pghackers is to make use\nof btree indexes for columns that have such indexes: we could scan the\nindexes to visit all the column values in sorted order. I have rejected\nthat approach because (a) it doesn't help for columns without a suitable\nindex; (b) our indexes don't distinguish deleted and live tuples,\nwhich would skew the statistics --- in particular, we couldn't tell a\nfrequently-updated single tuple from a commonly repeated value; (c)\nscanning multiple indexes would likely require more total I/O than just\ngrabbing sample tuples from the main table --- especially if we have to\ndo that anyway to handle columns without indexes.\n\n\nUser-visible changes\n--------------------\n\nAside from implementing the stand-alone ANALYZE statement, it seems that\nit would be a good idea to allow user control of the target numbers of\nstatistical entries (K and M above). 
A simple approach would be a SET\nvariable or explicit parameter for ANALYZE. But I am inclined to think\nthat it'd be better to create a persistent per-column state for this,\nset by say\n\tALTER TABLE tab SET COLUMN col STATS COUNT n\n(better names welcome). The target count for each column could be stored\nin pg_attribute. Doing it this way would let the dbadmin establish higher\nor lower targets for especially irregular or simple columns, and then\nforget about the numbers --- he wouldn't have to tweak his periodic cron\nscript to apply the right parameters to the right tables/columns.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 18:37:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "RFC: planner statistics in 7.2"
},
{
"msg_contents": "> (1) The statistics-gathering process should be available as a standalone\n> command, ANALYZE [ tablename ], not only as part of VACUUM. (This was\n> already discussed and agreed to for 7.1, but it never got done.) Note\n> that a pure ANALYZE command needs only a read lock on the target table,\n> not an exclusive lock as VACUUM needs, so it's much more friendly to\n> concurrent transactions.\n\n7.1 already does the ANALYZE part of VACUUM ANALYZE with lighter\nlocking. I just never split out the command to be separate, partly\nbecause of fear of user confusion, and I ran out of time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Apr 2001 19:03:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2y"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> that a pure ANALYZE command needs only a read lock on the target table,\n>> not an exclusive lock as VACUUM needs, so it's much more friendly to\n>> concurrent transactions.\n\n> 7.1 already does the ANALYZE part of VACUUM ANALYZE with lighter\n> locking.\n\nRight, but you still have to run through the VACUUM part, which will\nhold down an exclusive lock for a considerable amount of time (if the\ntable is big).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 19:10:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: RFC: planner statistics in 7.2y "
},
{
"msg_contents": "At 18:37 19/04/01 -0400, Tom Lane wrote:\n>(2) Statistics should be computed on the basis of a random sample of the\n>target table, rather than a complete scan. According to the literature\n>I've looked at, sampling a few thousand tuples is sufficient to give good\n>statistics even for extremely large tables; so it should be possible to\n>run ANALYZE in a short amount of time regardless of the table size.\n\nThis sounds great; can the same be done for clustering. ie. pick a random\nsample of index nodes, look at the record pointers and so determine how\nwell clustered the table is?\n\n\n>A simple approach would be a SET\n>variable or explicit parameter for ANALYZE. But I am inclined to think\n>that it'd be better to create a persistent per-column state for this,\n>set by say\n>\tALTER TABLE tab SET COLUMN col STATS COUNT n\n\nSounds fine - user-selectability at the column level seems a good idea.\nWould there be any value in not making it part of a normal SQLxx statement,\nand adding an 'ALTER STATISTICS' command? eg. \n\n ALTER STATISTICS FOR tab[.column] COLLECT n\n ALTER STATISTICS FOR tab SAMPLE m\n\netc.\n\n\n\n\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 20 Apr 2001 10:44:05 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 18:37 19/04/01 -0400, Tom Lane wrote:\n>> (2) Statistics should be computed on the basis of a random sample of the\n>> target table, rather than a complete scan. According to the literature\n>> I've looked at, sampling a few thousand tuples is sufficient to give good\n>> statistics even for extremely large tables; so it should be possible to\n>> run ANALYZE in a short amount of time regardless of the table size.\n\n> This sounds great; can the same be done for clustering. ie. pick a random\n> sample of index nodes, look at the record pointers and so determine how\n> well clustered the table is?\n\nMy intention was to use the same tuples sampled for the data histograms\nto estimate how well sorted the data is. However it's not immediately\nclear that that'll give a trustworthy estimate; I'm still studying it ...\n\n>> ALTER TABLE tab SET COLUMN col STATS COUNT n\n\n> Sounds fine - user-selectability at the column level seems a good idea.\n> Would there be any value in not making it part of a normal SQLxx statement,\n> and adding an 'ALTER STATISTICS' command? eg. \n\n> ALTER STATISTICS FOR tab[.column] COLLECT n\n> ALTER STATISTICS FOR tab SAMPLE m\n\nIs that more standard than the other syntax?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 20:48:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "At 20:48 19/04/01 -0400, Tom Lane wrote:\n>\n>> This sounds great; can the same be done for clustering. ie. pick a random\n>> sample of index nodes, look at the record pointers and so determine how\n>> well clustered the table is?\n>\n>My intention was to use the same tuples sampled for the data histograms\n>to estimate how well sorted the data is. However it's not immediately\n>clear that that'll give a trustworthy estimate; I'm still studying it ...\n\nI'm not sure you want to know how well sorted it is in general, but you do\nwant to know the expected cost in IOs of reading all records from a given\nindex node, so you can more accurately estimate indexscan costs. AFAICS it\ndoes not require that the entire table be sorted. So checking the pointers\non the index nodes gives an idea of clustering.\n\n\n>\n>> ALTER STATISTICS FOR tab[.column] COLLECT n\n>> ALTER STATISTICS FOR tab SAMPLE m\n>\n>Is that more standard than the other syntax?\n>\n\nNot at all. It just avoids messing with one of the standard statements.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 20 Apr 2001 11:02:54 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> I'm not sure you want to know how well sorted it is in general, but you do\n> want to know the expected cost in IOs of reading all records from a given\n> index node, so you can more accurately estimate indexscan costs. AFAICS it\n> does not require that the entire table be sorted. So checking the pointers\n> on the index nodes gives an idea of clustering.\n\nBut you don't really need to look at the index (if it even exists\nat the time you do the ANALYZE). The extent to which the data is\nordered in the table is a property of the table, not the index.\nI'd prefer to get the stat just from the table and not have to do\nadditional I/O to examine the indexes.\n\nBut, as I said, I'm still reading the literature about estimation\nmethods for indexscans. It may turn out that a statistic calculated\nthis way isn't the right thing to use, or isn't trustworthy when taken\nover a small sample.\n\n>> Is that more standard than the other syntax?\n\n> Not at all. It just avoids messing with one of the standard statements.\n\nOh, so you're deliberately not being standard. I see. Probably a\nreasonable point.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 21:14:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "At 21:14 19/04/01 -0400, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> I'm not sure you want to know how well sorted it is in general, but you do\n>> want to know the expected cost in IOs of reading all records from a given\n>> index node, so you can more accurately estimate indexscan costs. AFAICS it\n>> does not require that the entire table be sorted. So checking the pointers\n>> on the index nodes gives an idea of clustering.\n>\n>But you don't really need to look at the index (if it even exists\n>at the time you do the ANALYZE). The extent to which the data is\n>ordered in the table is a property of the table, not the index.\n\nBut the value (and cost) of using a specific index in an indexscan depends\non that index (or am I missing something?). \n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 20 Apr 2001 11:57:41 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 21:14 19/04/01 -0400, Tom Lane wrote:\n>> But you don't really need to look at the index (if it even exists\n>> at the time you do the ANALYZE). The extent to which the data is\n>> ordered in the table is a property of the table, not the index.\n\n> But the value (and cost) of using a specific index in an indexscan depends\n> on that index (or am I missing something?). \n\nAll that we're discussing here is one specific parameter in the cost\nestimation for an indexscan, viz, the extent to which the table ordering\nagrees with the index ordering. As long as they both agree about the\nordering operator, this number doesn't depend on the index --- the index\nis by definition in perfect agreement with the ordering operator. There\nare other parameters in the total cost estimate that will depend on the\nindex, but this one doesn't AFAICS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 22:27:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "Tom Lane writes:\n\n> 2. The M boundary values of an equi-depth histogram, ie, the values that\n> divide the data distribution into equal-population sets. For example, if\n> M is 3 this would consist of the min, median, and max values.\n\nWhy not model that data in a normal distribution (or some other such\nmodel)? This should give you fairly good estimates for <, > and between\ntype queries. That way the user wouldn't have to tweak M. (I couldn't\neven imagine a way to explain to the user how to pick good M's.)\n\n> Issue: for datatypes that have no '<' operator, we of course cannot\n> compute a histogram --- but if there is an '=' then the most-common-value\n> and number-of-distinct-values stats still make sense.\n\nI think the statistics calculation should be data type (or operator, or\nopclass) specific, like the selectivity estimation functions. For\ngeometric data types you need completely different kinds of statistics,\nand this would allow users to plug in different methods. The\npg_statistics table could just have an array of floats where each data\ntype can store statistics as it chooses and the selectivity estimation\nroutines can interpret the values in a different way per data type. That\nway you can also make some common sense optimization without hacks, e.g.,\nfor boolean columns you possibly only need to calculate 1 value (number of\ntrues).\n\nOf course, how to calculate up to N different sets of statistics for N\ndifferent columns that require up to N different numbers of passes over\nthe table is left as a challenge. ;-)\n\nThis brings up a question I have: Are statistics calculated for every\ncolumn? Should they be? Is there a possibility to speed up ANALYZE by\ncontrolling this?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 20 Apr 2001 17:15:37 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> 2. The M boundary values of an equi-depth histogram, ie, the values that\n>> divide the data distribution into equal-population sets. For example, if\n>> M is 3 this would consist of the min, median, and max values.\n\n> Why not model that data in a normal distribution (or some other such\n> model)?\n\nIf we knew an appropriate distribution model, we could do that. There\nis no distribution model that is appropriate for everything... certainly\nthe normal distribution is not. A compressed histogram is more flexible\nthan any other simple model I know of.\n\n> (I couldn't even imagine a way to explain to the user how to pick good\n> M's.)\n\nI was just planning to say \"if you get bad estimates on a column with a\nhighly irregular distribution, try increasing the default M\".\nEventually we might have enough experience with it to offer more\ndetailed advice.\n\n>> Issue: for datatypes that have no '<' operator, we of course cannot\n>> compute a histogram --- but if there is an '=' then the most-common-value\n>> and number-of-distinct-values stats still make sense.\n\n> I think the statistics calculation should be data type (or operator, or\n> opclass) specific, like the selectivity estimation functions.\n\nYeah, that's probably what we should do, but so far no one has even\nsuggested what stats we might want for geometric datatypes. 
Tell you\nwhat: let's throw in a field that indicates \"kind of statistics\ngathered\", and have the meaning of the array fields depend on that.\nThe proposal I gave describes just the stats to gather for scalar types.\n\n> pg_statistics table could just have an array of floats where each data\n> type can store statistics as it chooses and the selectivity estimation\n> routines can interpret the values in a different way per data type.\n\nMan does not live by floats alone --- in particular we need to be able\nto store specific values of the target datatype. Probably what we want\nis three or four instances of the pattern\n\nstats_kind\tint,\t\t-- identifies type of info\nstats_numbers\tfloat[],\t-- numerical info such as occurrence fraction\nstats_values\ttext[],\t\t-- array of values of column datatype\n\t\t\t\t-- (in external representation)\n\nDepending on the value of stats_kind, either stats_numbers or\nstats_values might be unused, in which case it could be set to NULL so\nit doesn't waste space. For scalar datatypes, an instance of this\npattern could hold the most common values and their frequencies, and\nanother one could hold the histogram boundary points (with stats_numbers\nunused).\n\n> Of course, how to calculate up to N different sets of statistics for N\n> different columns that require up to N different numbers of passes over\n> the table is left as a challenge. ;-)\n\nAs long as it's OK to gather the stats from a random sample of the\ntable, I think the structure I proposed would work fine. Remember the\nsecond part of ANALYZE is looping over each column separately anyway ---\nso it could easily apply a datatype-dependent algorithm for computing\nstatistics.\n\n> This brings up a question I have: Are statistics calculated for every\n> column? Should they be? 
Is there a possibility to speed up ANALYZE by\n> controlling this?\n\nPresently, frequency stats will be calculated for every column that has\nan '=' operator, and min/max stats will be calculated if there's a '<'\noperator for the type. I was planning to retain that convention.\nHowever if we are going to allow adjustable M, we could also add the\nstipulation that the DBA could set M = 0 for columns that he doesn't\nwant stats gathered for. (This would make sense for a column that is\nnever used in a WHERE clause.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Apr 2001 11:46:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "> A different approach that's been discussed on pghackers is to make use\n> of btree indexes for columns that have such indexes: we could scan the\n> indexes to visit all the column values in sorted order. I have rejected\n> that approach because (a) it doesn't help for columns without a suitable\n> index; (b) our indexes don't distinguish deleted and live tuples,\n> which would skew the statistics --- in particular, we couldn't tell a\n> frequently-updated single tuple from a commonly repeated value; (c)\n> scanning multiple indexes would likely require more total I/O than just\n> grabbing sample tuples from the main table --- especially if we have to\n> do that anyway to handle columns without indexes.\n\nRemember one idea is for index scans to automatically update the expired\nflag in the index bitfields when they check the heap tuple.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Apr 2001 14:40:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Remember one idea is for index scans to automatically update the expired\n> flag in the index bitfields when they check the heap tuple.\n\nThere is no such flag, and I'm not planning to add one before I fix\nstatistics ;-)\n\nThere are a number of things that'd have to be thought about before\nsuch a thing could happen. For one thing, trying to obtain write locks\ninstead of read locks during a btree scan would blow us out of the water\nfor supporting concurrent btree operations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Apr 2001 14:51:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Remember one idea is for index scans to automatically update the expired\n> > flag in the index bitfields when they check the heap tuple.\n> \n> There is no such flag, and I'm not planning to add one before I fix\n> statistics ;-)\n> \n> There are a number of things that'd have to be thought about before\n> such a thing could happen. For one thing, trying to obtain write locks\n> instead of read locks during a btree scan would blow us out of the water\n> for supporting concurrent btree operations.\n\nYes, it was just an idea. Someone may add it for 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Apr 2001 15:00:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2"
},
{
"msg_contents": "At 22:27 19/04/01 -0400, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> At 21:14 19/04/01 -0400, Tom Lane wrote:\n>>> But you don't really need to look at the index (if it even exists\n>>> at the time you do the ANALYZE). The extent to which the data is\n>>> ordered in the table is a property of the table, not the index.\n>\n>> But the value (and cost) of using a specific index in an indexscan depends\n>> on that index (or am I missing something?). \n>\n>All that we're discussing here is one specific parameter in the cost\n>estimation for an indexscan, viz, the extent to which the table ordering\n>agrees with the index ordering. \n\nThis does not necessarily follow. A table ordering need not follow the sort\norder of an index for the index to have a low indexscan cost. All that is\nrequired is that most of the rows referred to by an index node must reside\nin a page or pages that will be read by one IO. eg. a table that has a\nsequence based ID, with, say 20% of rows updated, will work nicely with an\nindexscan on the ID, even though it has never been clustered. \n\nWhat I'm suggesting is that if you look at a random sample of index nodes,\nyou should be able to get a statistically valid estimate of the 'clumping'\nof the data pointed to by the index. \n\nAm I still missing the point?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 23 Apr 2001 22:31:47 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n>> All that we're discussing here is one specific parameter in the cost\n>> estimation for an indexscan, viz, the extent to which the table ordering\n>> agrees with the index ordering. \n\n> This does not necessarily follow. A table ordering need not follow the sort\n> order of an index for the index to have a low indexscan cost. All that is\n> required is that most of the rows referred to by an index node must reside\n> in a page or pages that will be read by one IO. eg. a table that has a\n> sequence based ID, with, say 20% of rows updated, will work nicely with an\n> indexscan on the ID, even though it has never been clustered. \n\nRight, what matters is the extent of correlation between table ordering\nand index ordering, not how it got to be that way.\n\n> What I'm suggesting is that if you look at a random sample of index nodes,\n> you should be able to get a statistically valid estimate of the 'clumping'\n> of the data pointed to by the index. \n\nAnd I'm saying that you don't actually have to look at the index in\norder to compute the very same estimate. The only property of the index\nthat matters is its sort order; if you assume you know the right sort\norder (and in practice there's usually only one interesting possibility\nfor a column) then you can compute the correlation just by looking at\nthe table.\n\nAndreas correctly points out that this approach doesn't extend very well\nto multi-column or functional indexes, but I'm willing to punt on those\nfor the time being ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Apr 2001 10:10:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Philip Warner <pjw@rhyme.com.au> writes:\n> \n>> What I'm suggesting is that if you look at a random sample of index nodes,\n>> you should be able to get a statistically valid estimate of the 'clumping'\n>> of the data pointed to by the index. \n> \n> \n> And I'm saying that you don't actually have to look at the index in\n> order to compute the very same estimate. The only property of the index\n> that matters is its sort order; if you assume you know the right sort\n> order (and in practice there's usually only one interesting possibility\n> for a column) then you can compute the correlation just by looking at\n> the table.\n\nThis is more true for unique indexes than for non-unique ones unless \nour non-unique indexes are smart enough to insert equal index nodes in \ntable order.\n\n> Andreas correctly points out that this approach doesn't extend very well\n> to multi-column or functional indexes, but I'm willing to punt on those\n> for the time being ...\n\n----------\nHannu\n\n",
"msg_date": "Mon, 23 Apr 2001 17:32:44 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2"
},
{
"msg_contents": "At 10:10 23/04/01 -0400, Tom Lane wrote:\n>>> All that we're discussing here is one specific parameter in the cost\n>>> estimation for an indexscan, viz, the extent to which the table ordering\n>>> agrees with the index ordering. \n>\n>> This does not necessarily follow. A table ordering need not follow the sort\n>> order of an index for the index to have a low indexscan cost. All that is\n>> required is that most of the rows referred to by an index node must reside\n>> in a page or pages that will be read by one IO. eg. a table that has a\n>> sequence based ID, with, say 20% of rows updated, will work nicely with an\n>> indexscan on the ID, even though it has never been clustered. \n>\n>Right, what matters is the extent of correlation between table ordering\n>and index ordering, not how it got to be that way.\n\nNo; *local* ordering needs to *roughly* match. Global ordering and\nrecord-by-record ordering don't matter.\n\nFor example, for a table with an ID field, the rows may be stored as (where\n--- indicates a mythical page)\n\n-----\n5\n9\n6\n7\n-----\n1\n10\n2\n3\n-----\n4\n8\n11\n12\n-----\n\nA sorted index may have node pointers to (1,2,3), (4,5,6), (7,8,9) and\n(10,11,12). The first node would take 1 IO, then each of the others would\ntake 2. This would give a much more reasonable estimate for the indexscan\ncosts (assuming a random sample is adequate).\n\n\n>> What I'm suggesting is that if you look at a random sample of index nodes,\n>> you should be able to get a statistically valid estimate of the 'clumping'\n>> of the data pointed to by the index. \n>\n>And I'm saying that you don't actually have to look at the index in\n>order to compute the very same estimate. \n\nNo. Not given the above.\n\n\n>The only property of the index\n>that matters is its sort order; if you assume you know the right sort\n>order (and in practice there's usually only one interesting possibility\n>for a column) then you can compute the correlation just by looking at\n>the table.\n\nThis is true, but only if you are strictly interested in sort order, which\nI don't think we are.\n\n\n>Andreas correctly points out that this approach doesn't extend very well\n>to multi-column or functional indexes, but I'm willing to punt on those\n>for the time being ...\n\nMy approach should work with arbitrary indexes.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 24 Apr 2001 01:56:47 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RFC: planner statistics in 7.2 "
}
] |
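The IO-counting idea Philip Warner describes above (walk a sorted index in node-sized groups and count how many distinct heap pages each group touches) can be illustrated with a short Python sketch. The function name, the fixed node grouping, and the page layout are illustrative assumptions only; this is not how PostgreSQL represents indexes or estimates costs.

```python
# Sketch of the "clumping" estimate from the thread: sample index entries
# in sorted key order, group them into nodes, and charge one IO for each
# distinct heap page a node references. Names are hypothetical.

def estimate_index_scan_ios(heap_pages, node_size):
    """heap_pages: list of pages, each a list of row IDs in physical order.
    Returns the total page reads needed to visit all rows in sorted ID
    order, taking node_size keys at a time."""
    # Map each row ID to the heap page that holds it.
    page_of = {}
    for page_no, page in enumerate(heap_pages):
        for row_id in page:
            page_of[row_id] = page_no

    sorted_ids = sorted(page_of)
    total_ios = 0
    for i in range(0, len(sorted_ids), node_size):
        node = sorted_ids[i:i + node_size]
        # One IO per distinct page referenced by this node.
        total_ios += len({page_of[row_id] for row_id in node})
    return total_ios

# The 12-row example from the thread: three pages of four rows each.
pages = [[5, 9, 6, 7], [1, 10, 2, 3], [4, 8, 11, 12]]
print(estimate_index_scan_ios(pages, 3))  # node (1,2,3) costs 1 IO, the rest 2 each
```

With Philip's example this yields 7 reads rather than the 12 a row-at-a-time pessimist would assume, which is the "clumping" effect he argues a random sample of index nodes could measure.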
[
{
"msg_contents": "Hi,\nI am a newbie for developing with postgresql. I have included the header file \nby : #include <postgresql/libpq++.h> (I am using 7.1) When I compile it i get \nthe following errors:\nIn pgconnection.h\nline 41 postgres_fe.h: No such file or directory\nline 42 libpq-fe.h: No such file or directory\n\nThe files are located in the directory above where pgconnection.h is located\n\nAm I doing anything wrong with the way i am including it?\n\nGreg\n",
"msg_date": "Fri, 20 Apr 2001 11:09:36 +1000",
"msg_from": "Greg Hulands <ghulands@bigpond.net.au>",
"msg_from_op": true,
"msg_subject": "Including libpq++.h"
},
{
"msg_contents": "Greg Hulands <ghulands@bigpond.net.au> writes:\n> I am a newbie for developing with postgresql. I have included the header file\n> by : #include <postgresql/libpq++.h> (I am using 7.1) When I compile it i get\n> the following errors:\n> In pgconnection.h\n> line 41 postgres_fe.h: No such file or directory\n> line 42 libpq-fe.h: No such file or directory\n\n> The files are located in the directory above where pgconnection.h is located\n\nYou need to make that directory part of your -I search path --- not the\none above it, which is what you evidently did. Your initial include\nwill simplify to #include <libpq++.h>.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Apr 2001 21:19:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Including libpq++.h "
}
] |
[
{
"msg_contents": "\n> But you don't really need to look at the index (if it even exists\n> at the time you do the ANALYZE). The extent to which the data is\n> ordered in the table is a property of the table, not the index.\n\nThink compound, ascending, descending and functional index.\nThe (let's call it) cluster statistic for estimating indexscan cost can only \nbe deduced from the index itself (for all but the simplest one column btree). \n\nAndreas\n",
"msg_date": "Fri, 20 Apr 2001 11:58:23 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: RFC: planner statistics in 7.2 "
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> But you don't really need to look at the index (if it even exists\n>> at the time you do the ANALYZE). The extent to which the data is\n>> ordered in the table is a property of the table, not the index.\n\n> Think compound, ascending, descending and functional index.\n> The (let's call it) cluster statistic for estimating indexscan cost can only \n> be deduced from the index itself (for all but the simplest one column btree).\n\nIf you want to write code that handles those cases, go right ahead ;-).\nI think it's sufficient to look at the first column of a multicolumn\nindex for cluster-order estimation --- remember all these numbers are\npretty crude anyway. We have no such thing as a \"descending index\";\nand I'm not going to worry about clustering estimation for functional\nindexes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Apr 2001 10:17:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: RFC: planner statistics in 7.2 "
}
] |
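Tom Lane's table-only alternative in the exchange above amounts to measuring the correlation between a row's physical position and the position it would have in the index's sort order, using just the (first) key column. A rough Python sketch, assuming distinct column values; the function name is hypothetical and this is not planner code:

```python
# Pearson correlation between physical row order and sorted (index) order,
# computed from the table alone. Assumes values are distinct.

def order_correlation(values):
    """values: the indexed column, listed in physical (heap) order."""
    n = len(values)
    rank = {v: r for r, v in enumerate(sorted(values))}
    xs = list(range(n))               # physical positions
    ys = [rank[v] for v in values]    # positions in sorted order
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

print(order_correlation([10, 20, 30, 40]))  # close to 1.0: table matches index order
print(order_correlation([40, 30, 20, 10]))  # close to -1.0: exactly reversed
```

For a multicolumn index this sketch would only look at the first key column, which matches Tom's suggestion that a crude first-column estimate is sufficient; functional and descending orderings are exactly the cases Andreas notes it cannot capture.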
[
{
"msg_contents": "Philip Warner wrote:\n> At 08:42 19/04/01 -0500, Jan Wieck wrote:\n> [...]\n> > and the required\n> > feature to correctly restore the tgconstrrelid is already in\n> > the backend, so pg_dump should make use of it\n>\n> No problem there - just tell me how...\n\n Add a \"FROM <opposite-relname>\" after the \"ON <relname>\" to\n the CREATE CONSTRAINT TRIGGER statements. That's it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 20 Apr 2001 11:29:10 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a dump/restore"
},
{
"msg_contents": "On Sat, 21 Apr 2001, Philip Warner wrote:\n\n> At 11:29 20/04/01 -0500, Jan Wieck wrote:\n> >Philip Warner wrote:\n> >> At 08:42 19/04/01 -0500, Jan Wieck wrote:\n> >> > and the required\n> >> > feature to correctly restore the tgconstrrelid is already in\n> >> > the backend, so pg_dump should make use of it\n> >>\n> >> No problem there - just tell me how...\n> >\n> > Add a \"FROM <opposite-relname>\" after the \"ON <relname>\" to\n> > the CREATE CONSTRAINT TRIGGER statements. That's it.\n> >\n> \n> I'll make the change ASAP.\n\nWoo-hoo! Thanks.\n\nI posted a plpgsql script yesterday that tries to restore the name if it's\nalready been lost to a dump/restore cycle.\n\nIt would be a more robust solution if, instead of relying on pgconstrname,\nI could get into the trigger arguments. However, there does not seem to be\nany way for me to do this from plpgsql, as the functions for manipulating\nbytea fields aren't very useful for this, and I can't coerce bytea into\ntext or anything like that.\n\nCan anyone offer help on this? If I can get into the real args, I'll fix\nup the script so that it can be run once by the people w/o tgconstrrelid,\nand then, once Philip's done his work, we'll never lose it again! :-)\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Fri, 20 Apr 2001 13:04:15 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a dump/restore"
},
{
"msg_contents": "At 11:29 20/04/01 -0500, Jan Wieck wrote:\n>Philip Warner wrote:\n>> At 08:42 19/04/01 -0500, Jan Wieck wrote:\n>> > and the required\n>> > feature to correctly restore the tgconstrrelid is already in\n>> > the backend, so pg_dump should make use of it\n>>\n>> No problem there - just tell me how...\n>\n> Add a \"FROM <opposite-relname>\" after the \"ON <relname>\" to\n> the CREATE CONSTRAINT TRIGGER statements. That's it.\n>\n\nI'll make the change ASAP.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 21 Apr 2001 10:29:33 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a \n dump/restore"
},
{
"msg_contents": ">>\n>> Add a \"FROM <opposite-relname>\" after the \"ON <relname>\" to\n>> the CREATE CONSTRAINT TRIGGER statements. That's it.\n>>\n>\n>I'll make the change ASAP.\n>\n\nI'm about to do this - does anyone object to me adding the 7.0 backward\ncompatibility changes at the same time?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 24 Apr 2001 16:26:13 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "pg_dump & 7.0"
},
{
"msg_contents": ">\n>I'll make the change ASAP.\n>\n\nNow in CVS along with PG 7.0 compat. code.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 25 Apr 2001 22:28:50 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: [BUG?] tgconstrrelid doesn't survive a \n dump/restore"
}
] |
[
{
"msg_contents": "I've posted the 7.1 hardcopy (postscript) docs at\n\n ftp://ftp.postgresql.org/pub/doc/7.1/*.ps.gz\n\nand\n\n http://www.postgresql.org/users-lounge/docs\n\nI've also generated standalone html versions of the docs in case someone\nwants that format. All are available at the http reference above.\n\nI was not able to get into the site to test the ftp connection, but it\n*should* work. Let me know if it does not. Also, if there is interest in\nan A4 layout of the docs, let me know...\n\nafaik things should be available on mirrors by tomorrow.\n\n - Thomas\n\nCan someone generate PDF versions of these? Sorry I can't remember all\nthose who volunteered, so perhaps we should coordinate on the -hackers\nlist so folks don't duplicate work. Thanks!\n",
"msg_date": "Fri, 20 Apr 2001 16:45:13 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Hardcopy docs available"
},
{
"msg_contents": "> Can someone generate PDF versions of these? Sorry I can't remember all\n> those who volunteered, so perhaps we should coordinate on the -hackers\n> list so folks don't duplicate work. Thanks!\n\nIf you have Postscript, does ps2pdf do anything useful for you? The\nissue is that the site that generated the Postscript should be the one\nwho generates the PDF because the font metrics use to generate the\nPostscript should be the same as the fonts embedded in the PDF file.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Apr 2001 13:16:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hardcopy docs available"
},
{
"msg_contents": "There may be a better way to create PDF files if you have TeX. See\nhttp://www.tug.org/applications/jadetex/\n\nI haven't used Jade, so I don't know, but the PDF files that are output by\npdftex and pdflatex are smaller than those created by ps2pdf and easier to\nread on your monitor (I think because they use the built-in PDF fonts where\nthey can.)\n\nKen Hirsch\nAll your database are belong to us\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: <lockhart@fourpalms.org>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>;\n<pgsql-announce@postgresql.org>; <pgsql-general@postgresql.org>\nSent: Friday, April 20, 2001 1:16 PM\nSubject: Re: [HACKERS] Hardcopy docs available\n\n\n> > Can someone generate PDF versions of these? Sorry I can't remember all\n> > those who volunteered, so perhaps we should coordinate on the -hackers\n> > list so folks don't duplicate work. Thanks!\n>\n> If you have Postscript, does ps2pdf do anything useful for you? The\n> issue is that the site that generated the Postscript should be the one\n> who generates the PDF because the font metrics use to generate the\n> Postscript should be the same as the fonts embedded in the PDF file.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Fri, 20 Apr 2001 14:02:41 -0400",
"msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hardcopy docs available"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> There may be a better way to create PDF files if you have TeX. See\n> http://www.tug.org/applications/jadetex/\n> \n> I haven't used Jade, so I don't know, but the PDF files that are output by\n> pdftex and pdflatex are smaller than those created by ps2pdf and easier to\n> read on your monitor (I think because they use the built-in PDF fonts where\n> they can.)\n\nThere is nothing inherent with TeX that makes embedded fonts more or\nless likely. You will need Ghostscript >= 6.0 to embed any nonstandard\nfonts.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Apr 2001 14:52:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hardcopy docs available"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> I've posted the 7.1 hardcopy (postscript) docs at\n\nThis list is getting out of hand:\n\n Read the Administrator's Guide online\n Download the Postscript version of the Administrator's Guide\n Download the HTML version of the Administrator's Guide\n\n Read the Developer's Guide online\n Download the Postscript version of the Developer's Guide\n Download the HTML version of the Developer's Guide\n\n Read the Integrated Docs online\n Download the HTML version of the Integrated Docs\n\n Read the Programmer's Guide online\n Download the Postscript version of the Programmer's Guide\n Download the HTML version of the Programmer's Guide\n\n Read the Reference Manual online\n Download the Postscript version of the Reference Manual\n Download the HTML version of the Reference Manual\n\n Read the Tutorial online\n Download the Postscript version of the Tutorial\n Download the HTML version of the Tutorial\n\n Read the User's Guide online\n Download the Postscript version of the User's Guide\n Download the HTML version of the User's Guide\n\nThere should only be one link \"Read Documentation Online\". When you click\non that you get all the tables of content and you can make a much better\ndecision which book you actually want to read.\n\nIn any case, this list could be better organized visually. Maybe\nsomething like\n\n_Read Documentation Online_\n\n_Download the Documentation_\n\n* HTML\n\n Complete documentation set (tar.gz, 648k)\n Complete documentation on a single page (xxxk)\n\n Administrator's Guide (tar.gz, xxxk)\n Administrator's Guide on a single page (xxxk)\n\n Developers' Guide (tar.gz, xxxk)\n etc.\n\n* Postscript\n\n Administrator's Guide [ A4 ] [ US Letter ]\n Developer's Guide [ A4 ] [ US Letter ]\n etc.\n\n* PDF\n\n ...\n\n> I've also generated standalone html versions of the docs in case someone\n> wants that format. All are available at the http reference above.\n\nShould we integrate these into the build system?\n\n> Can someone generate PDF versions of these?\n\nps2pdf\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 20 Apr 2001 21:14:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hardcopy docs available"
},
{
"msg_contents": "Okay, you're right. I just updated my Ghostscript to 7.00 (just out) and it\nproduced very nice PDFs. I can upload them somewhere if you give me an FTP\naddress.\n\nKen Hirsch\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Ken Hirsch\" <kenhirsch@myself.com>\nCc: <lockhart@fourpalms.org>; \"Hackers List\" <pgsql-hackers@postgresql.org>;\n<pgsql-announce@postgresql.org>; <pgsql-general@postgresql.org>\nSent: Friday, April 20, 2001 2:52 PM\nSubject: Re: [HACKERS] Hardcopy docs available\n\n\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > There may be a better way to create PDF files if you have TeX. See\n> > http://www.tug.org/applications/jadetex/\n> >\n> > I haven't used Jade, so I don't know, but the PDF files that are output\nby\n> > pdftex and pdflatex are smaller than those created by ps2pdf and easier\nto\n> > read on your monitor (I think because they use the built-in PDF fonts\nwhere\n> > they can.)\n>\n> There is nothing inherent with TeX that makes embedded fonts more or\n> less likely. You will need Ghostscript >= 6.0 to embed any nonstandard\n> fonts.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Fri, 20 Apr 2001 19:33:52 -0400",
"msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>",
"msg_from_op": false,
"msg_subject": "Re: Hardcopy docs available"
},
{
"msg_contents": "> I've posted the 7.1 hardcopy (postscript) docs at\n> \n> ftp://ftp.postgresql.org/pub/doc/7.1/*.ps.gz\n> \n> and\n> \n> http://www.postgresql.org/users-lounge/docs\n> \n> I've also generated standalone html versions of the docs in case someone\n> wants that format. All are available at the http reference above.\n\nGreat work!\n\n> I was not able to get into the site to test the ftp connection, but it\n> *should* work. Let me know if it does not. Also, if there is interest in\n> an A4 layout of the docs, let me know...\n\nYes, please. We need A4 layout...\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 21 Apr 2001 10:36:49 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hardcopy docs available"
},
{
"msg_contents": "> Okay, you're right. I just updated my Ghostscript to 7.00 (just out) and it\n> produced very nice PDFs. I can upload them somewhere if you give me an FTP\n> address.\n\nJust mail them to me and I'll post them for you. ps2pdf seems to do a\nnice (simple) job from the 5.50 version I'm running, so presumably 7.00\nwill do even better.\n\n - Thomas\n",
"msg_date": "Sat, 21 Apr 2001 01:43:24 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: Hardcopy docs available"
},
{
"msg_contents": "> ... if there is interest in an A4 layout of the docs, let me know...\n\nI've gotten several requests for the A4 format, and have completed four\nof the six docs in that format. Thanks for the feedback. They should be\navailable in the next couple of days...\n\n - Thomas\n",
"msg_date": "Sun, 22 Apr 2001 16:04:36 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: Hardcopy docs available"
},
{
"msg_contents": "> > ... if there is interest in an A4 layout of the docs, let me know...\n> I've gotten several requests for the A4 format, and have completed four\n> of the six docs in that format. Thanks for the feedback. They should be\n> available in the next couple of days...\n\nOK, A4 docs are now posted on the web site and the ftp site. Also, I've\nput copies of the html tarballs on the ftp site, so there should now be\ntarballs, two kinds of postscript, and PDFs available there.\n\nIf someone wants to run the A4 docs through a PDF converter, send 'em to\nme and I'll post them too.\n\n - Thomas\n",
"msg_date": "Tue, 24 Apr 2001 00:05:17 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: Hardcopy docs available"
},
{
"msg_contents": "On Tue, 24 Apr 2001, Thomas Lockhart wrote:\n\n> > > ... if there is interest in an A4 layout of the docs, let me know...\n> > I've gotten several requests for the A4 format, and have completed four\n> > of the six docs in that format. Thanks for the feedback. They should be\n> > available in the next couple of days...\n>\n> OK, A4 docs are now posted on the web site and the ftp site. Also, I've\n> put copies of the html tarballs on the ftp site, so there should now be\n> tarballs, two kinds of postscript, and PDFs available there.\n>\n> If someone wants to run the A4 docs through a PDF converter, send 'em to\n> me and I'll post them too.\n\nTom, ps2pdf is on hub.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 23 Apr 2001 21:16:30 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Hardcopy docs available"
},
{
"msg_contents": "On Tue, 24 Apr 2001, Thomas Lockhart wrote:\n\n> If someone wants to run the A4 docs through a PDF converter, send 'em to\n> me and I'll post them too.\n\nI just did the userA4 pdf from hub with this:\n\n$ gzcat userA4.ps.gz | ps2pdf - > userA4.pdf\n\nDo you want me to do the rest of them? Or we can probably have the\nmakindex script do it for us.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 26 Apr 2001 10:00:42 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: Hardcopy docs available"
},
{
"msg_contents": "> > If someone wants to run the A4 docs through a PDF converter, send 'em to\n> > me and I'll post them too.\n> I just did the userA4 pdf from hub with this:\n> $ gzcat userA4.ps.gz | ps2pdf - > userA4.pdf\n> Do you want me to do the rest of them? Or we can probably have the\n> makindex script do it for us.\n\nSure. Anything is fine.\n\n - Thomas\n",
"msg_date": "Thu, 26 Apr 2001 14:58:14 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: Hardcopy docs available"
},
{
"msg_contents": "On Thu, 26 Apr 2001, Thomas Lockhart wrote:\n\n> > > If someone wants to run the A4 docs through a PDF converter, send 'em to\n> > > me and I'll post them too.\n> > I just did the userA4 pdf from hub with this:\n> > $ gzcat userA4.ps.gz | ps2pdf - > userA4.pdf\n> > Do you want me to do the rest of them? Or we can probably have the\n> > makindex script do it for us.\n>\n> Sure. Anything is fine.\n\nAlready done. a42ps in the doc index.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 26 Apr 2001 11:16:44 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: Hardcopy docs available"
}
] |
[
{
"msg_contents": "Hi Vince. I've put the 7.1 docs on the web site in the appropriate\nplace. Nice auto-generating web page stuff btw. \n\nI modified and compiled *but did not check into cvs* build.c to get nice\nreferences for the new Developer's and Reference docs.\n\nOK? If you want to inspect the changes, then commit (or tell me to) that\nwould be great. TIA\n\n - Thomas\n\nThe web site itself seems very slow at the moment; some robot process is\nchewing up lots of CPU so perhaps that is the reason. hub.org (which may\nactually be a different machine) is quite fast otherwise.\n",
"msg_date": "Fri, 20 Apr 2001 16:49:05 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Docs on web site"
},
{
"msg_contents": "On Fri, 20 Apr 2001, Thomas Lockhart wrote:\n\n> Hi Vince. I've put the 7.1 docs on the web site in the appropriate\n> place. Nice auto-generating web page stuff btw.\n>\n> I modified and compiled *but did not check into cvs* build.c to get nice\n> references for the new Developer's and Reference docs.\n\nI never had buildmenu.c checked in. What did you modify? I didn't\nsee anything noticeable. Anyway unless new filetypes are being introduced\nyou only need to run makeindex with the current release version number.\ne.g.\n\n$ makeindex 7.1\n\nAnd it'll find all files and create the index with the 7.1 link.\n\n> OK? If you want to inspect the changes, then commit (or tell me to) that\n> would be great. TIA\n>\n> - Thomas\n>\n> The web site itself seems very slow at the moment; some robot process is\n> chewing up lots of CPU so perhaps that is the reason. hub.org (which may\n> actually be a different machine) is quite fast otherwise.\n\nWe're working on that too.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 20 Apr 2001 13:10:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Docs on web site"
},
{
"msg_contents": "> > I modified and compiled *but did not check into cvs* build.c to get nice\n> > references for the new Developer's and Reference docs.\n> I never had buildmenu.c checked in. What did you modify? I didn't\n> see anything noticable. Anyway unless new filetypes are being introduced\n> you only need to run makeindex with the current release version number.\n> eg..\n\nThere are two new documents (formed by splitting some info from the\noriginal ones):\n\no Reference Manual\no Developer's Guide\n\nIf we check buildmenu.c into cvs then we can check for differences ;)\n\n> $ makeindex 7.1\n\nRight. Worked great, except for needing the small additions to buildmenu\nto adjust names.\n\n> And it'll find all files and create the index with the 7.1 link.\n\nYeah, neat feature.\n\n - Thomas\n",
"msg_date": "Fri, 20 Apr 2001 19:52:16 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: Docs on web site"
}
] |
[
{
"msg_contents": "You probably already thought of this - but - why not just set up a \ncentralized server and have each office interact to the db via a web \ninterface. Let your application enforce security (apacheSSL, use db for \nuser auth) and to prevent two users from editing the same record \nsimultaneously.\n\n-r\n\nAt 06:19 PM 4/20/01 -0700, Nathan Myers wrote:\n\n>On Fri, Apr 20, 2001 at 04:53:43PM -0700, G. Anthony Reina wrote:\n> > Nathan Myers wrote:\n> >\n> > > Does the replication have to be reliable? Are you equipped to\n> > > reconcile databases that have got out of sync, when it's not?\n> > > Will the different labs ever try to update the same existing\n> > > record, or insert conflicting (unique-key) records?\n> >\n> > (1) Yes, of course. (2) Willing--yes; equipped--dunno. (3) Yes,\n> > probably.\n>\n>Hmm, good luck. Replication, by itself, is not hard, but it's only\n>a tiny part of the job. Most of the job is in handling failures\n>and conflicts correctly, for some (usually enormous) definition of\n>\"correctly\".\n>\n> > > Reliable WAN replication is harder. Most of the proprietary database\n> > > companies will tell you they can do it, but their customers will tell\n> > > you they can't.\n> >\n> > Joel Burton suggested the rserv utility. I don't know how well it would\n> > work over a wide network.\n>\n>The point about WANs is that things which work nicely in the lab, on a\n>LAN, behave very differently when the communication medium is, like the\n>Internet, only fitfully reliable. You will tend to have events occurring\n>in unexpected order, and communications lost, and queues toppling over,\n>and conflicting entries in different instances which you must somehow\n>reconcile after the fact. Reconciliation by shipping the whole database\n>across the WAN is often impractical, particularly when you're trying to\n>use it at the same time.\n>\n>WAN replication is an important part of Zembu's business, and it's hard.\n>I would expect the rserv utility (about which I admit I know little) not\n>to have been designed for the job.\n>\n>Nathan Myers\n>ncm@zembu.com\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n>\n>---\n>Incoming mail is certified Virus Free.\n>Checked by AVG anti-virus system (http://www.grisoft.com).\n>Version: 6.0.250 / Virus Database: 123 - Release Date: 4/18/01\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.250 / Virus Database: 123 - Release Date: 4/18/01",
"msg_date": "Fri, 20 Apr 2001 21:45:52 +0100",
"msg_from": "Ryan Mahoney <ryan@paymentalliance.net>",
"msg_from_op": true,
"msg_subject": "Re: Re: Is it possible to mirror the db in Postgres?"
},
{
"msg_contents": "Nathan Myers wrote:\n\n> Does the replication have to be reliable? Are you equipped to\n> reconcile databases that have got out of sync, if not? Will the\n> different labs ever try to update the same existing record, or\n> insert conflicting (unique-key) records?\n>\n\n(1) Yes, of course. (2) Willing--yes; equipped--dunno. (3) Yes,\nprobably.\n\nReliable WAN replication is harder. Most of the proprietary database\ncompanies will tell you they can do it, but their customers will tell\nyou they can't.\n\nJoel Burton suggested the rserv utility. I don't know how well it would\nwork over a wide network.\n\n-Tony\n\n\n\n",
"msg_date": "Fri, 20 Apr 2001 16:53:43 -0700",
"msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>",
"msg_from_op": false,
"msg_subject": "Re: Is it possible to mirror the db in Postgres?"
},
{
"msg_contents": "On Fri, Apr 20, 2001 at 04:53:43PM -0700, G. Anthony Reina wrote:\n> Nathan Myers wrote:\n> \n> > Does the replication have to be reliable? Are you equipped to\n> > reconcile databases that have got out of sync, when it's not? \n> > Will the different labs ever try to update the same existing \n> > record, or insert conflicting (unique-key) records?\n> \n> (1) Yes, of course. (2) Willing--yes; equipped--dunno. (3) Yes,\n> probably.\n\nHmm, good luck. Replication, by itself, is not hard, but it's only\na tiny part of the job. Most of the job is in handling failures\nand conflicts correctly, for some (usually enormous) definition of\n\"correctly\".\n\n> > Reliable WAN replication is harder. Most of the proprietary database\n> > companies will tell you they can do it, but their customers will tell\n> > you they can't.\n> \n> Joel Burton suggested the rserv utility. I don't know how well it would\n> work over a wide network.\n\nThe point about WANs is that things which work nicely in the lab, on a \nLAN, behave very differently when the communication medium is, like the \nInternet, only fitfully reliable. You will tend to have events occurring\nin unexpected order, and communications lost, and queues toppling over, \nand conflicting entries in different instances which you must somehow \nreconcile after the fact. Reconciliation by shipping the whole database \nacross the WAN is often impractical, particularly when you're trying to\nuse it at the same time.\n\nWAN replication is an important part of Zembu's business, and it's hard.\nI would expect the rserv utility (about which I admit I know little) not\nto have been designed for the job.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 20 Apr 2001 18:19:58 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: Is it possible to mirror the db in Postgres?"
},
{
"msg_contents": "> Joel Burton suggested the rserv utility. I don't know how well it would\n> work over a wide network.\n\nIt should work well over a WAN for what it can do. However, that is\nasync one-way replication, which was not your initial requirement.\n\nOf course, requirements sometimes adjust in the face of reality ;)\n\n - Thomas\n",
"msg_date": "Sat, 21 Apr 2001 01:46:22 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Is it possible to mirror the db in Postgres?"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Joel Burton suggested the rserv utility. I don't know how well it would\n> > work over a wide network.\n> \n> It should work well over a WAN for what it can do. However, that is\n> async one-way replication, which was not your initial requirement.\n> \n> Of course, requirements sometimes adjust in the face of reality ;)\n\nIf it is a low-update mostly-analyze DB then updating on central db and \nthen doing async one-way replication from there may be a good strategy.\n\n-------------\nHannu\n",
"msg_date": "Sat, 21 Apr 2001 13:44:06 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: Is it possible to mirror the db in Postgres?"
}
] |
[
{
"msg_contents": "We use Postgres 7.0.3 to store data for our scientific research. We have\ntwo other labs in St. Louis, MO and Tempe, AZ. I'd like to see if\nthere's a way for them to mirror our database. They would be able to\nupdate our database when they received new results and we would be able\nto update theirs. So, in effect, we'd have 3 copies of the same db. Each\ncopy would be able to update the other.\n\nAny thoughts on if this is possible?\n\nThanks.\n-Tony Reina\n\n\n",
"msg_date": "Fri, 20 Apr 2001 15:33:38 -0700",
"msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>",
"msg_from_op": true,
"msg_subject": "Is it possible to mirror the db in Postgres?"
},
{
"msg_contents": "On Fri, Apr 20, 2001 at 03:33:38PM -0700, G. Anthony Reina wrote:\n> We use Postgres 7.0.3 to store data for our scientific research. We have\n> two other labs in St. Louis, MO and Tempe, AZ. I'd like to see if\n> there's a way for them to mirror our database. They would be able to\n> update our database when they received new results and we would be able\n> to update theirs. So, in effect, we'd have 3 copies of the same db. Each\n> copy would be able to update the other.\n> \n> Any thoughts on if this is possible?\n\nDoes the replication have to be reliable? Are you equipped to\nreconcile databases that have got out of sync, if not? Will the\ndifferent labs ever try to update the same existing record, or\ninsert conflicting (unique-key) records?\n\nSymmetric replication is easy or impossible, but usually somewhere \nin between, depending on many details. Usually when it's made to\nwork, it runs on a LAN. \n\nReliable WAN replication is harder. Most of the proprietary database \ncompanies will tell you they can do it, but their customers will tell \nyou they can't. \n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 20 Apr 2001 15:56:34 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Is it possible to mirror the db in Postgres?"
},
{
"msg_contents": "> Reliable WAN replication is harder. Most of the proprietary database \n> companies will tell you they can do it, but their customers will tell \n> you they can't. \n\nThis comment is great.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Apr 2001 19:16:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is it possible to mirror the db in Postgres?"
}
] |
[
{
"msg_contents": "This call\n\nsetuid(geteuid());\n\nis found in backend/utils/init/postinit.c. AFAICT, this does nothing.\nAnyone got an idea?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 21 Apr 2001 17:04:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "setuid(geteuid());?"
},
{
"msg_contents": "> This call\n> \n> setuid(geteuid());\n> \n> is found in backend/utils/init/postinit.c. AFAICT, this does nothing.\n> Anyone got an idea?\n\nWell, from my BSD manual it says:\n\n The setuid() function sets the real and effective user IDs and the saved\n set-user-ID of the current process to the specified value. The setuid()\n\nso it seems to make sure the real/saved uid matches the effective uid. \nNow, considering we don't use uid/euid distinction for anything, I agree\nit is useless and should be removed.\n\nI don't see any mention of getuid() except in odbc and pg_id. Seems\nthey should be geteuid() too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Apr 2001 11:07:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> so it seems to make sure the real/saved uid matches the effective uid. \n> Now, considering we don't use uid/euid distinction for anything, I agree\n> it is useless and should be removed.\n\nNo, it is NOT useless and must NOT be removed. The point of this little\nmachination is to be dead certain that we have given up root rights if\nexecuted as setuid postgres. The scenario we're concerned about is\nwhere real uid = root and effective uid = postgres. We want real uid\nto become postgres as well --- otherwise our test to prevent execution\nas root is a waste of time, because nefarious code could become root\nagain just by doing setuid. See the setuid man page: if real uid is\nroot then setuid(root) will succeed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Apr 2001 12:29:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());? "
},
{
"msg_contents": ">> so it seems to make sure the real/saved uid matches the effective uid. \n>> Now, considering we don't use uid/euid distinction for anything, I agree\n>> it is useless and should be removed.\n\n> No, it is NOT useless and must NOT be removed.\n\nBut it would make sense to move it over to main.c where the primary\ntest for not running as root is, since these are closely related\nconsiderations: we don't want either euid or ruid to be root.\n\nI'll make that change unless I hear objections...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Apr 2001 12:43:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > so it seems to make sure the real/saved uid matches the effective uid. \n> > Now, considering we don't use uid/euid distinction for anything, I agree\n> > it is useless and should be removed.\n> \n> No, it is NOT useless and must NOT be removed. The point of this little\n> machination is to be dead certain that we have given up root rights if\n> executed as setuid postgres. The scenario we're concerned about is\n> where real uid = root and effective uid = postgres. We want real uid\n> to become postgres as well --- otherwise our test to prevent execution\n> as root is a waste of time, because nefarious code could become root\n> again just by doing setuid. See the setuid man page: if real uid is\n> root then setuid(root) will succeed.\n\nI understand, but how do we get suid execution? Does someone have to\nset the setuid bit on the executable?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Apr 2001 12:51:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I understand, but how do we get suid execution. Does someone have to\n> set the seuid bit on the executable?\n\nProbably so, but I could see someone thinking they could do that as a\nsubstitute for saying \"su - postgres\" on every startup.\n\nIf we are going to take the trouble to refuse to run when euid = 0,\nthen it also behooves us to guard against ruid = 0.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Apr 2001 12:58:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I understand, but how do we get suid execution. Does someone have to\n> > set the seuid bit on the executable?\n> \n> Probably so, but I could see someone thinking they could do that as a\n> substitute for saying \"su - postgres\" on every startup.\n> \n> If we are going to take the trouble to refuse to run when euid = 0,\n> then it also behooves us to guard against ruid = 0.\n\nOK, that's what I thought. The command is not needed in our default\nconfiguration. I agree we should prevent people from setting up bad\nconfigurations if we can.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Apr 2001 13:08:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());?"
},
{
"msg_contents": "> That is a valid concern, but the code doesn't actually prevent this. I\n> just tried\n> \n> chmod u+s postgres\n> su -\n> postmaster -D ...\n> \n> Then loaded the function\n> \n> #include <postgres.h>\n> \n> int32 touch(int32 a) {\n> if (setuid(0) == -1)\n> elog(ERROR, \"setuid: %m\");\n> elog(DEBUG, \"getuid = %d, geteuid = %d\", getuid(), geteuid());\n> system(\"touch /tmp/foofile\");\n> setuid(500); /* my own */\n> return a + 1;\n> }\n> \n> and the output was\n> \n> DEBUG: getuid = 0, geteuid = 0\n> \n> and I got a file /tmp/foofile owned by root.\n> \n> ISTM that the best way to prevent this exploit would be to check for both\n> geteuid() == 0 and getuid() == 0 in main.c.\n\nPeter, can you check your setuid manual page. Is there a mention of\nspecial handling of saved-uid for root? I don't have it here on BSD/OS\nbut have heard of some os's that treat setuid differently for root.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Apr 2001 13:10:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());?"
},
{
"msg_contents": "Tom Lane writes:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > so it seems to make sure the real/saved uid matches the effective uid.\n> > Now, considering we don't use uid/euid distinction for anything, I agree\n> > it is useless and should be removed.\n>\n> No, it is NOT useless and must NOT be removed. The point of this little\n> machination is to be dead certain that we have given up root rights if\n> executed as setuid postgres. The scenario we're concerned about is\n> where real uid = root and effective uid = postgres.\n\nIf effective uid = postgres, then this will execute setuid(postgres),\nwhich does nothing.\n\n> We want real uid\n> to become postgres as well --- otherwise our test to prevent execution\n> as root is a waste of time, because nefarious code could become root\n> again just by doing setuid. See the setuid man page: if real uid is\n> root then setuid(root) will succeed.\n\nThat is a valid concern, but the code doesn't actually prevent this. I\njust tried\n\nchmod u+s postgres\nsu -\npostmaster -D ...\n\nThen loaded the function\n\n#include <postgres.h>\n\nint32 touch(int32 a) {\n if (setuid(0) == -1)\n elog(ERROR, \"setuid: %m\");\n elog(DEBUG, \"getuid = %d, geteuid = %d\", getuid(), geteuid());\n system(\"touch /tmp/foofile\");\n setuid(500); /* my own */\n return a + 1;\n}\n\nand the output was\n\nDEBUG: getuid = 0, geteuid = 0\n\nand I got a file /tmp/foofile owned by root.\n\nISTM that the best way to prevent this exploit would be to check for both\ngeteuid() == 0 and getuid() == 0 in main.c.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 21 Apr 2001 19:17:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: setuid(geteuid());? "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> We want real uid\n>> to become postgres as well --- otherwise our test to prevent execution\n>> as root is a waste of time, because nefarious code could become root\n>> again just by doing setuid. See the setuid man page: if real uid is\n>> root then setuid(root) will succeed.\n\n> That is a valid concern, but the code doesn't actually prevent this.\n\nAfter reading the setuid man page a third time, I think you are right.\n\nOn machines that have setreuid(), or even better setresuid(), we could\nforce the ruid (and suid for good measure) to match euid. Otherwise we\nprobably should refuse to start unless getuid matches geteuid.\n\nHmm ... setresuid may be an HP-ism ... does anyone else have that?\nsetreuid appears to be a BSD-ism, so it ought to be reasonably popular.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Apr 2001 13:29:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());? "
},
{
"msg_contents": "I said:\n> On machines that have setreuid(), or even better setresuid(), we could\n> force the ruid (and suid for good measure) to match euid. Otherwise we\n> probably should refuse to start unless getuid matches geteuid.\n\nBut on third thought, it's not worth the trouble of adding two more\nconfigure tests to support a configuration that I doubt anyone uses\nanyway (ie, setuid postgres executable). Let's just remove the setuid()\nand add a check for getuid() == geteuid() in main.c.\n\nPeter, unless you've already started in on this, I can take care of it\n--- I see a couple of other nits I want to fix in those two files, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Apr 2001 13:47:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());? "
},
{
"msg_contents": "Tom Lane writes:\n\n> I said:\n> > On machines that have setreuid(), or even better setresuid(), we could\n> > force the ruid (and suid for good measure) to match euid. Otherwise we\n> > probably should refuse to start unless getuid matches geteuid.\n>\n> But on third thought, it's not worth the trouble of adding two more\n> configure tests to support a configuration that I doubt anyone uses\n> anyway (ie, setuid postgres executable). Let's just remove the setuid()\n> and add a check for getuid() == geteuid() in main.c.\n>\n> Peter, unless you've already started in on this, I can take care of it\n> --- I see a couple of other nits I want to fix in those two files, too.\n\nPlease do. I just ran across it for the session authorization deal, which\nwould be easy to merge.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 21 Apr 2001 20:13:57 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: setuid(geteuid());? "
},
{
"msg_contents": "> > That is a valid concern, but the code doesn't actually prevent this.\n> \n> After reading the setuid man page a third time, I think you are right.\n> \n> On machines that have setreuid(), or even better setresuid(), we could\n> force the ruid (and suid for good measure) to match euid. Otherwise we\n> probably should refuse to start unless getuid matches geteuid.\n> \n> Hmm ... setresuid may be an HP-ism ... does anyone else have that?\n> setreuid appears to be a BSD-ism, so it ought to be reasonably popular.\n\nI have setreuid on BSD/OS, no setresuid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Apr 2001 15:57:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());?"
},
{
"msg_contents": "> I said:\n> > On machines that have setreuid(), or even better setresuid(), we could\n> > force the ruid (and suid for good measure) to match euid. Otherwise we\n> > probably should refuse to start unless getuid matches geteuid.\n> \n> But on third thought, it's not worth the trouble of adding two more\n> configure tests to support a configuration that I doubt anyone uses\n> anyway (ie, setuid postgres executable). Let's just remove the setuid()\n> and add a check for getuid() == geteuid() in main.c.\n> \n> Peter, unless you've already started in on this, I can take care of it\n> --- I see a couple of other nits I want to fix in those two files, too.\n\nGood. That function call clearly needs a comment added.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Apr 2001 15:58:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());?"
},
{
"msg_contents": "> Tom Lane writes:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > so it seems to make sure the real/saved uid matches the effective uid.\n> > > Now, considering we don't use uid/euid distinction for anything, I agree\n> > > it is useless and should be removed.\n> >\n> > No, it is NOT useless and must NOT be removed. The point of this little\n> > machination is to be dead certain that we have given up root rights if\n> > executed as setuid postgres. The scenario we're concerned about is\n> > where real uid = root and effective uid = postgres.\n> \n> If effective uid = postgres, then this will execute setuid(postgres),\n> which does nothing.\n\nI am a little confused. BSD/OS manual page says:\n\n The setuid() function sets the real and effective user IDs and the saved\n set-user-ID of the current process to the specified value. The setuid()\n function is permitted if the specified ID is equal to the real user ID of\n the process, or if the effective user ID is that of the super user.\n\n...\n\n If the user is not the super user, or the uid specified is not the real,\n effective ID, or saved ID, these functions return -1.\n\nso why does your test work? Does your manual say something different?\nIf setuid() sets user/effective/saved to postgres, how can you get back\nroot?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Apr 2001 16:03:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n>> so why does your test work? Does your manual say something different?\n>> If setuid() sets user/effective/saved to postgres, how can you get back\n>> root?\n\n> : setuid sets the effective user ID of the current process. If the\n> : effective userid of the caller is root, the real and saved user ID's\n> : are also set.\n\nHPUX has an even more bizarre definition:\n\n setuid() sets the real-user-ID (ruid),effective-user-ID (euid), and/or\n saved-user-ID (suid) of the calling process. The super-user's euid is\n zero. The following conditions govern setuid's behavior:\n\n o If the euid is zero, setuid() sets the ruid, euid, and suid to\n uid.\n\n o If the euid is not zero, but the argument uid is equal to the\n ruid or the suid, setuid() sets the euid to uid; the ruid and\n suid remain unchanged. (If a set-user-ID program is not\n running as super-user, it can change its euid to match its\n ruid and reset itself to the previous euid value.)\n\n o If euid is not zero, but the argument uid is equal to the\n euid, and the calling process is a member of a group that has\n the PRIV_SETRUGID privilege (see privgrp(4)), setuid() sets\n the ruid to uid; the euid and suid remain unchanged.\n\nRule #2 is what creates the security hole. Rule #3 would allow us to\nplug the hole, but only if we have PRIV_SETRUGID...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Apr 2001 16:42:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());? "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> so why does your test work? Does your manual say something different?\n> If setuid() sets user/effective/saved to postgres, how can you get back\n> root?\n\n: setuid sets the effective user ID of the current process. If the\n: effective userid of the caller is root, the real and saved user ID's\n: are also set.\n:\n: Under Linux, setuid is implemented like the POSIX version with the\n: _POSIX_SAVED_IDS feature. This allows a setuid (other than root)\n: program to drop all of its user privileges, do some un-privileged\n: work, and then re-engage the original effective user ID in a secure\n: manner.\n\nI suppose your system doesn't have the _POSIX_SAVED_IDS feature.\n\nI also have:\n\n: CONFORMING TO\n: SVr4, SVID, POSIX.1. Not quite compatible with the 4.4BSD call,\n: which sets all of the real, saved, and effective user IDs.\n\nOn your system you would have to use seteuid() to do what setuid() does\nhere.\n\nOne more reason to avoid this area when possible.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 21 Apr 2001 22:43:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: setuid(geteuid());?"
},
{
"msg_contents": "> HPUX has an even more bizarre definition:\n> \n> setuid() sets the real-user-ID (ruid),effective-user-ID (euid), and/or\n> saved-user-ID (suid) of the calling process. The super-user's euid is\n> zero. The following conditions govern setuid's behavior:\n> \n> o If the euid is zero, setuid() sets the ruid, euid, and suid to\n> uid.\n> \n> o If the euid is not zero, but the argument uid is equal to the\n> ruid or the suid, setuid() sets the euid to uid; the ruid and\n> suid remain unchanged. (If a set-user-ID program is not\n> running as super-user, it can change its euid to match its\n> ruid and reset itself to the previous euid value.)\n> \n> o If euid is not zero, but the argument uid is equal to the\n> euid, and the calling process is a member of a group that has\n> the PRIV_SETRUGID privilege (see privgrp(4)), setuid() sets\n> the ruid to uid; the euid and suid remain unchanged.\n> \n> Rule #2 is what creates the security hole. Rule #3 would allow us to\n> plug the hole, but only if we have PRIV_SETRUGID...\n\nI don't even want to twist my brain far enough to understand this. So\nbasically. BSD/OS is safe with a seteuid executable if we keep the\nsetuid(geteuid()) call, while other OS's have serious problems we can't\nplug. I knew there was some OS-specific stuff in setuid. Seems a check\nthat uid and euid are the same and not root is the way to go.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Apr 2001 17:53:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setuid(geteuid());?"
}
] |
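The check Bruce settles on at the end of this thread — refuse to start when running as effective root, or when the real and effective user IDs differ (i.e. the binary was started setuid) — can be sketched without touching the OS-specific setuid() semantics at all. `ids_are_safe` is a hypothetical helper name for illustration; a caller would invoke it as `ids_are_safe(getuid(), geteuid())`:

```c
#include <unistd.h>
#include <sys/types.h>

/* Sketch of the startup check discussed in the thread: accept only when
   the process is not effective-root and was not started from a
   setuid/seteuid binary (real and effective IDs match).  Returns 1 when
   it is safe to proceed, 0 otherwise. */
int ids_are_safe(uid_t ruid, uid_t euid)
{
    if (euid == 0)          /* effective root: refuse outright */
        return 0;
    if (ruid != euid)       /* IDs diverge: setuid binary, refuse */
        return 0;
    return 1;
}
```

This sidesteps the _POSIX_SAVED_IDS portability mess entirely: instead of trying to drop privileges correctly on every platform, the server simply refuses to run in the dangerous configurations.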
[
{
"msg_contents": "Is there an easy way to read the WAL files generated by Postgres? I'm\nlooking into writing a replication daemon for postgres and think that\nthe WAL files are the best way to know what has happened to the db and\nwhat has to be replicated. I have a roughed out idea of how to code it\nup but the only thing I'm really wrestling with is how to read the WAL\nfiles. Can anyone give me some pointers? I've looked at all the xlog*\ncode and have a basic understanding of how it works, but not a real good\none. Any help on this would be appreciated. Thanks\n",
"msg_date": "Sun, 22 Apr 2001 17:36:58 +0500",
"msg_from": "Chad La Joie <clajoie@vt.edu>",
"msg_from_op": true,
"msg_subject": "Replication through WAL"
},
{
"msg_contents": "> Is there an easy way to read the WAL files generated by Postgres? I'm\n> looking into writting a replication deamon for postgres and think that\n> the WAL files are the best way to know what has happened to the db and\n> what has to be replicated. I have a roughed out idea of how to code it\n> up but the only thing I'm really wrestling with is how read the WAL\n> files. Can anyone give me some pointers. I've looked at all the xlog*\n> code and have a basic understading of how it works, but not a real good\n> one. Any help on this would be appreciated. Thanks\n\nMany believe WAL will be the basis for further replication features. \nWe will discuss this as part of 7.2 development in a few weeks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 22 Apr 2001 21:21:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replication through WAL"
},
{
"msg_contents": "Bruce Momjian wrote:\nOkay, would it be helpful if I made a few suggestions on things that I\nas a user/tool developer of postgres might find helpful?\n\n> \n> > Is there an easy way to read the WAL files generated by Postgres? I'm\n> > looking into writting a replication deamon for postgres and think that\n> > the WAL files are the best way to know what has happened to the db and\n> > what has to be replicated. I have a roughed out idea of how to code it\n> > up but the only thing I'm really wrestling with is how read the WAL\n> > files. Can anyone give me some pointers. I've looked at all the xlog*\n> > code and have a basic understading of how it works, but not a real good\n> > one. Any help on this would be appreciated. Thanks\n> \n> Many believe WAL will be the basis for further replication features.\n> We will discuss this as part of 7.2 development in a few weeks.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Mon, 23 Apr 2001 07:29:48 +0500",
"msg_from": "Chad La Joie <clajoie@vt.edu>",
"msg_from_op": true,
"msg_subject": "Re: Replication through WAL"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> Okay, would it be helpful if I made a few suggestions on things that I\n> as a user/tool developer of postgres might find helpful?\n\nNot sure. I recommend hanging around, and when the discussion starts,\nyou can add things.\n\n> \n> > \n> > > Is there an easy way to read the WAL files generated by Postgres? I'm\n> > > looking into writting a replication deamon for postgres and think that\n> > > the WAL files are the best way to know what has happened to the db and\n> > > what has to be replicated. I have a roughed out idea of how to code it\n> > > up but the only thing I'm really wrestling with is how read the WAL\n> > > files. Can anyone give me some pointers. I've looked at all the xlog*\n> > > code and have a basic understading of how it works, but not a real good\n> > > one. Any help on this would be appreciated. Thanks\n> > \n> > Many believe WAL will be the basis for further replication features.\n> > We will discuss this as part of 7.2 development in a few weeks.\n> > \n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 23 Apr 2001 10:31:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Replication through WAL"
}
] |
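A daemon that follows the WAL files as proposed here would, at minimum, need to pick up the bytes appended to a segment since its last pass. The sketch below shows only that tailing step, with plain stdio: it says nothing about the actual WAL record format (which the xlog code defines), and `read_appended` is a made-up helper name. The caller owns the returned buffer.

```c
#include <stdio.h>
#include <stdlib.h>

/* Read any bytes appended to `path` since byte offset `*offset` into a
   malloc'd buffer at `*out`; advance *offset past them.  Returns the
   number of new bytes, 0 if nothing was appended, -1 on open failure.
   A real replication daemon would then decode WAL records from the
   buffer and ship them to the replica. */
long read_appended(const char *path, long *offset, char **out)
{
    FILE *f = fopen(path, "rb");
    long end, n;

    if (f == NULL)
        return -1;
    fseek(f, 0, SEEK_END);
    end = ftell(f);
    n = end - *offset;
    if (n <= 0) {
        fclose(f);
        return 0;
    }
    *out = malloc((size_t) n);
    fseek(f, *offset, SEEK_SET);
    fread(*out, 1, (size_t) n, f);
    fclose(f);
    *offset = end;
    return n;
}
```

The hard parts the thread alludes to — record boundaries, segment rollover, checkpoints — all live above this layer.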
[
{
"msg_contents": "It's nothing but unused definitions. PostgreSQL 7.1 compiles and works for me.\n\nI got the following warnings for PL/PgSQL\n\nmake[2]: `/opt/rh7/postgresql/postgresql-7.1/src/pl'\nmake[3]: `/opt/rh7/postgresql/postgresql-7.1/src/pl/plpgsql'\nmake -C src all\nmake[4]: `/opt/rh7/postgresql/postgresql-7.1/src/pl/plpgsql/src'\ngcc -c -I. -I../../../../src/include -I/usr/local/ssl/include -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic\n -o pl_parse.o pl_gram.c\nIn file included from gram.y:44:\nlex.plpgsql_yy.c: In function `plpgsql_yylex':\nlex.plpgsql_yy.c:972: warning: label `find_rule' defined but not used\ngram.y: At top level:\nlex.plpgsql_yy.c:2223: warning: `plpgsql_yy_flex_realloc' defined but not used\n\nHope this helps,\n\n--\nYasuo Ohgaki\n\n\n\n",
"msg_date": "Mon, 23 Apr 2001 12:03:54 +0900",
"msg_from": "\"Yasuo Ohgaki\" <yohgaki@hotmial.com>",
"msg_from_op": true,
"msg_subject": "Compiler warning (V7.1 plpgsql)"
}
] |
[
{
"msg_contents": "What I'd like to see in 7.2 is a WAL API with the following\nfunctionality:\n * Get the latest transaction in the WAL\n * Get transaction, transId, from the WAL\n * Was a given transaction rolled back?\n\nWhat I don't want to have to worry about is all the internals needed for\nwriting the log. I shouldn't need to know if a new segment was\ncreated, when and where checkpoints are made, etc.\n\nPerhaps there could be a small library for just reading stuff from the WAL\nfiles?\n\n\nThe second suggestion I have is one I already alluded to and would offer\nto help work on. Namely replication for Postgres. Given the simple API\nlisted above I already have a roughed out design for a replication daemon.\n\nJust some suggestions.\n",
"msg_date": "Mon, 23 Apr 2001 08:32:16 +0500",
"msg_from": "Chad La Joie <clajoie@vt.edu>",
"msg_from_op": true,
"msg_subject": "7.2 feature request"
}
] |
[
{
"msg_contents": "\nI am trying to add another authentication mechanism to PostgreSQL... And,\nin doing that, I need to verify the existence of a user within PG. Short\nof hacking together code from verify_password(), is there any way to check\nif a user exists in postgresql? (The actual password verification will be\ntaken care of elsewhere... I just need to check if the user exists.)\n\n\nthanks,\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Mon, 23 Apr 2001 01:14:03 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": true,
"msg_subject": "How to determine if a user exists.."
},
{
"msg_contents": "On Mon, 23 Apr 2001, Dominic J. Eidson wrote:\n\n> I am trying to add another authentication mechanism to PostgreSQL... And,\n> in doing that, I need to verify the existance of an user within PG. Short\n> of hacking together code from verify_password(), is there any way to check\n> if a user exists in postgresql? (The actuall password verification will be\n> taken care of elsewhere... I just need to check if the user exists.)\n\npg_user holds users\n\n(passwords in pg_shadow)\n\nHTH,\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Mon, 23 Apr 2001 09:05:07 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: How to determine if a user exists.."
},
{
"msg_contents": "On Mon, 23 Apr 2001, Joel Burton wrote:\n\n> pg_user holds users\n> \n> (passwords in pg_shadow)\n\nI doubt the -hackers people would let me add SPI_* stuff into libpq, just\nto retrieve whether a user exists or not.. My first thought was to check\nthe existence of users against $PGDATA/pg_pwd... One question I'd have\nthere, is whether pg_pwd always exists (or can be relied upon to exist)?\n\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Mon, 23 Apr 2001 08:43:40 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": true,
"msg_subject": "Re: How to determine if a user exists.."
},
{
"msg_contents": "\"Dominic J. Eidson\" <sauron@the-infinite.org> writes:\n> I am trying to add another authentication mechanism to PostgreSQL... And,\n> in doing that, I need to verify the existance of an user within PG. Short\n> of hacking together code from verify_password(), is there any way to check\n> if a user exists in postgresql?\n\nIf you're trying to do this from the postmaster, I think the only way is\nto look at $PGDATA/global/pg_pwd, which is a flat-file version of\npg_shadow.\n\nYou'd be well advised to study the existing verification mechanisms in\nsrc/backend/libpq/.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Apr 2001 10:02:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to determine if a user exists.. "
},
{
"msg_contents": "On Mon, 23 Apr 2001, Tom Lane wrote:\n\n> If you're trying to do this from the postmaster, I think the only way is\n> to look at $PGDATA/global/pg_pwd, which is a flat-file version of\n> pg_shadow.\n\nThis is what I thought - thanks.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Mon, 23 Apr 2001 09:14:50 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": true,
"msg_subject": "Re: How to determine if a user exists.. "
},
{
"msg_contents": "Dominic J. Eidson writes:\n\n> On Mon, 23 Apr 2001, Joel Burton wrote:\n>\n> > pg_user holds users\n> >\n> > (passwords in pg_shadow)\n>\n> I doubt the -hackers people would let me add SPI_* stuff into libpq, just\n> to retrieve whether a user exists or not..\n\nYou wouldn't have to do that. There are better ways to read system tables\nin the backend. See FAQ_DEV.\n\n> My first thought was to check\n> the existance of users against $PGDATA/pg_pwd... One question I'd have\n> there, is whether pg_pwd always exists (or, can be relied upon existing.)?\n\nNo it doesn't and no you can't.\n\nThe best way to verify a user's existence in the context of a new\nauthentication method is to not do that at all. None of the other methods\ndo it, the existence of a user is checked when authentication has\ncompleted and the backend starts.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 23 Apr 2001 17:34:55 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: How to determine if a user exists.."
}
] |
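For what it's worth, the flat-file lookup Tom describes reduces to scanning the first field of each line. The sketch below assumes a simple username-first, tab-separated layout — the real pg_pwd format is defined in src/backend/libpq/crypt.c and may quote or order fields differently — and, as Peter points out, a new auth method normally shouldn't need this check at all, since user existence is verified when the backend starts:

```c
#include <stdio.h>
#include <string.h>

/* Return 1 if `user` appears as the first tab-separated field of any
   line in the flat file at `path`, 0 otherwise (including when the
   file cannot be opened).  Illustrative only: consult crypt.c for the
   layout pg_pwd actually uses. */
int user_exists_in_file(const char *path, const char *user)
{
    char line[1024];
    size_t ulen = strlen(user);
    FILE *f = fopen(path, "r");

    if (f == NULL)
        return 0;
    while (fgets(line, sizeof(line), f) != NULL) {
        /* match the whole first field, not just a prefix */
        if (strncmp(line, user, ulen) == 0 &&
            (line[ulen] == '\t' || line[ulen] == '\n')) {
            fclose(f);
            return 1;
        }
    }
    fclose(f);
    return 0;
}
```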
[
{
"msg_contents": "I have a question about pg_statistic: Can we safely remove all records\nfrom pg_statistic? In my understanding, we could do it safely, since\nthe table is just a result holder of vacuum analyze.\n--\nTatsuo Ishii\n\n",
"msg_date": "Mon, 23 Apr 2001 15:25:15 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "pg_statistic"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have a question about pg_statistic: Can we safely remove all records\n> from pg_statistic?\n\nSure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Apr 2001 09:41:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_statistic "
}
] |
[
{
"msg_contents": "\n> >> But you don't really need to look at the index (if it even exists\n> >> at the time you do the ANALYZE). The extent to which the data is\n> >> ordered in the table is a property of the table, not the index.\n> \n> > Think compound, ascending, descending and functional index.\n> > The (let's call it) cluster statistic for estimating indexscan cost can only \n> > be deduced from the index itself (for all but the simplest one column btree).\n> \n> If you want to write code that handles those cases, go right ahead ;-).\n> I think it's sufficient to look at the first column of a multicolumn\n> index for cluster-order estimation\n\nI often see first index columns that are even unique when the appl is installed\nfor a small company (like a company id column (e.g. \"mandt\" in SAP)).\n\n> --- remember all these numbers are pretty crude anyway.\n\nOk, you want to supply a value, that shows how well sorted single\ncolumns are in regard to < >. Imho this value should be stored in pg_attribute.\nLater someone can add a statistic to pg_index that shows how well clustered the index is.\nIn lack of a pg_index statistic the optimizer uses the pg_attribute value of the first \nindex column. I think that would be a good plan.\n\n> We have no such thing as a \"descending index\";\n> and I'm not going to worry about clustering estimation for functional\n> indexes.\n\nOk, an approach that reads ctid pointers from the index in index order\nwould not need to worry about how the index is actually filled. It would need\na method to sample (or read all) ctid pointers from the index in index order.\n\nAndreas\n",
"msg_date": "Mon, 23 Apr 2001 12:34:05 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: RFC: planner statistics in 7.2 "
}
] |
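One way to make the proposed per-column "how well sorted" value concrete: scan a sample of the column in heap order and measure how often adjacent values are already ascending. This is only an illustrative metric with a made-up name — not necessarily the cluster statistic the planner work ends up using — but it shows the kind of number that could live in pg_attribute:

```c
/* Fraction of adjacent value pairs, taken in heap (physical) order,
   that are already ascending.  1.0 means perfectly clustered for an
   ascending scan; values near 0.5 suggest random placement. */
double order_fraction(const int *vals, int n)
{
    int i, in_order = 0;

    if (n < 2)
        return 1.0;             /* trivially ordered */
    for (i = 1; i < n; i++)
        if (vals[i - 1] <= vals[i])
            in_order++;
    return (double) in_order / (n - 1);
}
```

An index-driven variant, as Andreas suggests, would instead walk ctid pointers in index order and apply the same idea to their heap positions.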
[
{
"msg_contents": "Hi,\n\nI'm about to write a simple one-way replication script relying on xmin\nand would like to speed up things by putting an index on it.\n\nSo I have a few questions:\n\n1. Will something bad happen if I put index on xmin ?\n\n2. Is it just a bad idea to do it that way ?\n (there will be no deletes, just mainly inserts and some updates)\n\n\n------------\nHannu\n\n",
"msg_date": "Mon, 23 Apr 2001 14:44:38 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Will something bad happen if I put index on xmin ?"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> 1. Will something bad happen if I put index on xmin ?\n\nI was just testing that sort of thing yesterday. pg_dump prior to\nyesterday's patch will crash upon seeing such an index, but that was\nthe only problem I found.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Apr 2001 09:48:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Will something bad happen if I put index on xmin ? "
}
] |
[
{
"msg_contents": "I noticed in the documentation that row length is unlimited. I think I\ntook that to mean row name length is now unlimited. But, row\nname length still appears to be set to a static width. Do I still need to\nrecompile postgres to get 64 character row headers?\n\nPostgres 7.1 RPMS\nRedhat 6.2\n\n\nHelp is always appreciated\n\n-- \nAdam Rose\n\n\n",
"msg_date": "Mon, 23 Apr 2001 09:41:58 -0500 (CDT)",
"msg_from": "Adam Rose <adamr@eaze.net>",
"msg_from_op": true,
"msg_subject": "row name length"
},
{
"msg_contents": "Adam Rose writes:\n\n> I noticed in the documentation that row length is unlimited. I think I\n> took that to mean row name length is now unlimited. But, row\n\nYou took that wrong...\n\n> name length still appears to be set to a static width. Do I still need to\n> recompile postgres to get 64 character row headers?\n\nYes.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 23 Apr 2001 17:36:29 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: row name length"
},
{
"msg_contents": "Thanks for you help\n\nOn Mon, 23 Apr 2001, Peter Eisentraut wrote:\n\n> Adam Rose writes:\n> \n> > I noticed in the documentation that row length is unlimited. I think I\n> > took that to mean row name length is now unlimited. But, row\n> \n> You took that wrong...\n> \n> > name length still appears to be set to a static width. Do I still need to\n> > recompile postgres to get 64 character row headers?\n> \n> Yes.\n> \n> \n\n-- \nAdam Rose\n\n\n",
"msg_date": "Mon, 23 Apr 2001 11:08:12 -0500 (CDT)",
"msg_from": "Adam Rose <adamr@eaze.net>",
"msg_from_op": true,
"msg_subject": "Re: row name length"
},
{
"msg_contents": "Just a question, where is NAMEDATALEN now in 7.1, I didn't see it in\npostgres.h. If this is no longer used to change column name length, what\nis? Your help is appreciated.\n\n- \nAdam Rose\n\n",
"msg_date": "Mon, 23 Apr 2001 12:29:54 -0500 (CDT)",
"msg_from": "Adam Rose <adamr@eaze.net>",
"msg_from_op": true,
"msg_subject": "Re: row name length"
},
{
"msg_contents": "Is anyone else seeing this?\n\nI have the current CVS sources and \"make check\" ends up with one\nfailure. My regression.diffs shows:\n\n\n*** ./expected/join.out Thu Dec 14 17:30:45 2000\n--- ./results/join.out Mon Apr 23 20:23:15 2001\n***************\n*** 1845,1851 ****\n -- UNION JOIN isn't implemented yet\n SELECT '' AS \"xxx\", *\n FROM J1_TBL UNION JOIN J2_TBL;\n! ERROR: UNION JOIN is not implemented yet\n --\n -- Clean up\n --\n--- 1845,1851 ----\n -- UNION JOIN isn't implemented yet\n SELECT '' AS \"xxx\", *\n FROM J1_TBL UNION JOIN J2_TBL;\n! ERROR: parser: parse error at or near \"JOIN\"\n --\n -- Clean up\n --\n\n======================================================================\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 23 Apr 2001 21:10:21 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "ERROR: parser: parse error at or near \"JOIN\""
},
{
"msg_contents": "Please disregard this. This message was held by Majordomo for a couple\nof days and I have already resent it.\n\nTom Lane has already solved my problem (I had a miscompiled version of\nbison in my machine).\n\nRegards to all,\nFernando\n\n\nFernando Nasser wrote:\n> \n> Is anyone else seeing this?\n> \n> I have the current CVS sources and \"make check\" ends up with one\n> failure. My regression.diffs shows:\n> \n> *** ./expected/join.out Thu Dec 14 17:30:45 2000\n> --- ./results/join.out Mon Apr 23 20:23:15 2001\n> ***************\n> *** 1845,1851 ****\n> -- UNION JOIN isn't implemented yet\n> SELECT '' AS \"xxx\", *\n> FROM J1_TBL UNION JOIN J2_TBL;\n> ! ERROR: UNION JOIN is not implemented yet\n> --\n> -- Clean up\n> --\n> --- 1845,1851 ----\n> -- UNION JOIN isn't implemented yet\n> SELECT '' AS \"xxx\", *\n> FROM J1_TBL UNION JOIN J2_TBL;\n> ! ERROR: parser: parse error at or near \"JOIN\"\n> --\n> -- Clean up\n> --\n> \n\n-- \nFernando Nasser\nRed Hat Inc. E-Mail: fnasser@redhat.com\n",
"msg_date": "Thu, 26 Apr 2001 10:26:48 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: parser: parse error at or near \"JOIN\""
}
] |
[
{
"msg_contents": "\nFolks:\n I'm planning a port of Postgres to a multiprocessor\narchitecture in which all nodes have both local memory \nand fast access to a shared memory. Shared memory is more\nexpensive than local memory.\n\n\tMy intent is to put the shmem & lock structures in\nshared memory, but use a copy-in / copy-out approach to\nmaintain coherence in the buffer cache:\n\t- copy buffer from shared memory on buffer allocate\n\t- write back buffer to shared memory when it is dirtied.\n\n\tIs that enough ?\n\n\tThe idea sketch is as follows (mostly, changes\ncontained in storage/buffer/bufmgr.c):\n\n\t-change BufferAlloc, etc, to create a node-local copy\nof the buffer (from shared memory). Copy both the BufferDesc\nentry and the buffer->data array\n\n\t-change WriteBuffer to copy the (locally changed) buffer\n\tto shared memory (this is the point in which the BM_DIRTY\n\tbit is set). [ I am assuming the buffer is locked & this\n\tis a safe time to make the buffer visible to other backends].\n\n[Assume, for this discussion, that the sem / locks structs in\nshared memory have been ported & work ]. Ditto for the hash access.\n\n\tMy concern is whether that is enough to maintain consistency\nin the buffer cache (i.e, are there other places in the code\nwhere a backend might have a leftover pointer to somewhere in\nthe buffer cache ? ) Because, in the scheme above, the buffer\ncache is not directly accessible to the backend except via this\ncopy in / copy -out approach.\n\n\t[BTW, I think this might be a way of providing a 'cluster'\nversion of Postgres, by using some global communication module to \nobtain/post the 'buffer cache' values]\n\n\t\tthanks\n\t\t\tregards\n\t\t\t\tMauricio\n\nMauricio Breternitz Jr, Ph.D.\nTimes N Systems Inc.\n1908 Kramer Ln, Braker Building B, Suite P\nAustin, TX 78758\nphone (512) 977 5368\nmauriciob@timesn.com \n",
"msg_date": "Mon, 23 Apr 2001 11:16:04 -0500",
"msg_from": "Mauricio Breternitz <MauricioB@timesn.com>",
"msg_from_op": true,
"msg_subject": "concurrent postgres in NUMA cluster postgres - design OK ?"
}
] |
[
{
"msg_contents": "\nFolks:\n I'm planning a port of Postgres to a multiprocessor\narchitecture in which all nodes have both local memory\nand fast access to a shared memory. Shared memory is more\nexpensive than local memory.\n\n\tMy intent is to put the shmem & lock structures in\nshared memory, but use a copy-in / copy-out approach to\nmaintain coherence in the buffer cache:\n\t- copy buffer from shared memory on buffer allocate\n\t- write back buffer to shared memory when it is dirtied.\n\n\tIs that enough ?\n\n\tThe idea sketch is as follows (mostly, changes\ncontained in storage/buffer/bufmgr.c):\n\n\t-change BufferAlloc, etc, to create a node-local copy\nof the buffer (from shared memory). Copy both the BufferDesc\nentry and the buffer->data array\n\n\t-change WriteBuffer to copy the (locally changed) buffer\n\tto shared memory (this is the point in which the BM_DIRTY\n\tbit is set). [ I am assuming the buffer is locked & this\n\tis a safe time to make the buffer visible to other backends].\n\n[Assume, for this discussion, that the sem / locks structs in\nshared memory have been ported & work ]. Ditto for the hash access.\n\n\tMy concern is whether that is enough to maintain consistency\nin the buffer cache (i.e, are there other places in the code\nwhere a backend might have a leftover pointer to somewhere in\nthe buffer cache ? ) Because, in the scheme above, the buffer\ncache is not directly accessible to the backend except via this\ncopy in / copy -out approach.\n\n\t[BTW, I think this might be a way of providing a 'cluster'\nversion of Postgres, by using some global communication module to\nobtain/post the 'buffer cache' values]\n\n\t\tthanks\n\t\t\tregards\n\t\t\t\tMauricio\n\n mbjsql@hotmail.com\n\n\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n",
"msg_date": "Mon, 23 Apr 2001 11:41:31 -0500",
"msg_from": "\"Mauricio Breternitz\" <mbjsql@hotmail.com>",
"msg_from_op": true,
"msg_subject": "concurrent Postgres on NUMA - howto ? "
},
{
"msg_contents": "\"Mauricio Breternitz\" <mbjsql@hotmail.com> writes:\n> \tMy concern is whether that is enough to maintain consistency\n> in the buffer cache\n\nNo, it isn't --- for one thing, WriteBuffer wouldn't cause other\nbackends to update their copies of the page. At the very least you'd\nneed to synchronize where the LockBuffer calls are, not where\nWriteBuffer is called.\n\nI really question whether you want to do anything like this at all.\nSeems like accessing the shared buffers right where they are will be\nfastest; your approach will entail a huge amount of extra data copying.\nConsidering that a backend doesn't normally touch every byte on a page\nthat it accesses, I wouldn't be surprised if full-page copying would\nnet out to being more shared-memory traffic, rather than less.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Apr 2001 19:43:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: concurrent Postgres on NUMA - howto ? "
}
] |
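The copy-in / copy-out scheme under discussion reduces to a few memcpy steps per page. The toy model below is a sketch only: no locking, no shared BufferDesc bookkeeping, and — per Tom's objection — it copies whole pages even when a backend touches only a few bytes:

```c
#include <string.h>

#define BLCKSZ 8192             /* PostgreSQL page size */

/* Node-local copy of one shared buffer page, written back to the
   shared region only when dirtied. */
typedef struct {
    char data[BLCKSZ];
    int  dirty;
} LocalBuffer;

/* BufferAlloc-time step: pull the page out of shared memory. */
void buffer_copy_in(LocalBuffer *local, const char *shared_page)
{
    memcpy(local->data, shared_page, BLCKSZ);
    local->dirty = 0;
}

/* Local modification: only the node-local copy changes. */
void buffer_write(LocalBuffer *local, int off, const char *src, int len)
{
    memcpy(local->data + off, src, (size_t) len);
    local->dirty = 1;
}

/* WriteBuffer analogue: publish local changes to shared memory. */
void buffer_copy_out(LocalBuffer *local, char *shared_page)
{
    if (local->dirty) {
        memcpy(shared_page, local->data, BLCKSZ);
        local->dirty = 0;
    }
}
```

Even as a toy it makes the coherence gap visible: between buffer_write and buffer_copy_out, other nodes still see the stale page, which is why Tom pushes the synchronization down to the LockBuffer points.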
[
{
"msg_contents": "Postgresql Programmer's Guide\nby Thomas Lockhart, Thomas Lochart (Editor)\n\n http://www.amazon.com/exec/obidos/ASIN/0595149170/ref=pd_sim_elt_l1/107-6921356-0996510\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 23 Apr 2001 13:07:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Look what book I found"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Postgresql Programmer's Guide\n> by Thomas Lockhart, Thomas Lochart (Editor)\n> \n> http://www.amazon.com/exec/obidos/ASIN/0595149170/ref=pd_sim_elt_l1/107-6921356-0996510\n\nOh! Have you changed the PostgreSQL logo to an unplugged old macintosh\nmouse ?\n\n-----------------\nHannu\n",
"msg_date": "Tue, 24 Apr 2001 11:33:43 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Look what book I found"
}
] |
[
{
"msg_contents": "Actually, the text I quoted was wrong. It was from the Amazon web page.\nThe book cover says:\n\n\tPostgresql Programmer's Guide\n\tby The PostgreSQL Development Team\n\tEdited by Thomas Lochart\n\nAlso, somone reviewed my book at:\n\n\thttp://Linuxiso.org/bookreviews/postgresql.html\n\nThis is how I found about about this other book.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 23 Apr 2001 13:09:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "book I found"
}
] |
[
{
"msg_contents": "\nAnyone thought of implementing this, similar to how sendmail does it? If\nload > n, refuse connections?\n\nBasically, it's great to set max clients to 256, but if load hits 50 as a\nresult, the database is nearly useless ... if you set it to 256, and 254\nidle connections are going, load won't rise much, so it's safe, but if half\nof those processes are active, it hurts ...\n\nso, if it was set so that a .conf variable could be set so that max\nconnections == 256 *or* load > n to refuse connections, you'd have the best of\nboth worlds ...\n\nsendmail does it now, and, apparently, it's relatively portable across OSs ...\nokay, just looked at the code, and it's kinda painful, but it's in\nsrc/conf.c, as a 'getla' function ...\n\nIf nobody is working on something like this, does anyone but me feel that\nit has merit to make use of? I'll play with it if so ...\n\n\n\n",
"msg_date": "Mon, 23 Apr 2001 15:09:53 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "refusing connections based on load ..."
},
{
"msg_contents": "On Mon, Apr 23, 2001 at 03:09:53PM -0300, The Hermit Hacker wrote:\n> \n> Anyone thought of implementing this, similar to how sendmail does it? If\n> load > n, refuse connections?\n> ... \n> If nobody is working on something like this, does anyone but me feel that\n> it has merit to make use of? I'll play with it if so ...\n\nI agree that it would be useful. Even more useful would be soft load \nshedding, where once some load average level is exceeded the postmaster \ndelays a bit (proportionately) before accepting a connection. \n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Mon, 23 Apr 2001 12:11:05 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "Nathan Myers wrote:\n> On Mon, Apr 23, 2001 at 03:09:53PM -0300, The Hermit Hacker wrote:\n> >\n> > Anyone thought of implementing this, similar to how sendmail does it? If\n> > load > n, refuse connections?\n> > ...\n> > If nobody is working on something like this, does anyone but me feel that\n> > it has merit to make use of? I'll play with it if so ...\n>\n> I agree that it would be useful. Even more useful would be soft load\n> shedding, where once some load average level is exceeded the postmaster\n> delays a bit (proportionately) before accepting a connection.\n\n Or have the load check on AtXactStart, and delay new\n transactions until load is back below x, where x is\n configurable per user/group plus some per database scaling\n factor.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 23 Apr 2001 15:00:19 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "The soft load shedding idea is great.\n\nAlong the lines of \"lots of idle connections\" is the issue with the simple\nnumber of connections. I suspect in most real world apps you'll have\nlogic+web serving on a set of frontends talking to a single db backend\n(until clustering is really nailed).\n\nThe issue we hit is that if all the frontends have 250 maxclients, the\nnumber on the backend goes way up.\n\nThis falls in the connection pooling realm, and could be implemented with\nthe client lib presenting a server view, so apps would simply treat the\npooler as a local server which would allocate connections as needed from a\npool of persistent connections. This also has a benefit in cases (cgi) where\npersistent connections cannot be maintained properly. I suspect we've got a\n10% duty cycle on the persistent connections we set up... This problem is\npredicated on the idea that holding a connection is not negligible (i.e.,\n5,000 connections open is worse than 200) for the same loads. Not sure if\nthat's the case...\n\nAZ\n\n\n\n\n\"Nathan Myers\" <ncm@zembu.com> wrote in message\nnews:20010423121105.Y3797@store.zembu.com...\n> On Mon, Apr 23, 2001 at 03:09:53PM -0300, The Hermit Hacker wrote:\n> >\n> > Anyone thought of implementing this, similar to how sendmail does it?\nIf\n> > load > n, refuse connections?\n> > ...\n> > If nobody is working on something like this, does anyone but me feel\nthat\n> > it has merit to make use of? I'll play with it if so ...\n>\n> I agree that it would be useful. Even more useful would be soft load\n> shedding, where once some load average level is exceeded the postmaster\n> delays a bit (proportionately) before accepting a connection.\n>\n> Nathan Myers\n> ncm@zembu.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Mon, 23 Apr 2001 16:45:25 -0400",
"msg_from": "\"August Zajonc\" <augustz@bigfoot.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> sendmail does it now, and, apparently relatively portable across OSs ...\n\nsendmail expects to be root. It's unlikely (and very undesirable) that\npostgres will be installed with adequate privileges to read /dev/kmem,\nwhich is what it'd take to run the sendmail loadaverage code on most\nplatforms...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Apr 2001 19:51:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Mon, 23 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > sendmail does it now, and, apparently relatively portable across OSs ...\n>\n> sendmail expects to be root. It's unlikely (and very undesirable) that\n> postgres will be installed with adequate privileges to read /dev/kmem,\n> which is what it'd take to run the sendmail loadaverage code on most\n> platforms...\n\nActually, not totally accurate ... sendmail has a 'RunAs' option for those\nthat don't wish to have it run as root, and still works for the loadavg\nstuff, to the best of my knowledge (it's an option I haven't played with\nyet) ...\n\n\n",
"msg_date": "Mon, 23 Apr 2001 23:34:53 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "* The Hermit Hacker <scrappy@hub.org> [010423 21:38]:\n> On Mon, 23 Apr 2001, Tom Lane wrote:\n> \n> > The Hermit Hacker <scrappy@hub.org> writes:\n> > > sendmail does it now, and, apparently relatively portable across OSs ...\n> >\n> > sendmail expects to be root. It's unlikely (and very undesirable) that\n> > postgres will be installed with adequate privileges to read /dev/kmem,\n> > which is what it'd take to run the sendmail loadaverage code on most\n> > platforms...\n> \n> Actually, not totally accurate ... sendmail has a 'RunAs' option for those\n> that don't wish to have it run as root, and still works for the loadavg\n> stuff, to the best of my knowledge (its an option I haven't played with\n> yet) ...\nAnd 8.12.x will have some other options as well....\n\nLike the SUBMISSION prog only needs to be SGID, not SUID....\n\nLER\n\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 23 Apr 2001 21:42:36 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> On Mon, 23 Apr 2001, Tom Lane wrote:\n>> sendmail expects to be root.\n\n> Actually, not totally accurate ... sendmail has a 'RunAs' option for those\n> that don't wish to have it run as root,\n\nTrue, it doesn't *have* to be root, but the loadavg code still requires\nprivileges beyond those of mere mortals (as does listening on port 25,\nlast I checked).\n\nOn my HPUX box:\n\n$ ls -l /dev/kmem\ncrw-r----- 1 bin sys 3 0x000001 Jun 10 1996 /dev/kmem\n\nso postgres would have to run setuid bin or setgid sys to read the load\naverage. Either one is equivalent to giving an attacker the keys to the\nkingdom (overwrite a few key /usr/bin/ executables and wait for root to\nrun one...)\n\nOn Linux and BSD it seems to be more common to put /dev/kmem into a\nspecialized group \"kmem\", so running postgres as setgid kmem is not so\nimmediately dangerous. Still, do you think it's a good idea to let an\nattacker have open-ended rights to read your kernel memory? It wouldn't\ntake too much effort to sniff passwords, for example.\n\nBasically, if we do this then we are abandoning the notion that Postgres\nruns as an unprivileged user. I think that's a BAD idea, especially in\nan environment that's open enough that you might feel the need to\nload-throttle your users. By definition you do not trust them, eh?\n\nA less dangerous way of approaching it might be to have an option\nwhereby the postmaster invokes 'uptime' via system() every so often\n(maybe once a minute?) and throttles on the basis of the results.\nThe reaction time would be poorer, but security would be a whole lot\nbetter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Apr 2001 22:50:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [010423 21:45]:\n> * The Hermit Hacker <scrappy@hub.org> [010423 21:38]:\n> > On Mon, 23 Apr 2001, Tom Lane wrote:\n> > \n> > > The Hermit Hacker <scrappy@hub.org> writes:\n> > > > sendmail does it now, and, apparently relatively portable across OSs ...\n> > >\n> > > sendmail expects to be root. It's unlikely (and very undesirable) that\n> > > postgres will be installed with adequate privileges to read /dev/kmem,\n> > > which is what it'd take to run the sendmail loadaverage code on most\n> > > platforms...\n> > \n> > Actually, not totally accurate ... sendmail has a 'RunAs' option for those\n> > that don't wish to have it run as root, and still works for the loadavg\n> > stuff, to the best of my knowledge (its an option I haven't played with\n> > yet) ...\n> And 8.12.x will have some other options as well....\n> \n> Like the SUBMISSION prog only needs to be SGID, not SUID....\nActually, the sendmail DAEMON will still have ROOT privs, so it can\nread /dev/kmem.\n\nI suspect I don't have as much of an issue if we are sgid kmem...\n\nLER\n\n> \n> LER\n> \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 23 Apr 2001 21:50:56 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010423 21:54]:\n> The Hermit Hacker <scrappy@hub.org> writes:\n\n> On my HPUX box:\n> \n> $ ls -l /dev/kmem\n> crw-r----- 1 bin sys 3 0x000001 Jun 10 1996 /dev/kmem\n> \n> so postgres would have to run setuid bin or setgid sys to read the load\n> average. Either one is equivalent to giving an attacker the keys to the\n> kingdom (overwrite a few key /usr/bin/ executables and wait for root to\n> run one...)\nOn my UnixWare box it's 0440 sys.sys....\n\n> \n> On Linux and BSD it seems to be more common to put /dev/kmem into a\n> specialized group \"kmem\", so running postgres as setgid kmem is not so\n> immediately dangerous. Still, do you think it's a good idea to let an\n> attacker have open-ended rights to read your kernel memory? It wouldn't\n> take too much effort to sniff passwords, for example.\n> \n> Basically, if we do this then we are abandoning the notion that Postgres\n> runs as an unprivileged user. I think that's a BAD idea, especially in\n> an environment that's open enough that you might feel the need to\n> load-throttle your users. By definition you do not trust them, eh?\n> \n> A less dangerous way of approaching it might be to have an option\n> whereby the postmaster invokes 'uptime' via system() every so often\n> (maybe once a minute?) and throttles on the basis of the results.\n> The reaction time would be poorer, but security would be a whole lot\n> better.\nThen there are boxes like my UnixWare one where the load average is\nnot available AT ALL:\n\n$ uptime\n 10:05pm up 2 days, 3:16, 3 users\n$ \n\nIt's a threaded kernel, and SCO/Novell/whoever has removed all traces\nfrom userland of the load average. avenrun[] is still a symbol in the\nkernel, but...\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 23 Apr 2001 22:07:12 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "Tom Lane wrote:\n\n> A less dangerous way of approaching it might be to have an option\n> whereby the postmaster invokes 'uptime' via system() every so often\n> (maybe once a minute?) and throttles on the basis of the results.\n> The reaction time would be poorer, but security would be a whole lot\n> better.\n\nRather than do system('uptime') and incur the process start-up each time,\nyou could do fp = popen('vmstat 60', 'r'), then just read the fp.\n\nI believe vmstat is fairly standard. For those systems \nwhich don't support vmstat, it could be faked with a shell script.\n\nYou could write the specific code to handle each arch, but it's\na royal pain, because it's so different for many archs.\n\nAnother possibility could be to read from /proc for those systems\nthat support /proc. But I think this will be more variable than\nthe output from vmstat. Vmstat also has the added benefit of\nproviding other information.\n\nI agree with Tom about not wanting to open up /dev/kmem, \ndue to potential security problems.\n\nNeal\n",
"msg_date": "Mon, 23 Apr 2001 23:12:20 -0400",
"msg_from": "Neal Norwitz <neal@metaslash.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "> Rather than do system('uptime') and incur the process start-up each time,\n> you could do fp = popen('vmstat 60', 'r'), then just read the fp.\n\npopen doesn't incur a process start? Get real. But you're right, popen()\nis the right call not system(), because you need to read the stdout.\n\n> I believe vmstat is fairly standard.\n\nNot more so than uptime --- and the latter's output format is definitely\nless variable across platforms. The HPUX man page goes so far as to say\n\nWARNINGS\n Users of vmstat must not rely on the exact field widths and spacing of\n its output, as these will vary depending on the system, the release of\n HP-UX, and the data to be displayed.\n\nand that's just for *one* platform.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Apr 2001 23:21:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> On Linux and BSD it seems to be more common to put /dev/kmem into a\n> specialized group \"kmem\", so running postgres as setgid kmem is not so\n> immediately dangerous. Still, do you think it's a good idea to let an\n> attacker have open-ended rights to read your kernel memory? It wouldn't\n> take too much effort to sniff passwords, for example.\n\nOn Linux you can get the load average by doing `cat /proc/loadavg'.\nOn NetBSD you can get the load average via a sysctl. On those systems\nand others the uptime program is neither setuid nor setgid.\n\n> A less dangerous way of approaching it might be to have an option\n> whereby the postmaster invokes 'uptime' via system() every so often\n> (maybe once a minute?) and throttles on the basis of the results.\n> The reaction time would be poorer, but security would be a whole lot\n> better.\n\nThat is the way to do it on systems where obtaining the load average\nrequires special privileges. But do you really need the load average\nonce a minute? The load average printed by uptime is just as accurate\nas the load average obtained by examining the kernel.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 652: Life is a serious burden, which no thinking, humane person would\nwantonly inflict on someone else.\n\t\t-- Clarence Darrow\n",
"msg_date": "23 Apr 2001 20:24:51 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "\nother then a potential buffer overrun, what would be the problem with:\n\nopen(kmem)\nread values\nclose(kmem)\n\n?\n\nI would think it would be less taxing to the system then doing a system()\ncall, but still effectively as safe, no?\n\nOn Mon, 23 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > On Mon, 23 Apr 2001, Tom Lane wrote:\n> >> sendmail expects to be root.\n>\n> > Actually, not totally accurate ... sendmail has a 'RunAs' option for those\n> > that don't wish to have it run as root,\n>\n> True, it doesn't *have* to be root, but the loadavg code still requires\n> privileges beyond those of mere mortals (as does listening on port 25,\n> last I checked).\n>\n> On my HPUX box:\n>\n> $ ls -l /dev/kmem\n> crw-r----- 1 bin sys 3 0x000001 Jun 10 1996 /dev/kmem\n>\n> so postgres would have to run setuid bin or setgid sys to read the load\n> average. Either one is equivalent to giving an attacker the keys to the\n> kingdom (overwrite a few key /usr/bin/ executables and wait for root to\n> run one...)\n>\n> On Linux and BSD it seems to be more common to put /dev/kmem into a\n> specialized group \"kmem\", so running postgres as setgid kmem is not so\n> immediately dangerous. Still, do you think it's a good idea to let an\n> attacker have open-ended rights to read your kernel memory? It wouldn't\n> take too much effort to sniff passwords, for example.\n>\n> Basically, if we do this then we are abandoning the notion that Postgres\n> runs as an unprivileged user. I think that's a BAD idea, especially in\n> an environment that's open enough that you might feel the need to\n> load-throttle your users. By definition you do not trust them, eh?\n>\n> A less dangerous way of approaching it might be to have an option\n> whereby the postmaster invokes 'uptime' via system() every so often\n> (maybe once a minute?) 
and throttles on the basis of the results.\n> The reaction time would be poorer, but security would be a whole lot\n> better.\n>\n> \t\t\tregards, tom lane\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 24 Apr 2001 01:20:42 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On 23 Apr 2001, Ian Lance Taylor wrote:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n> > On Linux and BSD it seems to be more common to put /dev/kmem into a\n> > specialized group \"kmem\", so running postgres as setgid kmem is not so\n> > immediately dangerous. Still, do you think it's a good idea to let an\n> > attacker have open-ended rights to read your kernel memory? It wouldn't\n> > take too much effort to sniff passwords, for example.\n>\n> On Linux you can get the load average by doing `cat /proc/loadavg'.\n> On NetBSD you can get the load average via a sysctl. On those systems\n> and others the uptime program is neither setuid nor setgid.\n\nGood call ... FreeBSD has it also, and needs no special privileges ...\njust checked, and the sysctl command isn't setuid/setgid anything, so I'm\nguessing that using sysctl() to pull these values shouldn't create any\nsecurity issues on those systems that support it ?\n\n\n",
"msg_date": "Tue, 24 Apr 2001 01:23:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "At 03:09 PM 23-04-2001 -0300, you wrote:\n>\n>Anyone thought of implementing this, similar to how sendmail does it? If\n>load > n, refuse connections?\n>\n>Basically, if great to set max clients to 256, but if load hits 50 as a\n>result, the database is near to useless ... if you set it to 256, and 254\n>idle connections are going, load won't rise much, so is safe, but if half\n>of those processes are active, it hurts ...\n\nSorry, but I still don't understand the reasons why one would want to do\nthis. Could someone explain?\n\nI'm thinking that if I allow 256 clients, and my hardware/OS bogs down when\n60 users are doing lots of queries, I either accept that, or figure that my\nhardware/OS actually can't cope with that many clients and reduce the max\nclients or upgrade the hardware (or maybe do a little tweaking here and\nthere).\n\nWhy not be more deterministic about refusing connections and stick to\nreducing max clients? If not it seems like a case where you're promised\nsomething but when you need it, you can't have it. \n\nCheerio,\nLink.\n\n\n\n",
"msg_date": "Tue, 24 Apr 2001 12:39:29 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> > Rather than do system('uptime') and incur the process start-up each time,\n> > you could do fp = popen('vmstat 60', 'r'), then just read the fp.\n> \n> popen doesn't incur a process start? Get real. But you're right, popen()\n> is the right call not system(), because you need to read the stdout.\n\nTom,\n\nI think the point here is that the 'vmstat' process, once started,\nwill keep printing status output every 60 seconds (if invoked as\nabove) so you don't have to restart it every minute, just read the\npipe. \n\n> > I believe vmstat is fairly standard.\n> \n> Not more so than uptime --- and the latter's output format is definitely\n> less variable across platforms. The HPUX man page goes so far as to say\n> \n> WARNINGS\n> Users of vmstat must not rely on the exact field widths and spacing of\n> its output, as these will vary depending on the system, the release of\n> HP-UX, and the data to be displayed.\n> \n> and that's just for *one* platform.\n\nA very valid objection. I'm also dubious as to the utility of the\nwhole concept. What happens when Sendmail refuses a message based on\nload? It is requeued on the sending end to be tried later. What\nhappens when PG refuses a new client connection based on load? The\napplication stops working. 
Is this really better than having slow\nresponse time because the server is thrashing?\n\nI guess my point is that Sendmail is a store-and-forward situation\nwhere the mail system can \"catch up\" once the load returns to normal.\nWhereas, I would think, the majority of PG installations want a\nworking database, and whether it's refusing connections due to load or \nsimply bogged down isn't going to make a difference to users that\ncan't get their data.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n",
"msg_date": "24 Apr 2001 00:53:55 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "On Mon, Apr 23, 2001 at 10:50:42PM -0400, Tom Lane wrote:\n> Basically, if we do this then we are abandoning the notion that Postgres\n> runs as an unprivileged user. I think that's a BAD idea, especially in\n> an environment that's open enough that you might feel the need to\n> load-throttle your users. By definition you do not trust them, eh?\n\nNo. It's not a case of trust, but of providing an adaptive way\nto keep performance reasonable. The users may have no independent\nway to cooperate to limit load, but the DB can provide that.\n\n> A less dangerous way of approaching it might be to have an option\n> whereby the postmaster invokes 'uptime' via system() every so often\n> (maybe once a minute?) and throttles on the basis of the results.\n> The reaction time would be poorer, but security would be a whole lot\n> better.\n\nYes, this alternative looks much better to me. On Linux you have\nthe much more efficient alternative, /proc/loadavg. (I wouldn't\nuse system(), though.)\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Mon, 23 Apr 2001 22:00:39 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "On Tue, Apr 24, 2001 at 12:39:29PM +0800, Lincoln Yeoh wrote:\n> At 03:09 PM 23-04-2001 -0300, you wrote:\n> >Basically, if great to set max clients to 256, but if load hits 50 \n> >as a result, the database is near to useless ... if you set it to 256, \n> >and 254 idle connections are going, load won't rise much, so is safe, \n> >but if half of those processes are active, it hurts ...\n> \n> Sorry, but I still don't understand the reasons why one would want to do\n> this. Could someone explain?\n> \n> I'm thinking that if I allow 256 clients, and my hardware/OS bogs down\n> when 60 users are doing lots of queries, I either accept that, or\n> figure that my hardware/OS actually can't cope with that many clients\n> and reduce the max clients or upgrade the hardware (or maybe do a\n> little tweaking here and there).\n>\n> Why not be more deterministic about refusing connections and stick\n> to reducing max clients? If not it seems like a case where you're\n> promised something but when you need it, you can't have it.\n\nThe point is that \"number of connections\" is a very poor estimate of \nsystem load. Sometimes a connection is busy, sometimes it's not.\nSome connections are busy, some are not. The goal is maximum \nthroughput or some tradeoff of maximum throughput against latency. \nIf system throughput varies nonlinearly with load (as it almost \nalways does) then this happens at some particular load level.\n\nRefusing a connection and letting the client try again later can be \na way to maximize throughput by keeping the system at the optimum \npoint. (Waiting reduces delay. Yes, this is counterintuitive, but \nwhy do we queue up at ticket windows?)\n\nDelaying response, when under excessive load, to clients who already \nhave a connection -- even if they just got one -- can have a similar \neffect, but with finer granularity and with less complexity in the \nclients. \n\nNathan Myers\nncm@zembu.com\n\n",
"msg_date": "Mon, 23 Apr 2001 22:59:07 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: refusing connections based on load ..."
},
{
"msg_contents": "Tom Lane writes:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > sendmail does it now, and, apparently relatively portable across OSs ...\n>\n> sendmail expects to be root. It's unlikely (and very undesirable) that\n> postgres will be installed with adequate privileges to read /dev/kmem,\n> which is what it'd take to run the sendmail loadaverage code on most\n> platforms...\n\nThis program:\n\n#include <stdio.h>\n\nint main()\n{\n double la[3];\n\n if (getloadavg(la, 3) == -1)\n perror(\"getloadavg\");\n\n printf(\"%f %f %f\\n\", la[0], la[1], la[2]);\n\n return 0;\n}\n\nworks unprivileged on Linux 2.2 and FreeBSD 4.3. Rumour[*] also has it\nthat there is a way to do this on Solaris and HP-UX 9. So I think that\ncovers enough users to be worthwhile.\n\n[*] - Autoconf AC_FUNC_GETLOADAVG\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 24 Apr 2001 17:08:15 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "\nApparently so under Solaris ...\n\nhestia:/> uname -a\nSunOS hestia 5.7 Generic_106542-12 i86pc i386 i86pc\n\nC Library Functions getloadavg(3C)\n\nNAME\n getloadavg - get system load averages\n\nSYNOPSIS\n #include <sys/loadavg.h>\n\n int getloadavg(double loadavg[], int nelem);\n\nDESCRIPTION\n\nHow hard would it be to knock up code that, by default, ignores loadavg,\nbut if, say, set in postgresql.conf:\n\nloadavg\t= 4\n\nit will just refuse connections?\n\n\nOn Tue, 24 Apr 2001, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n>\n> > The Hermit Hacker <scrappy@hub.org> writes:\n> > > sendmail does it now, and, apparently relatively portable across OSs ...\n> >\n> > sendmail expects to be root. It's unlikely (and very undesirable) that\n> > postgres will be installed with adequate privileges to read /dev/kmem,\n> > which is what it'd take to run the sendmail loadaverage code on most\n> > platforms...\n>\n> This program:\n>\n> #include <stdio.h>\n>\n> int main()\n> {\n> double la[3];\n>\n> if (getloadavg(la, 3) == -1)\n> perror(\"getloadavg\");\n>\n> printf(\"%f %f %f\\n\", la[0], la[1], la[2]);\n>\n> return 0;\n> }\n>\n> works unprivileged on Linux 2.2 and FreeBSD 4.3. Rumour[*] also has it\n> that there is a way to do this on Solaris and HP-UX 9. So I think that\n> covers enough users to be worthwhile.\n>\n> [*] - Autoconf AC_FUNC_GETLOADAVG\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 24 Apr 2001 14:55:38 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "Doug McNaught writes:\n\n> A very valid objection. I'm also dubious as to the utility of the\n> whole concept. What happens when Sendmail refuses a message based on\n> load? It is requeued on the sending end to be tried later. What\n> happens when PG refuses a new client connection based on load? The\n> application stops working. Is this really better than having slow\n> response time because the server is thrashing?\n\nThe concept is just as dubious as the concept of rejecting clients based\non how many clients are already connected. There are some technical\nreasons for the latter, but it is still used as an administrative tool.\nThe rule is, if you don't like it, don't use it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 24 Apr 2001 21:11:46 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "At 10:59 PM 23-04-2001 -0700, Nathan Myers wrote:\n>On Tue, Apr 24, 2001 at 12:39:29PM +0800, Lincoln Yeoh wrote:\n>> Why not be more deterministic about refusing connections and stick\n>> to reducing max clients? If not it seems like a case where you're\n>> promised something but when you need it, you can't have it.\n>\n>The point is that \"number of connections\" is a very poor estimate of \n>system load. Sometimes a connection is busy, sometimes it's not.\n\nActually I use number of connections to estimate how much RAM I will need,\nnot for estimating system load.\n\nBecause once the system runs out of RAM, performance drops a lot. If I can\nprevent the system running out of RAM, it can usually take whatever I throw\nat it at near the max throughput. \n\nFor my app say the max is X hits per second with a few concurrent\ntransactions. When I boost it the number of concurrent transactions (e.g.\n25 on a 128MB machine, load~13) it goes down to maybe 0.95X hits per\nsecond[1]. This is acceptable to me.\n\nBut once the machine starts swapping, things bog down drastically and some\nconnections get Server Error.\n\n>Refusing a connection and letting the client try again later can be \n>a way to maximize throughput by keeping the system at the optimum \n>point. (Waiting reduces delay. Yes, this is counterintuitive, but \n>why do we queue up at ticket windows?)\n>\n>Delaying response, when under excessive load, to clients who already \n>have a connection -- even if they just got one -- can have a similar \n>effect, but with finer granularity and with less complexity in the \n>clients. \n\nWith my web apps, refusing connection based on load doesn't help at all,\nthey are fastcgi processes and are already holding database connections\nopen, before even getting a web request ( might as well open the db\nconnection before the client talks to you).\n\nFor other apps maybe refusing connection could help. But are these cases in\nthe majority? 
In say a bank teller environment, the database connections\nare probably already open, and could remain open the whole day.\n\nDelaying transactions based on load is easier to understand for me.\n\nCheerio,\nLink.\n\n[1] This is a guesstimate: the hits per second drops gradually during the\nbenchmark.\nThe speed for a low concurrent test run AFTER the benchmark had a slower\nhits per second than the benchmark figures.\n\nThis is probably because there was a lot of selecting and updating of the\nsame row, and Postgresql needs a vacuum before the speed goes back up.\nSeems like the dead rows get in the way of the index or something - speed\ndoesn't slow down as much for lots of inserts and selects.\n\n\n\n",
"msg_date": "Wed, 25 Apr 2001 10:05:54 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Re: refusing connections based on load ..."
},
{
"msg_contents": "On Wed, 25 Apr 2001, Lincoln Yeoh wrote:\n\n> At 10:59 PM 23-04-2001 -0700, Nathan Myers wrote:\n> >On Tue, Apr 24, 2001 at 12:39:29PM +0800, Lincoln Yeoh wrote:\n> >> Why not be more deterministic about refusing connections and stick\n> >> to reducing max clients? If not it seems like a case where you're\n> >> promised something but when you need it, you can't have it.\n> >\n> >The point is that \"number of connections\" is a very poor estimate of\n> >system load. Sometimes a connection is busy, sometimes it's not.\n>\n> Actually I use number of connections to estimate how much RAM I will need,\n> not for estimating system load.\n>\n> Because once the system runs out of RAM, performance drops a lot. If I can\n> prevent the system running out of RAM, it can usually take whatever I throw\n> at it at near the max throughput.\n\nI have a Dual-866, 1gig of RAM and strip'd file systems ... this past\nweek, I've hit many times where CPU usage is 100%, RAM is 500Meg free and\ndisks are pretty much sitting idle ...\n\nIt turns out, in this case, that vacuum was in order (i vacuum 12x per day\nnow instead of 6), so that now it will run with 300 simultaneous\nconnections, but with a loadavg of 68 or so, 300 connections are just\nbuilding on each other to slow the rest down :(\n\n\n",
"msg_date": "Tue, 24 Apr 2001 23:28:17 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: Re: refusing connections based on load ..."
},
{
"msg_contents": "At 11:28 PM 24-04-2001 -0300, The Hermit Hacker wrote:\n>\n>I have a Dual-866, 1gig of RAM and strip'd file systems ... this past\n>week, I've hit many times where CPU usage is 100%, RAM is 500Meg free and\n>disks are pretty much sitting idle ...\n>\n>It turns out, in this case, that vacuum was in order (i vacuum 12x per day\n>now instead of 6), so that now it will run with 300 simultaneous\n>connections, but with a loadavg of 68 or so, 300 connections are just\n>building on each other to slow the rest down :(\n>\n\nHmm then maybe we should refuse connections based on \"need to vacuum\"... :).\n\nSeriously though does the _total_ work throughput go down significantly\nwhen you have high loads? \n\nI got a load 13 with 25 concurrent connections (not much), and yeah things\ntook longer but the hits per second wasn't very much different from the\npeak possible with fewer connections. Basically in my case almost the same\namount of work is being done per second.\n\nSo maybe higher loads might be fine on your more powerful system?\n\nCheerio,\nLink.\n\n",
"msg_date": "Wed, 25 Apr 2001 13:15:53 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: refusing connections based on load ..."
},
{
"msg_contents": "On Tue, Apr 24, 2001 at 11:28:17PM -0300, The Hermit Hacker wrote:\n> I have a Dual-866, 1gig of RAM and strip'd file systems ... this past\n> week, I've hit many times where CPU usage is 100%, RAM is 500Meg free and\n> disks are pretty much sitting idle ...\n\nAssuming \"strip'd\" above means \"striped\", it strikes me that you\nmight be much better off operating the drives independently, with\nthe various tables, indexes, and logs scattered each entirely on one \ndrive. That way the heads can move around independently reading and \nwriting N blocks, rather than all moving in concert reading or writing \nonly one block at a time. (Striping the WAL file on a couple of raw \ndevices might be a good idea along with the above. Can we do that?)\n\nBut of course speculation is much less useful than trying it. Some \nmeasurements before and after would be really, really interesting\nto many of us.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Tue, 24 Apr 2001 22:36:16 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "On Tue, 24 Apr 2001, Nathan Myers wrote:\n\n> On Tue, Apr 24, 2001 at 11:28:17PM -0300, The Hermit Hacker wrote:\n> > I have a Dual-866, 1gig of RAM and strip'd file systems ... this past\n> > week, I've hit many times where CPU usage is 100%, RAM is 500Meg free and\n> > disks are pretty much sitting idle ...\n>\n> Assuming \"strip'd\" above means \"striped\", it strikes me that you\n> might be much better off operating the drives independently, with\n> the various tables, indexes, and logs scattered each entirely on one\n> drive.\n\nhave you ever tried to maintain a database doing this? PgSQL is\ndefinitely not designed for this sort of setup, I had symlinks goign\neverywhere,a nd with the new numbering schema, this is even more difficult\nto try and do :)\n\n\n",
"msg_date": "Wed, 25 Apr 2001 09:41:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: refusing connections based on load ..."
},
{
"msg_contents": "The whole argument over how to get load averages seems rather silly,\nand it's moot if the idea of using the load information to alter\nPG behavior is rejected.\n\nI personally have no use for it, but I don't think it's a bad idea in\ngeneral. Particularly given future redundancy/load sharing features.\nOn the other hand, I think almost all of this stuff can and should be\ndone outside of postmaster.\n\nHere is the 0-change version, for rejecting connections, and for\noperating systems that have built-in firewall capability, such as\nFreeBSD: a standalone daemon that adds a reject rule for the Postgres\nport when the load gets too high, and drops that rule when the load\ngoes back down.\n\nNow here's the small-change version: add support to Postgres for a SET\ncommand or similar way to say \"stop accepting connections\", or \"set\naccept/transaction delay to X\". Write a standalone daemon which\nmonitors the load and issues commands to Postgres as necessary. That\ndaemon may need extra privileges, but it is small, auditable, and\ndoesn't talk to the outside world. It's probably better to include\nin the Postgres protocol support for accepting (TCP-wise) a connection,\nthen closing it with an error message, because this daemon needs to\nbe able to connect to tell it to let users in again. It's probably as\nsimple as always letting the superuser in.\n\nThe latter is nicer in a number of ways. Persistent connections were\nalready mentioned - rejecting new connections may not be a good enough\nsolution there. 
With a fancier approach, you could even hang up on\nsome existing connections with an appropriate message, or just NOTICE\nthem that you're slowing them down or you'd like them to go away\nvoluntarily.\n\n From a web-hosting standpoint, someday it would be nifty to have\nper-user-per-connection limits, so I could put up a couple of big\nPG servers and only allow user X one connection, which can't use\nmore than Y amount of RAM, and passes a scheduling hint to the OS\nso it shares CPU time with other economy-class users, which can\nbe throttled down to 25% of what ultra-mega-hosting users get.\nSimple load shedding is a baby step in the right direction. If\nnothing else, it will cast a spotlight on some of the problem\nareas.\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\nchris@netmonger.net info@netmonger.net http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n",
"msg_date": "Wed, 25 Apr 2001 11:23:04 -0400",
"msg_from": "Christopher Masto <chris+pg-general@netmonger.net>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
{
"msg_contents": "Jan Wieck and I talked about this for awhile yesterday, and we came to\nthe conclusion that load-average-based throttling is a Bad Idea. Quite\naside from the portability and permissions issues that may arise in\ngetting the numbers, the available numbers are the wrong thing:\n\n(1) On most Unix systems, the finest-grain load average that you can get\nis a 1-minute average. This will lose both on the ramp up (by the time\nyou realize you overdid it, you've let *way* too many xacts through the\nstarting gate) and on the ramp down (you'll hold off xacts for circa a\nminute after the crunch is past).\n\n(2) You can also get shorter-time-frame CPU usage numbers (at least,\nmost versions of top(1) seem to display such things) but CPU load is\nreally not very helpful for measuring how badly the system is thrashing.\nPostgres tends to beat your disks into the ground long before it pegs\nthe CPU. Too bad there's no \"disk usage\" numbers.\n\nHowever, there is another possibility that would be simple to implement\nand perfectly portable: allow the dbadmin to impose a limit on the\nnumber of simultaneous concurrent transactions. (Setting this equal to\nthe max allowed number of backends would turn off the limit.) That\nway, you could have umpteen open connections, but you could limit how\nmany of them were actually *doing* something at any given instant.\nIf more than N try to start transactions at the same time, the later\nones have to wait for the earlier ones to finish before they can start.\nThis'd be trivial to do with a semaphore initialized to N --- P() it\nin StartTransaction and V() it in Commit/AbortTransaction.\n\nA conncurrent-xacts limit isn't perfect of course, but I think it'd\nbe pretty good, and certainly better than anything based on the\navailable load-average numbers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 13:24:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
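Tom's semaphore-gated transaction limit can be sketched as follows. This is a minimal illustration in Python, not PostgreSQL's actual code: `start_transaction`, `end_transaction`, and the limit value are hypothetical stand-ins for his P()-in-StartTransaction / V()-in-Commit scheme.

```python
import threading

MAX_CONCURRENT_XACTS = 2  # hypothetical dbadmin setting

# One slot per allowed concurrent transaction; initialized to N.
xact_slots = threading.Semaphore(MAX_CONCURRENT_XACTS)

def start_transaction(block=True):
    """P() the semaphore: returns True once a slot is free.
    With block=False, returns False instead of waiting."""
    return xact_slots.acquire(blocking=block)

def end_transaction():
    """V() the semaphore back at commit/abort time."""
    xact_slots.release()
```

With a blocking acquire, the (N+1)-th transaction simply waits until an earlier one commits or aborts, which is the behavior Tom describes; the non-blocking mode above only exists to make the limit observable.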
{
"msg_contents": "Tom Lane writes:\n\n> A conncurrent-xacts limit isn't perfect of course, but I think it'd\n> be pretty good, and certainly better than anything based on the\n> available load-average numbers.\n\nThe concurrent transaction limit would allow you to control the absolute\nload of the PostgreSQL server, but we can already do that and it's not\nwhat we're after here. The idea behind the load average based approach is\nto make the postmaster respect the situation of the overall system.\nAdditionally, the concurrent transaction limit would only be useful on\nsetups that have a lot of idle transactions. Those setups exist, but not\neverywhere.\n\nTo me, both of these approaches are in the \"if you don't like it, don't\nuse it\" category.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 25 Apr 2001 20:28:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Wed, 25 Apr 2001, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n>\n> > A conncurrent-xacts limit isn't perfect of course, but I think it'd\n> > be pretty good, and certainly better than anything based on the\n> > available load-average numbers.\n>\n> The concurrent transaction limit would allow you to control the absolute\n> load of the PostgreSQL server, but we can already do that and it's not\n> what we're after here. The idea behind the load average based approach is\n> to make the postmaster respect the situation of the overall system.\n> Additionally, the concurrent transaction limit would only be useful on\n> setups that have a lot of idle transactions. Those setups exist, but not\n> everywhere.\n>\n> To me, both of these approaches are in the \"if you don't like it, don't\n> use it\" category.\n\nAgreed ... by default, the loadavg method could be set to zero, to ignore\n... I don't care if I'm off by 1min before I catch the increase, the fact\nis that I have caught it, and prevent any new ones coming in until it\ndrops off again ...\n\nMake it two variables:\n\ntransla\nrejectla\n\nif transla is hit, restrict on transactions, letting others connect, but\nputting them on hold while the la drops again ... if it goes above\nrejectla, refuse new connections altogether ...\n\nso now I can set something like:\n\ntransla = 8\nrejectla = 16\n\nbut if loadavg goes above 16, I want to get rid of what is causing the\nload to rise *before* adding new variables to the mix that will cause it\nto rise higher ...\n\nand your arg about permissions (Tom's, not Peter's) is moot in at least 3\nof the major systems (Linux, *BSD and Solaris) as there is a getloadavg()\nfunction in all three for doing this ...\n\n\n",
"msg_date": "Wed, 25 Apr 2001 16:03:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ... "
},
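Marc's two-variable policy is easy to state as a pure function. A sketch only: the threshold names mirror his proposed `transla`/`rejectla` settings, and `os.getloadavg()` is Python's stdlib wrapper for the same libc call he mentions (available on the BSDs, Linux, and Solaris).

```python
import os

TRANS_LA = 8.0    # above this, put new transactions on hold ("transla")
REJECT_LA = 16.0  # above this, refuse new connections outright ("rejectla")

def admission_decision(loadavg_1min=None):
    """Decide what to do with an incoming connection at a given 1-min load."""
    if loadavg_1min is None:
        loadavg_1min = os.getloadavg()[0]  # raises OSError where unsupported
    if loadavg_1min >= REJECT_LA:
        return "reject"  # refuse the connection altogether
    if loadavg_1min >= TRANS_LA:
        return "hold"    # let it connect, but park the transaction
    return "accept"
```

Keeping the decision a pure function of the load reading makes the two-threshold behavior trivial to test independently of any real system load.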
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The idea behind the load average based approach is\n> to make the postmaster respect the situation of the overall system.\n\nThat'd be great if we could do it, but as I pointed out, the available\nstats do not allow us to do it very well.\n\nI think this will create a lot of portability headaches for no real\ngain. If it were something we could just do and forget, I would not\nobject --- but the porting issues will create a LOT more work than\nI think this can possibly be worth. The fact that the work is\ndistributed and will mostly be incurred by people other than the ones\nadvocating the change doesn't improve matters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 15:25:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Wed, Apr 25, 2001 at 09:41:57AM -0300, The Hermit Hacker wrote:\n> On Tue, 24 Apr 2001, Nathan Myers wrote:\n> \n> > On Tue, Apr 24, 2001 at 11:28:17PM -0300, The Hermit Hacker wrote:\n> > > I have a Dual-866, 1gig of RAM and strip'd file systems ... this past\n> > > week, I've hit many times where CPU usage is 100%, RAM is 500Meg free\n> > > and disks are pretty much sitting idle ...\n> >\n> > Assuming \"strip'd\" above means \"striped\", it strikes me that you\n> > might be much better off operating the drives independently, with\n> > the various tables, indexes, and logs scattered each entirely on one\n> > drive.\n> \n> have you ever tried to maintain a database doing this? PgSQL is\n> definitely not designed for this sort of setup, I had symlinks going\n> everywhere, and with the new numbering schema, this is even more \n> difficult to try and do :)\n\nClearly you need to build a tool to organize it. It would help a lot if \nPG itself could provide some basic assistance, such as calling a stored\nprocedure to generate the pathname of the file.\n\nHas there been any discussion of anything like that?\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 25 Apr 2001 13:25:45 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "tables/indexes/logs on different volumes"
},
{
"msg_contents": "On Wed, 25 Apr 2001, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > The idea behind the load average based approach is\n> > to make the postmaster respect the situation of the overall system.\n>\n> That'd be great if we could do it, but as I pointed out, the available\n> stats do not allow us to do it very well.\n>\n> I think this will create a lot of portability headaches for no real\n> gain. If it were something we could just do and forget, I would not\n> object --- but the porting issues will create a LOT more work than\n> I think this can possibly be worth. The fact that the work is\n> distributed and will mostly be incurred by people other than the ones\n> advocating the change doesn't improve matters.\n\nAs I mentioned, getloadavg() appears to be support on 3 of the primary\nplatforms we work with, so I'd say for most installations, portability\nissues aren't an issue ...\n\nAutoconf has a 'LOADAVG' check already, so what is so problematic about\nusing that to enabled/disable that feature?\n\nIf ( loadavg available on OS && enabled in postgresql.conf )\n operate on it\n} else ( loadavg not available on OS && enabled )\n noop with a WARN level error that its not available\n}\n\n\n\n",
"msg_date": "Wed, 25 Apr 2001 17:51:42 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "The Hermit Hacker wrote:\n> Agreed ... by default, the loadavg method could be set to zero, to ignore\n> ... I don't care if I'm off by 1min before I catch the increase, the fact\n> is that I have caught it, and prevent any new ones coming in until it\n> drops off again ...\n>\n> Make it two variables:\n>\n> transla\n> rejectla\n>\n> if transla is hit, restrict on transactions, letting others connect, but\n> putting them on hold while the la drops again ... if it goes above\n> rejectla, refuse new connections altogether ...\n>\n> so now I can set something like:\n>\n> transla = 8\n> rejectla = 16\n>\n> but if loadavg goes above 16, I want to get rid of what is causing the\n> load to rise *before* adding new variables to the mix that will cause it\n> to rise higher ...\n>\n> and your arg about permissions (Tom's, not Peter's) is moot in at least 3\n> of the major systems (Linux, *BSD and Solaris) as there is a getloadavg()\n> function in all three for doing this ...\n\n I've just recompiled my php4 module to get sysvsem support\n and limited the number of concurrent DB transactions on the\n application level. The (not yet finished) TPC-C\n implementation I'm working on scales about 3-4 times better\n now. That's an improvement!\n\n This proves that limiting the number of concurrently running\n transactions is sufficient to keep the system load down.\n Combined these two look as follows:\n\n - We start with a fairly high setting in the semaphore.\n\n - When the system load exceeds the high-watermark, we don't\n increment the semaphore back after transaction end (need\n to ensure that at least a small minimum of xacts is left,\n but that's easy).\n\n - When the system goes back to normal load level, we slowly\n increase the semaphore again.\n\n This way we might have some peek pushing the system against\n the wall for a moment. 
If that doesn't go away quickly, we\n just delay users (who see some delay anyway actually).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 25 Apr 2001 16:03:31 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ..."
},
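Jan's watermark scheme (shrink the transaction ceiling quickly when load is high, grow it back slowly once things calm down) can be sketched as a single controller step. All the constants and names here are illustrative, not anything that exists in PostgreSQL:

```python
HIGH_WATERMARK = 8.0   # hypothetical loadavg above which we stop V()-ing back
LOW_WATERMARK = 4.0    # hypothetical loadavg below which we re-open the gate
MIN_XACTS = 2          # never throttle below this floor
MAX_XACTS = 32         # the normal, fairly high semaphore setting

def adjust_limit(current_limit, loadavg):
    """One step of the feedback loop: called after each transaction ends."""
    if loadavg > HIGH_WATERMARK and current_limit > MIN_XACTS:
        return current_limit - 1  # don't give the slot back: ceiling shrinks
    if loadavg < LOW_WATERMARK and current_limit < MAX_XACTS:
        return current_limit + 1  # slowly increase the semaphore again
    return current_limit
```

As Tom notes in his reply, the kernel's own averaging supplies all the smoothing this loop needs; the step size of 1 per transaction end is what makes the ramp-up "slow".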
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> This proves that limiting the number of concurrently running\n> transactions is sufficient to keep the system load down.\n> Combined these two look as follows:\n\n> - We start with a fairly high setting in the semaphore.\n\n> - When the system load exceeds the high-watermark, we don't\n> increment the semaphore back after transaction end (need\n> to ensure that at least a small minimum of xacts is left,\n> but that's easy).\n\n> - When the system goes back to normal load level, we slowly\n> increase the semaphore again.\n\nThis is a nice way of dealing with the slow reaction time of the\nload average --- you don't let it directly drive the decision about\nwhen to start a new transaction, but instead let it tweak the ceiling\non number of concurrent xacts. I like it.\n\nYou probably don't need to have any additional \"slowness\" in the loop\nother than the inherent averaging in the kernel's load average.\n\nI'm still concerned about portability issues, and about whether load\naverage is really the right number to be looking at, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 17:12:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Autoconf has a 'LOADAVG' check already, so what is so problematic about\n> using that to enabled/disable that feature?\n\nBecause it's tied to a GNU getloadavg.c implementation, which we'd have\nlicense problems with using.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 17:14:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Wed, 25 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Autoconf has a 'LOADAVG' check already, so what is so problematic about\n> > using that to enabled/disable that feature?\n>\n> Because it's tied to a GNU getloadavg.c implementation, which we'd have\n> license problems with using.\n\nIt's part of the standard C library in FreeBSD. Any other platforms\nhave it built in?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 25 Apr 2001 18:23:39 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Wed, 25 Apr 2001, Vince Vielhaber wrote:\n\n> On Wed, 25 Apr 2001, Tom Lane wrote:\n>\n> > The Hermit Hacker <scrappy@hub.org> writes:\n> > > Autoconf has a 'LOADAVG' check already, so what is so problematic about\n> > > using that to enabled/disable that feature?\n> >\n> > Because it's tied to a GNU getloadavg.c implementation, which we'd have\n> > license problems with using.\n>\n> It's part of the standard C library in FreeBSD. Any other platforms\n> have it built in?\n\nAs has been mentioned, Solaris and Linux also have it ...\n\n\n",
"msg_date": "Wed, 25 Apr 2001 21:58:08 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Wed, 25 Apr 2001, Tom Lane wrote:\n\n> I'm still concerned about portability issues, and about whether load\n> average is really the right number to be looking at, however.\n\nIts worked for Sendmail for how many years now, and the code is there to\nuse, with all \"portability issues resolved for every platform they use ...\nand a growing number of platforms appear to have the mechanisms already\nbuilt into their C libraries ...\n\n\n\n",
"msg_date": "Wed, 25 Apr 2001 21:59:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Wed, 25 Apr 2001, The Hermit Hacker wrote:\n\n> On Wed, 25 Apr 2001, Vince Vielhaber wrote:\n>\n> > On Wed, 25 Apr 2001, Tom Lane wrote:\n> >\n> > > The Hermit Hacker <scrappy@hub.org> writes:\n> > > > Autoconf has a 'LOADAVG' check already, so what is so problematic about\n> > > > using that to enabled/disable that feature?\n> > >\n> > > Because it's tied to a GNU getloadavg.c implementation, which we'd have\n> > > license problems with using.\n> >\n> > It's part of the standard C library in FreeBSD. Any other platforms\n> > have it built in?\n>\n> As has been mentioned, Solaris and Linux also have it ...\n\nBut what's in FreeBSD's standard library isn't GNU.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 26 Apr 2001 05:37:40 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Thu, 26 Apr 2001, Vince Vielhaber wrote:\n\n> On Wed, 25 Apr 2001, The Hermit Hacker wrote:\n>\n> > On Wed, 25 Apr 2001, Vince Vielhaber wrote:\n> >\n> > > On Wed, 25 Apr 2001, Tom Lane wrote:\n> > >\n> > > > The Hermit Hacker <scrappy@hub.org> writes:\n> > > > > Autoconf has a 'LOADAVG' check already, so what is so problematic about\n> > > > > using that to enabled/disable that feature?\n> > > >\n> > > > Because it's tied to a GNU getloadavg.c implementation, which we'd have\n> > > > license problems with using.\n> > >\n> > > It's part of the standard C library in FreeBSD. Any other platforms\n> > > have it built in?\n> >\n> > As has been mentioned, Solaris and Linux also have it ...\n>\n> But what's in FreeBSD's standard library isn't GNU.\n\nWouldn't matter if it was, its part of the OSs standard library ... unless\nyou mean to pull it in and use it with the distribution, which I think\nmight be a bad idea ... if we pull anything in, sendmail's would be best\n... FreeBSD's will have had anything required for non-FreeBSD systems\nyanked out, if it was ever there, while sendmail's already has all the\n'hooks' in it ...\n\n\n",
"msg_date": "Thu, 26 Apr 2001 08:35:13 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "On Thu, 26 Apr 2001, The Hermit Hacker wrote:\n\n> On Thu, 26 Apr 2001, Vince Vielhaber wrote:\n>\n> > On Wed, 25 Apr 2001, The Hermit Hacker wrote:\n> >\n> > > On Wed, 25 Apr 2001, Vince Vielhaber wrote:\n> > >\n> > > > On Wed, 25 Apr 2001, Tom Lane wrote:\n> > > >\n> > > > > The Hermit Hacker <scrappy@hub.org> writes:\n> > > > > > Autoconf has a 'LOADAVG' check already, so what is so problematic about\n> > > > > > using that to enabled/disable that feature?\n> > > > >\n> > > > > Because it's tied to a GNU getloadavg.c implementation, which we'd have\n> > > > > license problems with using.\n> > > >\n> > > > It's part of the standard C library in FreeBSD. Any other platforms\n> > > > have it built in?\n> > >\n> > > As has been mentioned, Solaris and Linux also have it ...\n> >\n> > But what's in FreeBSD's standard library isn't GNU.\n>\n> Wouldn't matter if it was, its part of the OSs standard library ... unless\n> you mean to pull it in and use it with the distribution, which I think\n> might be a bad idea ... if we pull anything in, sendmail's would be best\n> ... FreeBSD's will have had anything required for non-FreeBSD systems\n> yanked out, if it was ever there, while sendmail's already has all the\n> 'hooks' in it ...\n\nThat wasn't what I was saying at all.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 26 Apr 2001 07:55:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Wed, 25 Apr 2001, The Hermit Hacker wrote:\n>> On Wed, 25 Apr 2001, Vince Vielhaber wrote:\n>> \n> On Wed, 25 Apr 2001, Tom Lane wrote:\n> Because it's tied to a GNU getloadavg.c implementation, which we'd have\n> license problems with using.\n> \n> It's part of the standard C library in FreeBSD. Any other platforms\n> have it built in?\n>> \n>> As has been mentioned, Solaris and Linux also have it ...\n\n> But what's in FreeBSD's standard library isn't GNU.\n\nObviously I confused some people. What Autoconf's LOADAVG macro\nactually does is\n (1) check to see if system has a getloadavg() library routine, and if\n so, set up to use that. Otherwise\n (2) apply a bunch of ad-hoc checks to find out whether a GNU-specific\n getloadavg module can be used. That module isn't actually\n included with autoconf; I imagine the one they have in mind is\n the one in GNU make.\n\nTherefore, Autoconf's macro is useless to us as a means of configuring\nload average support, because we won't be using GNU make's getloadavg\nmodule.\n\nThe Sendmail loadavg code should be more friendly from a licensing\nstandpoint, but IT HAS PRIVILEGE PROBLEMS. Reading /dev/kmem isn't\nsomething that we should expect to be able to do in Postgres.\n\nIn short, I haven't seen any evidence that we have a portable solution\navailable. Please don't reply (yet again) \"It works on $MYSYSTEM,\ntherefore there's no problem.\" If you want to implement this feature\nthen you need to take responsibility for making it work everywhere.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2001 10:26:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refusing connections based on load ... "
}
]
[
{
"msg_contents": "\nBruce Momjian wrote:\n> > A different approach that's been discussed on pghackers is to make use\n> > of btree indexes for columns that have such indexes: we could scan the\n> > indexes to visit all the column values in sorted order. I have rejected\n> > that approach because (a) it doesn't help for columns without a suitable\n> > index; (b) our indexes don't distinguish deleted and live tuples,\n> > which would skew the statistics --- in particular, we couldn't tell a\n> > frequently-updated single tuple from a commonly repeated value; (c)\n> > scanning multiple indexes would likely require more total I/O than just\n> > grabbing sample tuples from the main table --- especially if we have to\n> > do that anyway to handle columns without indexes.\n>\n> Remember one idea is for index scans to automatically update the expired\n> flag in the index bitfields when they check the heap tuple.\n\n And we should really do that. While playing around with my\n (for 7.2 to be) access statistics stuff I found that when\n running pg_bench, a couple of thousand index scans cause\n millions and millions of buffer fetches, because that\n pg_bench updates one and the same row over and over again and\n it has a PKEY.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 23 Apr 2001 13:15:02 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: planner statistics in 7.2"
}
]
[
{
"msg_contents": "Hi,\n\n I just got trapped by one of my own features in the\n referential integrity area.\n\n The problem is, that the trigger run on the FK row at UPDATE\n allways checks and locks the referenced PK, even if the FK\n attributes didn't change. That's because if there'd be an ON\n DELETE SET DEFAULTS and someone deletes a PK consisting of\n all the FK's column defaults, we wouldn't notice and let it\n pass through.\n\n The bad thing on it is now, if I have one XACT that locks the\n PK row first, then locks the FK row, and I have another XACT\n that just want's to update another field in the FK row, that\n second XACT must lock the PK row in the first place or this\n entire thing leads to deadlocks. If one table has alot of FK\n constraints, this causes not really wanted lock contention.\n\n The clean way to get out of it would be to skip non-FK-change\n events in the UPDATE trigger and do alot of extra work in the\n SET DEFAULTS trigger. Actually it'd be to check if we're\n actually deleting the FK defaults values from the PK table,\n and if so we'd have to check if references exist by doing\n another NO ACTION kinda test.\n\n Any other smart idea?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 23 Apr 2001 14:55:01 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "RI oddness"
},
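The deadlock Jan describes is the classic lock-ordering cycle: one transaction takes the PK row then the FK row, the other takes them in the opposite order. The standard avoidance is for every transaction to agree on one global acquisition order, sketched below with plain thread locks standing in for row locks (illustrative only; PostgreSQL's row-level locking does not expose anything like this at the SQL level):

```python
import threading

def lock_order(*row_locks):
    """Pick one global order (here: by object id) that every
    transaction uses, so no two can ever wait on each other in a cycle."""
    return sorted(row_locks, key=id)

def locked_update(*row_locks):
    """Acquire all row locks in the agreed order, update, then release."""
    ordered = lock_order(*row_locks)
    for lk in ordered:
        lk.acquire()
    try:
        pass  # perform the row updates here
    finally:
        for lk in reversed(ordered):
            lk.release()
```

In the thread's terms: if the RI trigger forces every updating transaction to lock the PK row first, the ordering is consistent and safe but contended; skipping the PK lock when the FK columns didn't change removes the contention without reintroducing a cycle.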
{
"msg_contents": "hi, there!\n\nOn Mon, 23 Apr 2001, Jan Wieck wrote:\n\n> I just got trapped by one of my own features in the\n> referential integrity area.\n> \n> The problem is, that the trigger run on the FK row at UPDATE\n> allways checks and locks the referenced PK, even if the FK\n> attributes didn't change. That's because if there'd be an ON\n> DELETE SET DEFAULTS and someone deletes a PK consisting of\n> all the FK's column defaults, we wouldn't notice and let it\n> pass through.\n> \n> The bad thing on it is now, if I have one XACT that locks the\n> PK row first, then locks the FK row, and I have another XACT\n> that just want's to update another field in the FK row, that\n> second XACT must lock the PK row in the first place or this\n> entire thing leads to deadlocks. If one table has alot of FK\n> constraints, this causes not really wanted lock contention.\n> \n> The clean way to get out of it would be to skip non-FK-change\n> events in the UPDATE trigger and do alot of extra work in the\n> SET DEFAULTS trigger. Actually it'd be to check if we're\n> actually deleting the FK defaults values from the PK table,\n> and if so we'd have to check if references exist by doing\n> another NO ACTION kinda test.\n> \n> Any other smart idea?\n\nread-write locks?\n\n/fjoe\n\n",
"msg_date": "Tue, 24 Apr 2001 19:34:08 +0700 (NSS)",
"msg_from": "Max Khon <fjoe@iclub.nsu.ru>",
"msg_from_op": false,
"msg_subject": "Re: RI oddness"
},
{
"msg_contents": "Max Khon wrote:\n> hi, there!\n>\n> On Mon, 23 Apr 2001, Jan Wieck wrote:\n>\n> > I just got trapped by one of my own features in the\n> > referential integrity area.\n> >\n> > The problem is, that the trigger run on the FK row at UPDATE\n> > allways checks and locks the referenced PK, even if the FK\n> > attributes didn't change. That's because if there'd be an ON\n> > DELETE SET DEFAULTS and someone deletes a PK consisting of\n> > all the FK's column defaults, we wouldn't notice and let it\n> > pass through.\n> >\n> > The bad thing on it is now, if I have one XACT that locks the\n> > PK row first, then locks the FK row, and I have another XACT\n> > that just want's to update another field in the FK row, that\n> > second XACT must lock the PK row in the first place or this\n> > entire thing leads to deadlocks. If one table has alot of FK\n> > constraints, this causes not really wanted lock contention.\n> >\n> > The clean way to get out of it would be to skip non-FK-change\n> > events in the UPDATE trigger and do alot of extra work in the\n> > SET DEFAULTS trigger. Actually it'd be to check if we're\n> > actually deleting the FK defaults values from the PK table,\n> > and if so we'd have to check if references exist by doing\n> > another NO ACTION kinda test.\n> >\n> > Any other smart idea?\n>\n> read-write locks?\n\n Just discussed it with Tom Lane while he'd been here in\n Norfolk and it's even more ugly. 
We couldn't even pull out\n the FK's column defaults at this time to check if we are\n about to delete the corresponding PK because they might call\n all kinds of functions with tons of side effects we don't\n want.\n\n Seems the only way to do it cleanly is to have the parser\n putting the information which TLEs are *OLD* and which are\n *NEW* somewhere and pass it all down through the executor\n (remembering it per tuple in the deferred trigger queue) down\n into the triggers.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 24 Apr 2001 15:40:37 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: RI oddness"
},
{
"msg_contents": "Jan Wieck wrote:\n> Just discussed it with Tom Lane while he'd been here in\n> Norfolk and it's even more ugly. We couldn't even pull out\n> the FK's column defaults at this time to check if we are\n> about to delete the corresponding PK because they might call\n> all kinds of functions with tons of side effects we don't\n> want.\n>\n> Seems the only way to do it cleanly is to have the parser\n> putting the information which TLEs are *OLD* and which are\n> *NEW* somewhere and pass it all down through the executor\n> (remembering it per tuple in the deferred trigger queue) down\n> into the triggers.\n\n While we know about the *right* way to fix it, that's a far\n too big of a change for 7.1.1. But I'd like to fix the\n likely deadlocks caused by referential integrity constraints.\n\n What'd be easy is this:\n\n - We already have two entry points for INSERT/UPDATE on FK\n table, but the one for UPDATE is fortunately unused.\n\n - We change analyze.c to install the RI_FKey_check_upd\n trigger if the constraint has an ON DELETE SET DEFAULT\n clause. Otherwise it uses RI_FKey_check_ins as it does\n now.\n\n - We change ri_triggers.c so that RI_FKey_check_ins will\n skip the PK check if the FK attributes did not change\n while RI_FKey_check_upd will enforce the check allways.\n\n This way it'll automatically gain a performance win for\n everyone using referential integrity.\n\n The bad side effect is, that these changes will require a\n dump/reload FOR DATABASES, where ON DELETE SET DEFAULT is\n used. If they don't dump/reload, it'll open the possibility\n of violating constraints that are defined ON DELETE SET\n DEFAULT by deleting the PK that consists of the column\n defaults of an existing FK reference. The DELETE would\n succeed and the stall references remain.\n\n I think the usage of ON DELETE SET DEFAULT is a very rare\n case out in the field. Thus the dump/reload requirement is\n limited to a small number of databases (if any). 
It is easy\n to detect if a DB's schema contains this clause by looking up\n pg_trigger for usage of RI_FKey_setdefault_del. We could\n provide a small script telling which databases need\n dump/reload.\n\n Comments?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 26 Apr 2001 10:58:38 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: RI oddness"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> What'd be easy is this:\n\n> - We already have two entry points for INSERT/UPDATE on FK\n> table, but the one for UPDATE is fortunately unused.\n\n> - We change analyze.c to install the RI_FKey_check_upd\n> trigger if the constraint has an ON DELETE SET DEFAULT\n> clause. Otherwise it uses RI_FKey_check_ins as it does\n> now.\n\nUnfortunately, such a fix really isn't going to fly as a patch release.\nNot only does it not work for existing tables, but it won't work for\ntables created by dump and reload from a prior version (since they\nwon't have the right set of triggers ... another illustration of why\nthe lack of an abstract representation of the RI constraints was a\nBad Move). In fact I'm afraid that your proposed change would actively\nbreak tables imported from a prior version; wouldn't RI_FKey_check_ins\ndo the wrong thing if applied as an update trigger?\n\n> I think the usage of ON DELETE SET DEFAULT is a very rare\n> case out in the field. Thus the dump/reload requirement is\n> limited to a small number of databases (if any).\n\nBut dump/reload won't fix the tables' triggers.\n\nGiven that ON DELETE SET DEFAULT isn't used much, I think we should\nnot waste time creating an incomplete hack solution for 7.1.*, but\njust write it off as a known bug and move forward with a real solution\nfor 7.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2001 12:41:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI oddness "
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > What'd be easy is this:\n>\n> > - We already have two entry points for INSERT/UPDATE on FK\n> > table, but the one for UPDATE is fortunately unused.\n>\n> > - We change analyze.c to install the RI_FKey_check_upd\n> > trigger if the constraint has an ON DELETE SET DEFAULT\n> > clause. Otherwise it uses RI_FKey_check_ins as it does\n> > now.\n>\n> Unfortunately, such a fix really isn't going to fly as a patch release.\n> Not only does it not work for existing tables, but it won't work for\n> tables created by dump and reload from a prior version (since they\n> won't have the right set of triggers ... another illustration of why\n> the lack of an abstract representation of the RI constraints was a\n> Bad Move). In fact I'm afraid that your proposed change would actively\n> break tables imported from a prior version; wouldn't RI_FKey_check_ins\n> do the wrong thing if applied as an update trigger?\n>\n> > I think the usage of ON DELETE SET DEFAULT is a very rare\n> > case out in the field. Thus the dump/reload requirement is\n> > limited to a small number of databases (if any).\n>\n> But dump/reload won't fix the tables' triggers.\n\n Ech - you're right. It wouldn't fix 'em.\n\n>\n> Given that ON DELETE SET DEFAULT isn't used much, I think we should\n> not waste time creating an incomplete hack solution for 7.1.*, but\n> just write it off as a known bug and move forward with a real solution\n> for 7.2.\n\n It's not the rarely used ON DELETE SET DEFAULT case that's\n currently broken. It's ALL the other cases that can easily\n cause you to end up in deadlocks if you just update another\n field in a table having foreign keys and you don't lock all\n referenced rows properly first. Given the table:\n\n CREATE TABLE sample (\n a integer REFERENCES t1,\n b integer REFERENCES t2,\n c integer REFERENCES t3,\n d integer REFERENCES t4,\n data text\n );\n\n you'd have to SELECT ... 
FOR UPDATE tables t1, t2, t3 and t4\n (while NOT having a lock on \"sample\") before you can safely\n update \"data\". Otherwise, another transaction could lock one\n of those and try to lock your \"sample\" row and you have a\n deadlock.\n\n We could provide another script fixing it. It is run after\n the restore of a dump taken from a pre-7.1.1 database fixing\n the tgfoid for those triggers that use RI_FKey_check_ins\n where a matching RI_FKey_setdefault_del row exist with same\n arguments and constraint name.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 26 Apr 2001 13:40:50 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: RI oddness"
}
] |
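The lock-ordering hazard discussed in the thread above can be made concrete with a short two-session sketch. This is hypothetical illustration only: `t1` and `sample` follow Jan's example schema, and the trigger behaviour described is the pre-7.2 one in which the RI check on UPDATE locks the referenced PK row even when no FK column changed.

```sql
-- Session 1: lock the PK row first, then go after the FK row.
BEGIN;
SELECT * FROM t1 WHERE id = 1 FOR UPDATE;   -- holds the PK row lock

-- Session 2: update only a non-FK column of the referencing row.
BEGIN;
UPDATE sample SET data = 'y' WHERE a = 1;   -- takes the "sample" row lock;
-- the RI trigger nevertheless does SELECT ... FOR UPDATE on t1,
-- so session 2 now waits behind session 1's t1 lock ...

-- Session 1: touch the same FK row and the wait cycle closes.
UPDATE sample SET data = 'x' WHERE a = 1;   -- waits on session 2: deadlock
```

This is why the thread concludes that the only safe workaround at the time was to lock all referenced PK rows up front before updating any row in a table with foreign keys.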
[
{
"msg_contents": ">Nathan Myers wrote:\n>> On Mon, Apr 23, 2001 at 03:09:53PM -0300, The Hermit Hacker wrote:\n>> >\n>> > Anyone thought of implementing this, similar to how sendmail does it?\nIf\n>> > load > n, refuse connections?\n>> > ...\n>> > If nobody is working on something like this, does anyone but me feel\nthat\n>> > it has merit to make use of? I'll play with it if so ...\n>>\n>> I agree that it would be useful. Even more useful would be soft load\n>> shedding, where once some load average level is exceeded the postmaster\n>> delays a bit (proportionately) before accepting a connection.\n>\n> Or have the load check on AtXactStart, and delay new\n> transactions until load is back below x, where x is\n> configurable per user/group plus some per database scaling\n> factor.\n\nHow is this different than limiting the number of backends that can be\nrunning at once? It would seem to me that a user that has a \"delayed\"\nstartup is going to think there's something wrong with the server and keep\ntrying, where as a message like \"too many clients - try again later\"\nexplains what's really going on.\n\nlen morgan\n\n",
"msg_date": "Mon, 23 Apr 2001 17:20:36 -0500",
"msg_from": "\"Len Morgan\" <len-morgan@crcom.net>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ..."
}
] |
[
{
"msg_contents": "\nHi,\n\nI believe i found two minor bugs in the linux start/stop scripts for the\ndownloadable rpm version of postgres 7.1. I don't think these have been\nreported already (i did some quik searches). Please look these over and see\nif i'm just smoking something or if these bugs are valid. Also, i did a\nquick cvs checkout / log of the contrib tree, and i noted that the\nstart/stop scripts have been restructured recently (i do not know where\nlogic of the scripts were moved to, so these points may still be valid, if\nnot, i was wondering if I pull the scripts from the cvs contrib tree myself,\nwould they work out of the box?).\n\n---\n\n#1. Every instance of (there are 2):\n\n pid=`pidof postmaster`\n if [ $pid ]\n\nshould be:\n\n pid=`pidof -s postmaster`\n if [ $pid ]\n\n(pidof may return multiple pids if postmaster forked or has multiple threads\n-- i'm not toofamiliar with postgres architecture, but postmaster does\nsometimes show multiple pids which could mean multiple threads or processes\nin linux) If pidof returns multiple pids, the \"if\" will barf giving\nsomething like the following:\n\nStopping postgresql service: [ OK ]\nChecking postgresql installation: [ OK ]\n/etc/rc.d/init.d/postgresql: [: 1223: unary operator expected\nStarting postgresql service: [FAILED]\n\n--------\n\n#2. /etc/rc.d/init.d/postgresql restart sometimes doesn't do what it should.\n\nie. end up with a fresh newly started postgres daemon.\n\nThis happens because the rc.d script does something very simple: stop;\nstart. This is correct, but stop doesn't do what it should. When stop\nreturns, postgres may not have fully stopped for some reason. start\ncomplains that postmaster is still running. After doing some testing, my\nhypothesis is this (i have no idea how postgres works intermally):\n\n1. I run a bunch of inserts, create tables\n2. I call postgres stop\n3. one of the postgres \"processes\" stops.\n4. 
the other processes are still trying to flush stuff onto the disk before\nthey quit.\n5. start is called, and it finds some \"postmaster\" processes, and thus says\n\"postmaster is running\".\n6. the other processes finally are done and stop.\n\nNow there are no more postgres processes running.\n\nWhen i added a sleep 10 between stop / start, everything was fine. The\n\"correct\" solution would be for postgres stop to actually wait for the\nentire db to exit cleanly. BTW, i uncovered this via an automated install /\nconfiguration / population of a postgres database which involves a restart\nright after population of a database.\n\nThanx.\n\n-rchit\n\n",
"msg_date": "Mon, 23 Apr 2001 21:58:23 -0700",
"msg_from": "Rachit Siamwalla <rachit@ensim.com>",
"msg_from_op": true,
"msg_subject": "start / stop scripts question"
},
{
"msg_contents": "You will find that that script is not distributed by us.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> Hi,\n> \n> I believe i found two minor bugs in the linux start/stop scripts for the\n> downloadable rpm version of postgres 7.1. I don't think these have been\n> reported already (i did some quik searches). Please look these over and see\n> if i'm just smoking something or if these bugs are valid. Also, i did a\n> quick cvs checkout / log of the contrib tree, and i noted that the\n> start/stop scripts have been restructured recently (i do not know where\n> logic of the scripts were moved to, so these points may still be valid, if\n> not, i was wondering if I pull the scripts from the cvs contrib tree myself,\n> would they work out of the box?).\n> \n> ---\n> \n> #1. Every instance of (there are 2):\n> \n> pid=`pidof postmaster`\n> if [ $pid ]\n> \n> should be:\n> \n> pid=`pidof -s postmaster`\n> if [ $pid ]\n> \n> (pidof may return multiple pids if postmaster forked or has multiple threads\n> -- i'm not toofamiliar with postgres architecture, but postmaster does\n> sometimes show multiple pids which could mean multiple threads or processes\n> in linux) If pidof returns multiple pids, the \"if\" will barf giving\n> something like the following:\n> \n> Stopping postgresql service: [ OK ]\n> Checking postgresql installation: [ OK ]\n> /etc/rc.d/init.d/postgresql: [: 1223: unary operator expected\n> Starting postgresql service: [FAILED]\n> \n> --------\n> \n> #2. /etc/rc.d/init.d/postgresql restart sometimes doesn't do what it should.\n> \n> ie. end up with a fresh newly started postgres daemon.\n> \n> This happens because the rc.d script does something very simple: stop;\n> start. This is correct, but stop doesn't do what it should. When stop\n> returns, postgres may not have fully stopped for some reason. start\n> complains that postmaster is still running. 
After doing some testing, my\n> hypothesis is this (i have no idea how postgres works intermally):\n> \n> 1. I run a bunch of inserts, create tables\n> 2. I call postgres stop\n> 3. one of the postgres \"processes\" stops.\n> 4. the other processes are still trying to flush stuff onto the disk before\n> they quit.\n> 5. start is called, and it finds some \"postmaster\" processes, and thus says\n> \"postmaster is running\".\n> 6. the other processes finally are done and stop.\n> \n> Now there are no more postgres running.\n> \n> When i added a sleep 10 between stop / start, everything was fine. The\n> \"correct\" solution would be for postgres stop to actually wait for the\n> entire db to exit cleanly. BTW, i uncovered this via an automated install /\n> configuration / population of a postgress database which involves a restart\n> right after population of a database.\n> \n> Thanx.\n> \n> -rchit\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 24 Apr 2001 10:28:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: start / stop scripts question"
}
] |
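Both of Rachit's points can be sketched in a few lines of shell. This is illustrative only — it is not the script actually shipped with the RPMs, and the 60-second timeout is made up:

```shell
#!/bin/sh
# Fix #1: pidof can print several PIDs for postmaster; unquoted,
# `if [ $pid ]` expands to `[ 1223 1224 ]` and the shell reports
# "unary operator expected".  Taking one PID (pidof -s) or quoting
# the test avoids that.
pid="1223 1224"                 # stands in for: pid=`pidof postmaster`
if [ -n "$pid" ]; then          # quoted -n test is safe either way
    echo "postmaster appears to be running"
fi

# Fix #2: for "restart", poll until the old postmaster has really
# exited instead of hoping a fixed sleep is long enough.
wait_for_exit() {
    tries=0
    while kill -0 "$1" 2>/dev/null && [ "$tries" -lt 60 ]; do
        sleep 1
        tries=$((tries + 1))
    done
}
```

A restart path would then call `wait_for_exit "$pid"` between the stop and start steps rather than sleeping a fixed number of seconds.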
[
{
"msg_contents": "Is anyone else seeing this?\n\nI have the current CVS sources and \"make check\" ends up with one\nfailure. My regression.diffs shows:\n\n\n*** ./expected/join.out Thu Dec 14 17:30:45 2000\n--- ./results/join.out Mon Apr 23 20:23:15 2001\n***************\n*** 1845,1851 ****\n -- UNION JOIN isn't implemented yet\n SELECT '' AS \"xxx\", *\n FROM J1_TBL UNION JOIN J2_TBL;\n! ERROR: UNION JOIN is not implemented yet\n --\n -- Clean up\n --\n--- 1845,1851 ----\n -- UNION JOIN isn't implemented yet\n SELECT '' AS \"xxx\", *\n FROM J1_TBL UNION JOIN J2_TBL;\n! ERROR: parser: parse error at or near \"JOIN\"\n --\n -- Clean up\n --\n\n======================================================================\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Tue, 24 Apr 2001 01:18:36 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": true,
"msg_subject": "ERROR: parser: parse error at or near \"JOIN\""
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Is anyone else seeing this?\n\nNo.\n\n> I have the current CVS sources and \"make check\" ends up with one\n> failure. My regression.diffs shows:\n\nI think you must have built gram.c with a broken bison or yacc. What\nexactly is configure picking, and what version is it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 11:48:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: parser: parse error at or near \"JOIN\" "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > Is anyone else seeing this?\n> \n> No.\n> \n> > I have the current CVS sources and \"make check\" ends up with one\n> > failure. My regression.diffs shows:\n> \n> I think you must have built gram.c with a broken bison or yacc. What\n> exactly is configure picking, and what version is it?\n> \n\nYes you are right.\n\nWith:\n\n[12:03:04] > flex -V\nflex version 2.5.4\n \n[12:03:08] > bison -V\nGNU Bison version 1.28\n\nit fails, but using older versions of flex and bison the regression goes\naway:\n\n[12:05:30] > flex -V\nflex Cygnus version 2.5-gnupro-99r1\n\n[12:05:34] > bison -V\nGNU Bison version 1.25\n\n\nThank you very much.\n\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Wed, 25 Apr 2001 12:09:04 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: parser: parse error at or near \"JOIN\""
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Tom Lane wrote:\n>> I think you must have built gram.c with a broken bison or yacc. What\n>> exactly is configure picking, and what version is it?\n\n> Yes you are right.\n\n> With:\n\n> [12:03:04] > flex -V\n> flex version 2.5.4\n \n> [12:03:08] > bison -V\n> GNU Bison version 1.28\n\n> it fails, but using older versions of flex and bison the regression goes\n> away:\n\n> [12:05:30] > flex -V\n> flex Cygnus version 2.5-gnupro-99r1\n\n> [12:05:34] > bison -V\n> GNU Bison version 1.25\n\n\nEr, surely you stated that backwards? flex 2.5.4 and bison 1.28 are\nwhat all of the developers use, AFAIK (I know that's what I have\nanyway). bison 1.25 might well have some problems though...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 12:48:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: parser: parse error at or near \"JOIN\" "
}
] |
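The failure mode in this thread — a `gram.c` produced by a problematic yacc/bison — can be caught before building by comparing the `bison -V` banner against a known-good version. A sketch only: the 1.28 minimum follows the thread, and the banner parsing is an assumption about its format, not PostgreSQL's real configure probe:

```shell
#!/bin/sh
# Extract the version from a "bison -V" banner and warn when it is
# older than the 1.28 the developers were using at the time.
min="1.28"
banner="GNU Bison version 1.25"     # stands in for: bison -V | head -n1
ver=${banner##* }                   # last word of the banner -> "1.25"
# Numeric sort on major.minor; the first line is the older version.
oldest=$(printf '%s\n%s\n' "$ver" "$min" | sort -t. -k1,1n -k2,2n | head -n1)
if [ "$ver" != "$min" ] && [ "$oldest" = "$ver" ]; then
    echo "bison $ver is older than $min; gram.c may be miscompiled"
fi
```

With the 1.25 banner above this prints the warning; with a 1.28 banner it stays silent.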
[
{
"msg_contents": "\nGot a query that looks like:\n\n========================================================================\n\nSELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n FROM send0,card_info,category_details\n WHERE send0.card_id=card_info.card_id\n AND category_details.mcategory='e-cards'\n AND card_info.main_cat=category_details.category\n AND send_date >= '2001/04/08'\n AND send_date <= '2001/05/14' group by 1,2\n\nUNION ALL\n\nSELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n FROM send1,card_info,category_details where send1.card_id=card_info.card_id\n AND category_details.mcategory='e-cards'\n AND card_info.main_cat=category_details.category\n AND send_date >= '2001/04/08'\n AND send_date <= '2001/05/14' group by 1,2\n\nUNION ALL\n\nSELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n FROM send2,card_info,category_details where send2.card_id=card_info.card_id\n AND category_details.mcategory='e-cards'\n AND card_info.main_cat=category_details.category\n AND send_date >= '2001/04/08'\n AND send_date <= '2001/05/14' group by 1,2\n\nUNION ALL\n\nSELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n FROM send3,card_info,category_details where send3.card_id=card_info.card_id\n AND category_details.mcategory='e-cards'\n AND card_info.main_cat=category_details.category\n AND send_date >= '2001/04/08'\n AND send_date <= '2001/05/14' group by 1,2\n\nUNION ALL\n\nSELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n FROM send4,card_info,category_details where send4.card_id=card_info.card_id\n AND category_details.mcategory='e-cards'\n AND card_info.main_cat=category_details.category\n AND send_date >= '2001/04/08'\n AND send_date <= '2001/05/14' group by 1,2\n\nUNION ALL\n\nSELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n FROM send5,card_info,category_details where send5.card_id=card_info.card_id\n AND category_details.mcategory='e-cards'\n AND 
card_info.main_cat=category_details.category\n AND send_date >= '2001/04/08'\n AND send_date <= '2001/05/14' group by 1,2\n\nUNION ALL\n\nSELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n FROM send6,card_info,category_details where send6.card_id=card_info.card_id\n AND category_details.mcategory='e-cards'\n AND card_info.main_cat=category_details.category\n AND send_date >= '2001/04/08'\n AND send_date <= '2001/05/14' group by 1,2\n\nUNION ALL\n\nSELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n\n========================================================================\n\n*Really* dreading the thought of changing it to an OUTER JOIN, and am\nwondering if there would be a noticeable speed difference between going\nfrom the UNION above to an OUTER JOIN, or should they be about the same?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 24 Apr 2001 09:06:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "OUTER JOIN vs UNION ... faster?"
},
{
"msg_contents": "> SELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n> FROM send0,card_info,category_details\n> WHERE send0.card_id=card_info.card_id\n> AND category_details.mcategory='e-cards'\n> AND card_info.main_cat=category_details.category\n> AND send_date >= '2001/04/08'\n> AND send_date <= '2001/05/14' group by 1,2\n...\n> UNION ALL\n> \n> SELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n> FROM send6,card_info,category_details where send6.card_id=card_info.card_id\n> AND category_details.mcategory='e-cards'\n> AND card_info.main_cat=category_details.category\n> AND send_date >= '2001/04/08'\n> AND send_date <= '2001/05/14' group by 1,2\n> \n> UNION ALL\n> \n> SELECT card_info.main_cat, category_details.sub_cat_flag,count(*)\n> \n> ========================================================================\n> \n> *Really* dreading the thought of changing it to an OUTER JOIN, and am\n> wondering if there would be a noticeable speed difference between going\n> from the UNION above to an OUTER JOIN, or should they be about the same?\n\nafaict the point of this query is to do joins on separate tables send0\nthrough send6. An outer join won't help you here. The last clause pulls\neverything out of the other tables involved in the previous joins, so\nI'm *really* not sure what stats you are calculating. But they must be\nuseful to have done all this work ;)\n\nBut if you had constructed those tables (or are they views?) to avoid an\nouter join somehow, you could rethink that. An outer join on the two\ntables card_info and category_details should be much faster than six or\nseven inner joins on those tables plus the union aggregation.\n\n - Thomas\n",
"msg_date": "Tue, 24 Apr 2001 16:31:20 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: OUTER JOIN vs UNION ... faster?"
}
] |
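If the per-send-table breakdown is not actually needed (the original UNION ALL emits one partial count per send table), the repeated work can be cut without an outer join by appending the send tables first and joining `card_info`/`category_details` once. A sketch only, reusing the names from the query above and assuming the send tables are union-compatible:

```sql
SELECT ci.main_cat, cd.sub_cat_flag, count(*)
  FROM (SELECT card_id, send_date FROM send0
        UNION ALL SELECT card_id, send_date FROM send1
        UNION ALL SELECT card_id, send_date FROM send2
        UNION ALL SELECT card_id, send_date FROM send3
        UNION ALL SELECT card_id, send_date FROM send4
        UNION ALL SELECT card_id, send_date FROM send5
        UNION ALL SELECT card_id, send_date FROM send6) s,
       card_info ci,
       category_details cd
 WHERE s.card_id = ci.card_id
   AND cd.mcategory = 'e-cards'
   AND ci.main_cat = cd.category
   AND s.send_date >= '2001/04/08'
   AND s.send_date <= '2001/05/14'
 GROUP BY 1, 2;
```

Note this yields one combined count per category rather than one row per send table; if the per-table split matters, the original shape has to stay.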
[
{
"msg_contents": "\nTom:\n Notice that WriteBuffer would just put the fresh copy of the page\nout in the shared space.\n Other backends would get the latest copy of the page when\nTHEY execute BufferAlloc() afterwards. [Remember, backends would\nnot have a local buffer cache, only (temporary) copies of one buffer\nper BufferAlloc()/release pair].\n [Granted about the bandwidth needs. In my target arch,\naccess to shmem is costlier and local mem, and cannot be done\nvia pointers (so a lot of code that might have pointers inside the\nshmem buffer may need to be tracked down & changed)].\n My idea is to use high-bandwidth access via the copy-in/copy-out\napproach (hopefully pay only once that round-trip cost once per pair\nBufferAlloc -> make buffer dirty].\n\n[Mhy reasoning for this is that a backend needs to have exclusive\naccess to a buffer when it writes to it. And I think it 'advertises'\nthe new buffer contents to the world when it sets the BM_DIRTY flag.]\n\n About your suggestion of LockBuffer as synchronization points -\na simple protocol might be:\n - copy 'in' the buffer on a READ. SHARE or lock acquire\n (may have to be careful on an upgrade of a READ to a\n write lock)\n\t - copy 'out' the buffer on a WRITE lock release\n I would appreciate comments and input on this approach, as I\nforesee putting a lot of effort into it soon,\n regards\n Mauricio\n\n\n>From: Tom Lane <tgl@sss.pgh.pa.us>\n>To: \"Mauricio Breternitz\" <mbjsql@hotmail.com>\n>CC: pgsql-hackers@postgresql.org\n>Subject: Re: [HACKERS] concurrent Postgres on NUMA - howto ?\n>Date: Mon, 23 Apr 2001 19:43:05 -0400\n>\n>\"Mauricio Breternitz\" <mbjsql@hotmail.com> writes:\n> > \tMy concern is whether that is enough to maintain consistency\n> > in the buffer cache\n>\n>No, it isn't --- for one thing, WriteBuffer wouldn't cause other\n>backends to update their copies of the page. 
At the very least you'd\n>need to synchronize where the LockBuffer calls are, not where\n>WriteBuffer is called.\n>\n>I really question whether you want to do anything like this at all.\n>Seems like accessing the shared buffers right where they are will be\n>fastest; your approach will entail a huge amount of extra data copying.\n>Considering that a backend doesn't normally touch every byte on a page\n>that it accesses, I wouldn't be surprised if full-page copying would\n>net out to being more shared-memory traffic, rather than less.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n",
"msg_date": "Tue, 24 Apr 2001 08:41:05 -0500",
"msg_from": "\"Mauricio Breternitz\" <mbjsql@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: concurrent Postgres on NUMA - howto ?"
},
{
"msg_contents": "\"Mauricio Breternitz\" <mbjsql@hotmail.com> writes:\n> Notice that WriteBuffer would just put the fresh copy of the page\n> out in the shared space.\n> Other backends would get the latest copy of the page when\n> THEY execute BufferAlloc() afterwards.\n\nYou seem to be assuming that BufferAlloc is mutually exclusive across\nbackends --- it's not. As I said, you'd have to look at transferring\ndata at LockBuffer time to make this work.\n\n> [Granted about the bandwidth needs. In my target arch,\n> access to shmem is costlier and local mem, and cannot be done\n> via pointers\n\nWhat? How do you manage to memcpy out of shmem then?\n\n> (so a lot of code that might have pointers inside the\n> shmem buffer may need to be tracked down & changed)].\n\nYou're correct, Postgres assumes it can have pointers to data inside the\npage buffers. I don't think changing that is feasible. I find it hard\nto believe that you can't have pointers to shmem though; IMHO it's not\nshmem if it can't be pointed at.\n\n> [Mhy reasoning for this is that a backend needs to have exclusive\n> access to a buffer when it writes to it. And I think it 'advertises'\n> the new buffer contents to the world when it sets the BM_DIRTY flag.]\n\nNo. BM_DIRTY only advises the buffer manager that the page must\neventually be written back to disk; it does not have anything to do with\nwhen/whether other backends see data changes within the page. One more\ntime: LockBuffer is what you need to be looking at.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 16:20:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: concurrent Postgres on NUMA - howto ? "
}
] |
[
{
"msg_contents": "Doug McNaught wrote:\n> A very valid objection. I'm also dubious as to the utility of the\n> whole concept. What happens when Sendmail refuses a message based on\n> load? It is requeued on the sending end to be tried later. What\n> happens when PG refuses a new client connection based on load? The\n> application stops working. Is this really better than having slow\n> response time because the server is thrashing?\n\n That's exactly the point why I suggested to delay transaction\n starts instead. The client app allways gets the connection.\n Doing dialog steps inside of open transactions is allways a\n bad design, leading to a couple of problems (coffee break\n with open locks), so we can assume that if an application\n starts a transaction, it'll keep this one backend as busy as\n possible until the transactions end.\n\n Processing too many transactions parallel is what get's the\n system into heavy swapping and exponential usage of\n resources. So if we delay starting transactions if the system\n load is above the limit, we probably speedup the overall per\n transaction response time, increasing the througput. And\n that's what this discussion is all about, no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 24 Apr 2001 13:28:13 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: refusing connections based on load ..."
}
] |
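Jan's delay-instead-of-refuse idea can be sketched from the shell. This is purely illustrative — PostgreSQL has no such hook; the threshold, the backoff interval, and reading `/proc/loadavg` are all assumptions for the sketch, and the real check would live at transaction start inside the backend:

```shell
#!/bin/sh
# Read the 1-minute load average and stall while it exceeds a
# configurable limit, rather than rejecting the client outright.
MAX_LOAD=${MAX_LOAD:-8}

one_minute_load() {
    # /proc/loadavg looks like: "0.42 0.36 0.25 1/123 4567"
    cut -d' ' -f1 /proc/loadavg
}

load_is_high() {
    # Integer comparison on the whole part of the load average.
    [ "${1%%.*}" -ge "$MAX_LOAD" ]
}

while load_is_high "$(one_minute_load)"; do
    sleep 2     # back off; the connection stays open the whole time
done
echo "transaction may start"
```

The point of the soft delay is exactly what Jan argues: the client keeps its connection, and throttling how many transactions run in parallel keeps the server out of swap instead of returning a hard "too many clients" error.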
[
{
"msg_contents": "Who is it distributed by then? It was on the postgres ftp mirror sites, so\nit probably can't be redhat. I have found workarounds, so it's not a big\ndeal, but... Also, I wonder what else is different in this package from\nthe \"real\" source distribution. I am sorry if this has been discussed or\nexplained in the past before, but I cannot find this info in a FAQ or know\nwhat keywords to use if I want to search on the mailing list :).\n\n-rchit\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Tuesday, April 24, 2001 7:28 AM\nTo: Rachit Siamwalla\nCc: PostgreSQL Development\nSubject: Re: [HACKERS] start / stop scripts question\n\n\nYou will find that that script is not distributed by us.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> Hi,\n> \n> I believe I found two minor bugs in the linux start/stop scripts for the\n> downloadable rpm version of postgres 7.1. I don't think these have been\n> reported already (I did some quick searches). Please look these over and see\n> if I'm just smoking something or if these bugs are valid. Also, I did a\n> quick cvs checkout / log of the contrib tree, and I noted that the\n> start/stop scripts have been restructured recently (I do not know where the\n> logic of the scripts was moved to, so these points may still be valid; if\n> not, I was wondering: if I pull the scripts from the cvs contrib tree myself,\n> would they work out of the box?).\n> \n> ---\n> \n> #1. Every instance of (there are 2):\n> \n> pid=`pidof postmaster`\n> if [ $pid ]\n> \n> should be:\n> \n> pid=`pidof -s postmaster`\n> if [ $pid ]\n> \n> (pidof may return multiple pids if postmaster forked or has multiple threads\n> -- I'm not too familiar with the postgres architecture, but postmaster does\n> sometimes show multiple pids, which could mean multiple threads or processes\n> in linux.) If pidof returns multiple pids, the \"if\" will barf, giving\n> something like the following:\n> \n> Stopping postgresql service: [ OK ]\n> Checking postgresql installation: [ OK ]\n> /etc/rc.d/init.d/postgresql: [: 1223: unary operator expected\n> Starting postgresql service: [FAILED]\n> \n> --------\n> \n> #2. /etc/rc.d/init.d/postgresql restart sometimes doesn't do what it should,\n> ie. end up with a fresh newly started postgres daemon.\n> \n> This happens because the rc.d script does something very simple: stop;\n> start. This is correct, but stop doesn't do what it should. When stop\n> returns, postgres may not have fully stopped for some reason. start\n> complains that postmaster is still running. After doing some testing, my\n> hypothesis is this (I have no idea how postgres works internally):\n> \n> 1. I run a bunch of inserts, create tables\n> 2. I call postgres stop\n> 3. one of the postgres \"processes\" stops.\n> 4. the other processes are still trying to flush stuff onto the disk before\n> they quit.\n> 5. start is called, and it finds some \"postmaster\" processes, and thus says\n> \"postmaster is running\".\n> 6. the other processes finally are done and stop.\n> \n> Now there are no more postgres processes running.\n> \n> When I added a sleep 10 between stop / start, everything was fine. The\n> \"correct\" solution would be for postgres stop to actually wait for the\n> entire db to exit cleanly. BTW, I uncovered this via an automated install /\n> configuration / population of a postgres database which involves a restart\n> right after population of a database.\n> \n> Thanx.\n> \n> -rchit\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 24 Apr 2001 13:47:34 -0700",
"msg_from": "Rachit Siamwalla <rachit@ensim.com>",
"msg_from_op": true,
"msg_subject": "RE: start / stop scripts question"
},
{
"msg_contents": "I would like to know myself. I just did a recursive grep of the entire\nPostgreSQL tree and don't see it. My guess is that it is part of the\nRPM. Not sure who to report that to. I know Lamar Owen works on it,\nbut I don't know if he is the contact.\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Who is it distributed by then? It was on the postgres ftp mirror sites, so\n> it probably can't be redhat. I have found workarounds, so it's not a big\n> deal, but... Also, I wonder what else is different in this package from\n> the \"real\" source distribution. I am sorry if this has been discussed or\n> explained in the past before, but I cannot find this info in a FAQ or know\n> what keywords to use if I want to search on the mailing list :).\n> \n> -rchit\n> \n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Tuesday, April 24, 2001 7:28 AM\n> To: Rachit Siamwalla\n> Cc: PostgreSQL Development\n> Subject: Re: [HACKERS] start / stop scripts question\n> \n> \n> You will find that that script is not distributed by us.\n> \n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > \n> > Hi,\n> > \n> > I believe I found two minor bugs in the linux start/stop scripts for the\n> > downloadable rpm version of postgres 7.1. I don't think these have been\n> > reported already (I did some quick searches). Please look these over and see\n> > if I'm just smoking something or if these bugs are valid. Also, I did a\n> > quick cvs checkout / log of the contrib tree, and I noted that the\n> > start/stop scripts have been restructured recently (I do not know where the\n> > logic of the scripts was moved to, so these points may still be valid; if\n> > not, I was wondering: if I pull the scripts from the cvs contrib tree myself,\n> > would they work out of the box?).\n> > \n> > ---\n> > \n> > #1. Every instance of (there are 2):\n> > \n> > pid=`pidof postmaster`\n> > if [ $pid ]\n> > \n> > should be:\n> > \n> > pid=`pidof -s postmaster`\n> > if [ $pid ]\n> > \n> > (pidof may return multiple pids if postmaster forked or has multiple threads\n> > -- I'm not too familiar with the postgres architecture, but postmaster does\n> > sometimes show multiple pids, which could mean multiple threads or processes\n> > in linux.) If pidof returns multiple pids, the \"if\" will barf, giving\n> > something like the following:\n> > \n> > Stopping postgresql service: [ OK ]\n> > Checking postgresql installation: [ OK ]\n> > /etc/rc.d/init.d/postgresql: [: 1223: unary operator expected\n> > Starting postgresql service: [FAILED]\n> > \n> > --------\n> > \n> > #2. /etc/rc.d/init.d/postgresql restart sometimes doesn't do what it should,\n> > ie. end up with a fresh newly started postgres daemon.\n> > \n> > This happens because the rc.d script does something very simple: stop;\n> > start. This is correct, but stop doesn't do what it should. When stop\n> > returns, postgres may not have fully stopped for some reason. start\n> > complains that postmaster is still running. After doing some testing, my\n> > hypothesis is this (I have no idea how postgres works internally):\n> > \n> > 1. I run a bunch of inserts, create tables\n> > 2. I call postgres stop\n> > 3. one of the postgres \"processes\" stops.\n> > 4. the other processes are still trying to flush stuff onto the disk before\n> > they quit.\n> > 5. start is called, and it finds some \"postmaster\" processes, and thus says\n> > \"postmaster is running\".\n> > 6. the other processes finally are done and stop.\n> > \n> > Now there are no more postgres processes running.\n> > \n> > When I added a sleep 10 between stop / start, everything was fine. The\n> > \"correct\" solution would be for postgres stop to actually wait for the\n> > entire db to exit cleanly. BTW, I uncovered this via an automated install /\n> > configuration / population of a postgres database which involves a restart\n> > right after population of a database.\n> > \n> > Thanx.\n> > \n> > -rchit\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://www.postgresql.org/search.mpl\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 24 Apr 2001 17:05:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: start / stop scripts question"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I would like to know myself. I just did a recursive grep of the entire\n> PostgreSQL tree and don't see it. My guess is that it is part of the\n> RPM. Not sure who to report that to. I know Lamar Owen works on it,\n> but I don't know if he is the contact.\n\nYes, that would be me. I saw the original message come through, just\nhaven't had a chance to reply to it.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 24 Apr 2001 17:26:00 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: start / stop scripts question"
}
] |
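The two init-script fixes discussed in the thread above can be sketched as standalone shell logic. This is a hedged illustration, not the actual RPM script: `list_pids` and `list_pids_after_stop` are hypothetical mocks standing in for `pidof postmaster`, so the control flow can run without a postmaster.

```shell
# Fix #1: keep a single PID (the effect of `pidof -s`), so that the
# unquoted `if [ $pid ]` never expands to multiple words and `[` never
# complains "unary operator expected".
list_pids() { echo "1223 1224 1225"; }   # mock for `pidof postmaster`

pid=$(list_pids | awk '{print $1}')      # first PID only
if [ "$pid" ]; then
    echo "single postmaster pid: $pid"
fi

# Fix #2: instead of a blind `stop; sleep 10; start`, poll until every
# backend process is really gone before starting again.
wait_for_stop() {
    for _ in 1 2 3 4 5 6 7 8 9 10; do
        [ -z "$(list_pids_after_stop)" ] && return 0
        sleep 1
    done
    return 1   # still running after 10 seconds
}
list_pids_after_stop() { echo ""; }      # mock: everything has exited

wait_for_stop && echo "safe to start"
```

Replacing the mocks with real `pidof` calls turns this into the behavior the thread asks for: restart only proceeds once no postmaster processes remain.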
[
{
"msg_contents": "Does anybody know:\n\n1) Is the tar/custom format of pg_dump portable across different\n platforms?\n\n2) if I want to dump out all of a database cluster's contents including\n large objects, is the following procedure correct?\n\n (dump procedure)\n pg_dumpall -g\n pg_dump -F c .... for each database\n\t :\n\t :\n\n (restore procedure)\n initdb\n psql template1 < dumpout_of_pg_dumpall\n pg_restore ... for each database\n\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 25 Apr 2001 10:00:52 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "pg_dump"
},
{
"msg_contents": "At 10:00 25/04/01 +0900, Tatsuo Ishii wrote:\n>Does anybody know:\n>\n>1) Is the tar/custom format of pg_dump portable across different\n> platforms?\n\nIt's supposed to be; if it's not, it's a bug.\n\n\n>2) if I want to dump out all of a database cluster's contents including\n> large objects, is the following procedure correct?\n>\n> (dump procedure)\n> pg_dumpall -g\n> pg_dump -F c .... for each database\n>\t :\n>\t :\n>\n> (restore procedure)\n> initdb\n> psql template1 < dumpout_of_pg_dumpall\n> pg_restore ... for each database\n\nLooks OK to me. You need a '-b' on the pg_dump (for BLOBS), and each dump\ncommand will need to go to a separate file - it won't currently work if\nmultiple dumps are being sent to stdout.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 25 Apr 2001 12:04:43 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump"
},
{
"msg_contents": "At 12:04 25/04/01 +1000, Philip Warner wrote:\n>\n>it won't currently work if\n>multiple dumps are being sent to stdout.\n\nThis latter could be fixed (at least for 'c' format) by modifying pg_dump\nto use open/read/write/lseek instead of fopen/fread/fwrite/fseek etc.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 25 Apr 2001 14:16:00 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump"
}
] |
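The cluster-wide dump/restore sequence the thread converges on can be sketched as follows: globals via `pg_dumpall -g`, then one custom-format dump per database with `-b` for large objects, each to its own file (multiple dumps to a shared stdout won't work). The `pg_*` commands are mocked as shell functions here so the sequence runs standalone; the database names are examples, not from the source.

```shell
# Hypothetical mocks so the command sequence can be exercised without a
# live cluster; in practice these are the real PostgreSQL binaries.
pg_dumpall() { echo "-- role/group definitions"; }
pg_dump()    { echo "-- custom-format dump with blobs"; }
pg_restore() { echo "restoring $2"; }

cd "$(mktemp -d)"

# Dump side: globals once, then each database to its own file.
pg_dumpall -g > globals.sql
for db in sales inventory; do
    pg_dump -F c -b "$db" > "$db.dump"
done

# Restore side (after initdb and `psql template1 < globals.sql`):
for db in sales inventory; do
    pg_restore -d "$db" "$db.dump"
done
```

Keeping one output file per database is the key point: the custom format seeks within its output, so the dumps cannot be concatenated on stdout.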
[
{
"msg_contents": "\nHi,\nI think the pg_log file of my PostgreSQL installation is corrupted.\nI can't access the tables in my database now.\nThe error message that appears when I try to list all tables is:\n\" cannot flush block 8 of pg_log to stable store \"\nDoes anybody know how I can restore the pg_log file?\nThanks.\n\nEmmanuel Wong\n\n\n\n",
"msg_date": "Wed, 25 Apr 2001 11:08:17 +0800",
"msg_from": "YekFu.Wong@seagate.com",
"msg_from_op": true,
"msg_subject": "pg_log file corrupted"
}
] |
[
{
"msg_contents": "Hello.\n\nI have a particular query which performs a 15-way join; I believe in \nnormalization ;-). Under 7.0.3, using the defaults where GEQO is \nenabled after 11, the query (which returns 1 row) takes 10 seconds. \nWith GEQO turned off, it takes 18 seconds. Naturally I intend to \nupgrade as soon as possible, but I looked through the change log and \ndidn't see anything specific WRT large joins. I was wondering if any \nwork had been done in that area for 7.1. I realize you can only \nsqueeze so much blood from stone, but....\n\nThanks for any info,\n\nMike Mascari\nmascarm@mascari.com\n\n",
"msg_date": "Wed, 25 Apr 2001 06:34:14 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": true,
"msg_subject": "Any optimizations to the join code in 7.1?"
},
{
"msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> I have a particular query which performs a 15-way join;\n\nYou should read \nhttp://www.postgresql.org/devel-corner/docs/postgres/explicit-joins.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 12:42:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Any optimizations to the join code in 7.1? "
},
{
"msg_contents": "On Wed, 25 Apr 2001, Tom Lane wrote:\n\n> Mike Mascari <mascarm@mascari.com> writes:\n> > I have a particular query which performs a 15-way join;\n> \n> You should read \n> http://www.postgresql.org/devel-corner/docs/postgres/explicit-joins.html\n\nI was recently poring over this page myself, as I've been working w/some\nlarger-than-usual queries.\n\nTwo questions:\n\n1) it appears (from my tests) that SELECT * FROM\n\n CREATE VIEW joined as\n SELECT p.id,\n p.pname,\n c.cname\n FROM p\n LEFT OUTER JOIN c using (id)\n\n gives the same answer as SELECT * FROM\n\n CREATE VIEW nested as\n SELECT p.id,\n p.pname,\n (select c.cname from c where c.id = p.id)\n FROM p\n\n However, I often am writing VIEWs that will be used by developers\n in a front-end system. Usually, this view might have 30 items in the\n select clause, but the developer using it is likely to only ask for\n four or five items. In this case, I often prefer the\n subquery form because it appears that\n\n SELECT id, pname FROM joined\n\n is more complicated than\n\n SELECT id, pname FROM nested\n\n as the first has to perform the join, and the second doesn't.\n\n Is this actually correct?\n\n2) The explicit-joins help suggests that manual structuring and\n experimentation might help -- has anyone written (or could\n anyone write) anything about where to start in guessing what\n join order might be optimal?\n\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Wed, 25 Apr 2001 13:10:30 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: Any optimizations to the join code in 7.1? "
},
{
"msg_contents": "Joel Burton <jburton@scw.org> writes:\n> 1) it appears (from my tests) that SELECT * FROM\n\n> CREATE VIEW joined as\n> SELECT p.id,\n> p.pname,\n> c.cname\n> FROM p\n> LEFT OUTER JOIN c using (id)\n\n> gives the same answer as SELECT * FROM\n\n> CREATE VIEW nested as\n> SELECT p.id,\n> p.pname,\n> (select c.cname from c where c.id = p.id)\n> FROM p\n\nOnly if c.id is a unique column (ie, there are always 0 or 1 matches in\nc for any given p.id). Otherwise the subselect form will fail.\n\n> However, I often am writing VIEWs that will be used by developers\n> in a front-end system. Usually, this view might have 30 items in the\n> select clause, but the developer using it is likely to only ask for\n> four or five items. In this case, I often prefer the\n> subquery form because it appears that\n> SELECT id, pname FROM joined\n> is more complicated than\n> SELECT id, pname FROM nested\n> as the first has to perform the join, and the second doesn't.\n\n> Is this actually correct?\n\nThis approach is probably reasonable if the cname field of the view\nresult is seldom wanted at all, and never used as a WHERE constraint.\nYou'd get a very nonoptimal plan if someone did\n\n\tselect * from nested where cname like 'foo%'\n\nsince the planner has no way to use the LIKE constraint to limit the\nrows fetched from p. In the JOIN format, on the other hand, I think\nthe constraint could be exploited.\n\nAlso bear in mind that the subselect form is essentially forcing the\njoin to be done via a nested loop. If you have an index on c.id then\nthis may not be too bad, but without one the performance will be\nhorrid. Even with an index, nested loop with inner indexscan is not\nthe join method of choice if you are retrieving a lot of rows.\n\n> 2) The explicit-joins help suggests that manual structuring and\n> experimentation might help -- has anyone written (or could\n> anyone write) anything about where to start in guessing what\n> join order might be optimal?\n\nThe obvious starting point is the plan produced by the planner from an\nunconstrained query. Even if you don't feel like trying to improve it,\nyou could cut the time to reproduce the plan quite a bit --- just CROSS\nJOIN a few of the relation pairs that are joined first in the\nunconstrained plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 13:46:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Any optimizations to the join code in 7.1? "
},
{
"msg_contents": "On Wed, 25 Apr 2001, Tom Lane wrote:\n\n> > 2) The explicit-joins help suggests that manual structuring and\n> > experimentation might help -- has anyone written (or could\n> > anyone write) anything about where to start in guessing what\n> > join order might be optimal?\n> \n> The obvious starting point is the plan produced by the planner from an\n> unconstrained query. Even if you don't feel like trying to improve it,\n> you could cut the time to reproduce the plan quite a bit --- just CROSS\n> JOIN a few of the relation pairs that are joined first in the\n> unconstrained plan.\n\nIn other words, let it do the work, and steal the credit for\nourselves. :-)\n\nThanks, Tom. I appreciate your answers to my questions.\n\n\n\nIn other DB systems I've used, some find that for this original query:\n\n SELECT * FROM a, b WHERE a.id=b.id AND b.name = 'foo';\n\nthat this version \n\n SELECT * FROM a JOIN b USING (id) WHERE b.name = 'foo';\n\nhas slower performance than\n \n SELECT * FROM b JOIN a USING (id) WHERE b.name = 'foo';\n\nbecause it can reduce b before any join. \n\nIs it safe to assume that this is a valid optimization in PostgreSQL?\n\n\nIf this whole thing were a view, except w/o the WHERE clause, and we were\nquerying the view w/the b.name WHERE clause, would we still see a\nperformance boost from the right arrangement? (ie, do our criteria get\npushed down early enough in the joining process?)\n\n\nTIA,\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Wed, 25 Apr 2001 17:09:46 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: Any optimizations to the join code in 7.1? "
},
{
"msg_contents": "Joel Burton <jburton@scw.org> writes:\n> In other DB systems I've used, some find that for this original query:\n> SELECT * FROM a, b WHERE a.id=b.id AND b.name = 'foo';\n> that this version \n> SELECT * FROM a JOIN b USING (id) WHERE b.name = 'foo';\n> has slower performance than\n> SELECT * FROM b JOIN a USING (id) WHERE b.name = 'foo';\n> because it can reduce b before any join. \n\n> Is it safe to assume that this is a valid optimization in PostgreSQL?\n\nIn general, that'd be a waste of time --- our planner considers the same\nset of plans in either case.\n\nHowever, it could make a difference if the planner thinks that the two\nchoices (a outer or b outer) have exactly the same cost. In that case\nthe order you wrote them in will influence which plan actually gets\npicked; and if the planner's estimate is wrong --- ie, there really is a\nconsiderable difference in the costs --- then you could see a change in\nperformance depending on which way you wrote it. That's a pretty\nunusual circumstance, maybe, but it just happens that I'm in the middle\nof looking at a planning bug wherein exactly this behavior occurs...\n\n> If this whole thing were a view, except w/o the WHERE clause, and we were\n> querying the view w/the b.name WHERE clause, would we still see a\n> performance boost from the right arrangement? (ie, does our criteria get\n> pushed down early enough in the joining process?)\n\nShouldn't make a difference; AFAIK the WHERE clause will get pushed down\nas far as possible, independently of whether a view is involved or you\nwrote it out the hard way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 17:32:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Any optimizations to the join code in 7.1? "
}
] |
[
{
"msg_contents": "While searching for some info using google.com I came across\n\n<http://www.epinions.com/ensw-review-7F55-42531AD1-3A43D81B-prod3>\n\nI am the first to admit that opinions on such a site are worthless, and the\nguy seems not to understand anything about DBMSs, but it's quite harsh anyway.\n\n-- Alessio\n",
"msg_date": "Wed, 25 Apr 2001 15:15:09 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": true,
"msg_subject": "Comment about PostgreSQL on Epinions.com"
},
{
"msg_contents": "On Wed, 25 Apr 2001, Alessio Bragadini wrote:\n\n> While searching for some info and using google.com I came across\n>\n> <http://www.epinions.com/ensw-review-7F55-42531AD1-3A43D81B-prod3>\n>\n> I am the first to understand that the opinion in such a site is\n> worthless and the guy seems not to understand anything about DBMSs but\n> it's quite harsh anyway.\n\nConsidering the review was done in December he was more than likely\nusing an early beta, but even tho he was asked he didn't say. Some\nof his comments - speed mainly - looked like he had his mysql and\npostgresql numbers reversed based on EVERY benchmark I've seen. You\nare 100% correct tho, his opinion is worthless and is based on an\napparent lack of facts. He gives no data to back up any of his claims\nand according to the info on the left side, people actually listen to\nhim and trust his opinions. Go figure!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 25 Apr 2001 08:56:31 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment about PostgreSQL on Epinions.com"
}
] |
[
{
"msg_contents": "Now that 7.1 is safely in the can, is it time to consider\nthis patch? It provides cursor support in PL.\n\n http://www.airs.com/ian/postgresql-cursor.patch\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 25 Apr 2001 13:36:45 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": true,
"msg_subject": "Cursor support in pl/pg"
},
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> Now that 7.1 is safely in the can, is it time to consider\n> this patch?\n\nNot till we've forked the tree for 7.2, which is probably a week or so\naway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2001 16:46:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cursor support in pl/pg "
},
{
"msg_contents": "Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > Now that 7.1 is safely in the can, is it time to consider\n> > this patch?\n>\n> Not till we've forked the tree for 7.2, which is probably a week or so\n> away...\n\n IIRC the patch only provides the syntax for CURSOR to\n PL/pgSQL. Not real cursor support on the SPI level. So it's\n still the same as before, the backend will try to suck up the\n entire resultset into the SPI tuple table (that's memory) and\n die if it's huge enough.\n\n What we really need is an improvement to the SPI manager to\n support cursor (or cursor like behaviour through repeated\n executor calls).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 25 Apr 2001 16:10:40 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Cursor support in pl/pg"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n\n> IIRC the patch only provides the syntax for CURSOR to\n> PL/pgSQL. Not real cursor support on the SPI level. So it's\n> still the same as before, the backend will try to suck up the\n> entire resultset into the SPI tuple table (that's memory) and\n> die if it's huge enough.\n> \n> What we really need is an improvement to the SPI manager to\n> support cursor (or cursor like behaviour through repeated\n> executor calls).\n\nAgreed, but as I may have said before, 1) the problem you describe\nalready exists in PL/pgSQL when using the FOR x IN SELECT statement,\n2) the PL/pgSQL cursor patch is useful without the improvement to the\nSPI layer, 3) I would argue that the PL/pgSQL cursor patch is still\nneeded after the SPI layer is improved.\n\nSo I do not think that is a valid argument against installing the\nPL/pgSQL cursor patch.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 83: The only thing cheaper than hardware is talk.\n",
"msg_date": "25 Apr 2001 23:43:41 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Cursor support in pl/pg"
},
{
"msg_contents": "Ian Lance Taylor wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n>\n> > IIRC the patch only provides the syntax for CURSOR to\n> > PL/pgSQL. Not real cursor support on the SPI level. So it's\n> > still the same as before, the backend will try to suck up the\n> > entire resultset into the SPI tuple table (that's memory) and\n> > die if it's huge enough.\n> >\n> > What we really need is an improvement to the SPI manager to\n> > support cursor (or cursor like behaviour through repeated\n> > executor calls).\n>\n> Agreed, but as I may have said before, 1) the problem you describe\n> already exists in PL/pgSQL when using the FOR x IN SELECT statement,\n> 2) the PL/pgSQL cursor patch is useful without the improvement to the\n> SPI layer, 3) I would argue that the PL/pgSQL cursor patch is still\n> needed after the SPI layer is improved.\n>\n> So I do not think that is a valid argument against installing the\n> PL/pgSQL cursor patch.\n\n I don't object if we can be sure that it's implementing the\n syntax a final version with *real* cursor support will have.\n Can we?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 26 Apr 2001 08:48:13 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Cursor support in pl/pg"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n\n> I don't object if we can be sure that it's implementing the\n> syntax a final version with *real* cursor support will have.\n> Can we?\n\nI don't know, and I don't know what the decision criteria are.\n\nI intentionally implemented the Oracle cursor syntax. PL/pgSQL is\nvery similar to PL/SQL, and I didn't see any reason to introduce a\nspurious difference. Note in particular that simply passing\nOPEN/FETCH/CLOSE through to the Postgres SQL parser does not implement\nthe Oracle cursor syntax, so I wouldn't have done that even if it\nwould have worked.\n\n(I have a vested interest here. For various reasons, my company,\nZembu, has an interest in minimizing the strain of porting\napplications from Oracle to Postgres. I assume that the Postgres team\nalso has that interest, within reason. But I don't know for sure.)\n\nIan\n",
"msg_date": "26 Apr 2001 10:29:14 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Cursor support in pl/pg"
},
{
"msg_contents": "Ian Lance Taylor wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n>\n> > I don't object if we can be sure that it's implementing the\n> > syntax a final version with *real* cursor support will have.\n> > Can we?\n>\n> I don't know, and I don't know what the decision criteria are.\n>\n> I intentionally implemented the Oracle cursor syntax. PL/pgSQL is\n> very similar to PL/SQL, and I didn't see any reason to introduce a\n> spurious difference. Note in particular that simply passing\n> OPEN/FETCH/CLOSE through to the Postgres SQL parser does not implement\n> the Oracle cursor syntax, so I wouldn't have done that even if it\n> would have worked.\n\n Maybe it's \"very similar\" because I had an Oracle PL/SQL\n language reference at hand while writing the grammar file,\n maybe it's just by accident :-)\n\n>\n> (I have a vested interest here. For various reasons, my company,\n> Zembu, has an interest in minimizing the strain of porting\n> applications from Oracle to Postgres. I assume that the Postgres team\n> also has that interest, within reason. But I don't know for sure.)\n\n Who hasn't? O.K., you convinced me.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 26 Apr 2001 13:53:11 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Cursor support in pl/pg"
}
] |
[
{
"msg_contents": "I want to extract table schema information. I've looked at \nsrc/bin/psql/describe.c, but I cannot determine the datatype 'serial' and \n'references' from pg_*. I understand that triggers are generated for serial \nand references, so how can I determine the full schema from my Perl \napplication?\n\nthanks,\nvalter\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Thu, 26 Apr 2001 02:40:20 +0200",
"msg_from": "\"V. M.\" <txian@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Schema Issue"
}
] |
[
{
"msg_contents": "Just a little note of pseudo humor.\n\nWe could not start postmaster (pg version 7.0.3) and I could not figure out why. I\nchecked directory permissions, all that. It kept complaining that it could not\ncreate the pid file.\n\nI did not understand why it would not work. I grepped through all the postgres\nsource to find that this error could also be due to an inability to write the\npid file. \n\nI checked the disk space, of which there was none. Doh! I should have just done\na \"df\" at the start.\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 25 Apr 2001 21:34:38 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Open source is great, but too tempting"
}
] |
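The moral of the anecdote above can be captured as a first-line check: when a daemon reports it "cannot create" its pid file and permissions look fine, check for a full filesystem before grepping source code. A minimal sketch (the mount point `/` is an example):

```shell
# Report available space on the filesystem holding /; a value of 0
# here explains most "cannot create file" errors.
avail=$(df -kP / | awk 'NR==2 {print $4}')
echo "free 1K-blocks on /: $avail"
```

In the pid-file case, the path to check would be the data directory's filesystem rather than `/`.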
[
{
"msg_contents": "Dear friends,\n\nPgAdmin provides new features for dropping/creating functions, triggers and \nviews and relinking the whole system without restarting PostgreSQL.\nFor now, this new feature is only available as a patch to PgAdmin \nhttp://www.greatbridge.org/project/pgadmin/patch/patchlist.php.\n\nIf you are curious about reading the code, download and install the patch.\nDon't use the patch on production systems, because there are still a lot of \nbugs.\nOtherwise, wait until Dave Page integrates the code in the CVS.\nI will inform you by mail when you can start using this new feature.\n\nWhen all this is implemented in PgAdmin, I will also provide you with \nPL/pgSQL code to perform the same things server-side.\nFor now, I don't really know if we can run all this stuff in a single \ntransaction. Any idea?\n\nHelp is welcome on the PgAdmin project, we need volunteers and feedback \nhttp://www.greatbridge.org/pipermail/pgadmin-hackers/.\nWe are always looking for beta testers, new members and developers.\n\nSome great new features that will transform PgAdmin into the most advanced \nIDE for PostgreSQL:\n- query/code loader in WDDX format for use in Php/C++/Perl/VB.\n- postgresql packages.\n- syntax checking and highlighting.\n\nAnd also:\n- PL/SQL Universal Schema: a set of PL/SQL functions to administer/migrate \nfrom/to Oracle, PostgreSQL and Ms SQL Server transparently.\nIf you are interested in PL/SQL Universal Schema, why not open a new branch \non the PostgreSQL CVS; otherwise I will host the project on Greatbridge.\nEnvironments such as Php or Kylix should be able to administer databases \nwithout the use of odbc and Microsoft stuff.\n- PostgreSQL WDDX support (seems easy, is it really?).\n- I do not agree that the ALTER FUNCTION etc... should be delayed to the 7.2 \nrelease. For now, it is impossible to do serious development without \nbreaking dependencies somewhere. PostgreSQL is intended for end-users, right?\n\nOther information: I do not work for Greatbridge; these are complete and \npure GPL open source contributions.\nSo, don't hesitate to visit http://www.greatbridge.org which provides \nexcellent tools for developers.\n\nGreetings from Jean-Michel POURE, Paris, France\nAxitrad, CEO\n\n",
"msg_date": "Thu, 26 Apr 2001 10:36:20 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Important news about PgAdmin >>>> Drop/create functions,\n\ttriggers and views and relink the whole system without restarting\n\tPostgreSQL"
}
] |
[
{
"msg_contents": "I am getting a bit concerned about Postgres 7.1 performance with multiple\nconnections. Postgres does not seem to scaling very well. Below there is a list\nof outputs from pgbench with different number of clients, you will see that\npostgres' performance in the benchmark drops with each new connection.\nShouldn't the tps stay fairly constant?\n\nI am using pgbench because I saw this performance issue with a project I was\ndeveloping. I decided to try doing operations in parallel, thinking that\npostgres would scale. What I found was the more machines I added to the task,\nthe slower the processing was. \n\nAnyone have any ideas? Is this how it is supposed to be?\n\nMy postmaster start line looks like:\n/usr/local/pgsql/bin/postmaster -A0 -N 24 -B 4096 -i -S -D/sqlvol/pgdev -o -F\n-fs -S 2048\n\nThe database is on a dedicated PCI-IDE/66 promise card, with a 5400rpm maxtor\ndrive, not the best hardware, I grant you, but that should have little to do\nwith the scaling aspect.\n\nI am running redhat linux 7.0, kernel 2.4.3. 
512M ram, dual PIII 600mhz.\n\n[markw@snoopy pgbench]$ ./pgbench -v -c 1 -t 30 pgbench\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 1\nnumber of transactions per client: 30\nnumber of transactions actually processed: 30/30\ntps = 218.165952(including connections establishing)\ntps = 245.062001(excluding connections establishing)\n[markw@snoopy pgbench]$ ./pgbench -v -c 2 -t 30 pgbench\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 2\nnumber of transactions per client: 30\nnumber of transactions actually processed: 60/60\ntps = 200.861024(including connections establishing)\ntps = 221.175326(excluding connections establishing)\n[markw@snoopy pgbench]$ ./pgbench -v -c 3 -t 30 pgbench\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 3\nnumber of transactions per client: 30\nnumber of transactions actually processed: 90/90\ntps = 144.053242(including connections establishing)\ntps = 154.083205(excluding connections establishing)\n[markw@snoopy pgbench]$ ./pgbench -v -c 4 -t 30 pgbench\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 4\nnumber of transactions per client: 30\nnumber of transactions actually processed: 120/120\ntps = 129.709537(including connections establishing)\ntps = 137.852284(excluding connections establishing)\n[markw@snoopy pgbench]$ ./pgbench -v -c 5 -t 30 pgbench\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 5\nnumber of transactions per client: 30\nnumber of transactions actually processed: 150/150\ntps = 103.569559(including connections establishing)\ntps = 108.535287(excluding connections establishing)\n\n\n.......\n\n[markw@snoopy pgbench]$ ./pgbench 
-v -c 20 -t 30 pgbench\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 20\nnumber of transactions per client: 30\nnumber of transactions actually processed: 600/600\ntps = 40.600209(including connections establishing)\ntps = 41.352773(excluding connections establishing)\n",
"msg_date": "Thu, 26 Apr 2001 08:39:25 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "scaling multiple connections"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I am getting a bit concerned about Postgres 7.1 performance with\n> multiple connections. Postgres does not seem to scaling very\n> well. Below there is a list of outputs from pgbench with different\n> number of clients, you will see that postgres' performance in the\n> benchmark drops with each new connection. Shouldn't the tps stay\n> fairly constant?\n\nThere was quite a long thread about this in pghackers back in Jan/Feb\n(or so). You might want to review it. One thing I recall is that\nyou need a \"scaling factor\" well above 1 if you want meaningful results\n--- at scale factor 1, all of the transactions want to update the same\nrow, so of course there's no parallelism and a lot of lock contention.\n\nThe default WAL tuning parameters (COMMIT_DELAY, WAL_SYNC_METHOD, and\nfriends) are probably not set optimally in 7.1. We are hoping to hear\nabout some real-world performance results so that we can tweak them in\nfuture releases. I do not trust benchmarks as simplistic as pgbench for\ndoing that kind of tweaking, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2001 11:53:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: scaling multiple connections "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I am getting a bit concerned about Postgres 7.1 performance with\n> > multiple connections. Postgres does not seem to scaling very\n> > well. Below there is a list of outputs from pgbench with different\n> > number of clients, you will see that postgres' performance in the\n> > benchmark drops with each new connection. Shouldn't the tps stay\n> > fairly constant?\n> \n> There was quite a long thread about this in pghackers back in Jan/Feb\n> (or so). You might want to review it. One thing I recall is that\n> you need a \"scaling factor\" well above 1 if you want meaningful results\n> --- at scale factor 1, all of the transactions want to update the same\n> row, so of course there's no parallelism and a lot of lock contention.\n> \n> The default WAL tuning parameters (COMMIT_DELAY, WAL_SYNC_METHOD, and\n> friends) are probably not set optimally in 7.1. We are hoping to hear\n> about some real-world performance results so that we can tweak them in\n> future releases. I do not trust benchmarks as simplistic as pgbench for\n> doing that kind of tweaking, however.\n> \nI agree with you about the benchmarks, but it does behave similar to what I\nhave in my app, which is why I used it for an example.\n\nIf you are familiar with cddb (actually freedb.org) I am taking that data in\nputting it into postgres. The steps are: (pseudo code)\n\nselect nextval('cdid_seq');\n\nbegin;\n\ninsert into titles (...) values (...);\n\nfor(i=0; i < tracks; i++)\n\tinsert into tracks (...) values (...);\n\ncommit;\n\n\nWhen running stand alone on my machine, it will hovers around 130 full CDs per\nsecond. When I start two processes it drops to fewer than 100 inserts per\nsecond. When I add another, it drops even more. 
The results I posted with\npgbench pretty much showed what I was seeing in my program.\n\nI hacked the output of pgbench to get me tabbed delimited fields to chart, but\nit is easier to look at, see the results below. This is the same build and same\nstartup scripts on the two different machines. I know this isn't exactly\nscientific, but I have a few bells going off suggesting that postgres has some\nSMP scaling issues.\n\n\nMy Dual PIII 600MHZ, 500M RAM, Linux 2.4.3 SMP\npg_xlog is pointed to a different drive than is base.\nI/O Promise dual IDE/66, xlog on one drive, base on another.\n\ncount transaction\ttime (excluding connection)\n1 32 175.116\n2 32 138.288\n3 32 102.890\n4 32 88.243\n5 32 77.024\n6 32 62.648\n7 32 61.231\n8 32 60.017\n9 32 56.034\n10 32 57.854\n11 32 50.812\n12 32 53.019\n13 32 50.289\n14 32 46.421\n15 32 44.496\n16 32 45.297\n17 32 41.725\n18 32 46.048\n19 32 45.007\n20 32 41.584\n21 32 43.420\n22 32 39.640\n23 32 43.250\n24 32 41.617\n25 32 42.511\n26 32 38.369\n27 32 38.919\n28 32 38.813\n29 32 39.242\n30 32 39.859\n31 32 37.938\n32 32 41.516\n\n\nSingle processor PII 450, 256M, Linux 2.2.16\npg_xlog pointing to different drive than base\nI/O Adaptec 2940, Two seagate barracudas.\n\ncount transaction\ttime (excluding connection)\n\n1 32 154.539\n2 32 143.609\n3 32 144.608\n4 32 141.718\n5 32 128.759\n6 32 154.388\n7 32 144.097\n8 32 149.828\n9 32 143.092\n10 32 146.548\n11 32 141.613\n12 32 139.692\n13 32 137.425\n14 32 137.227\n15 32 134.669\n16 32 128.277\n17 32 127.440\n18 32 121.224\n19 32 121.915\n20 32 120.740\n21 32 118.562\n22 32 116.271\n23 32 113.883\n24 32 113.558\n25 32 109.293\n26 32 108.782\n27 32 108.796\n28 32 105.684\n29 32 103.614\n30 32 102.232\n31 32 100.514\n32 32 99.339\n",
"msg_date": "Thu, 26 Apr 2001 13:45:48 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: scaling multiple connections"
},
{
"msg_contents": "At 08:39 AM 26-04-2001 -0400, mlw wrote:\n>I am getting a bit concerned about Postgres 7.1 performance with multiple\n>connections. Postgres does not seem to scaling very well. Below there is a\nlist\n>of outputs from pgbench with different number of clients, you will see that\n>\n>My postmaster start line looks like:\n>/usr/local/pgsql/bin/postmaster -A0 -N 24 -B 4096 -i -S -D/sqlvol/pgdev -o -F\n>-fs -S 2048\n\nMaybe it's the -fs in your start up line.\n\nI tried a similar start line as yours but without -fs and I get consistent\ntps values for pgbench.\n\n./pgbench -v -c 1 -t 30 test\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 1\nnumber of transactions per client: 30\nnumber of transactions actually processed: 30/30\ntps = 161.938949(including connections establishing)\ntps = 180.060140(excluding connections establishing)\n[lylyeoh@nimbus pgbench]$ ./pgbench -v -c 3 -t 30 test\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 3\nnumber of transactions per client: 30\nnumber of transactions actually processed: 90/90\ntps = 172.909666(including connections establishing)\ntps = 189.845782(excluding connections establishing)\n[lylyeoh@nimbus pgbench]$ ./pgbench -v -c 4 -t 30 test\nstarting vacuum...end.\nstarting full vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 4\nnumber of transactions per client: 30\nnumber of transactions actually processed: 120/120\ntps = 172.909417(including connections establishing)\ntps = 189.319538(excluding connections establishing)\n\nTested machine is a Dell Poweredge 1300 uniprocessor PIII 500MHz with 128MB\nRAM, and a single 9GB HDD.\n\nWith -fs there's a decrease, but not as marked as your case. So not sure if\nit's really the problem.\n\nTry that out.\n\nCheerio,\nLink.\n\n",
"msg_date": "Fri, 27 Apr 2001 18:24:30 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: scaling multiple connections"
}
] |
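In the thread above, mlw mentions hacking the output of pgbench into tab-delimited fields for charting. As a minimal sketch of that kind of post-processing (the field names and regexes are inferred from the pgbench output quoted in the thread, not from any pgbench parsing API):

```python
import re

def parse_pgbench(output):
    """Extract (clients, tps_including, tps_excluding) from one pgbench run.

    The patterns match the pgbench 7.x output format quoted in the thread,
    e.g. "tps = 103.569559(including connections establishing)".
    """
    clients = int(re.search(r"number of clients: (\d+)", output).group(1))
    tps_inc = float(re.search(
        r"tps = ([\d.]+)\(including connections establishing\)", output).group(1))
    tps_exc = float(re.search(
        r"tps = ([\d.]+)\(excluding connections establishing\)", output).group(1))
    return clients, tps_inc, tps_exc

# Sample taken verbatim from the 5-client run in the thread.
sample = """\
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 5
number of transactions per client: 30
number of transactions actually processed: 150/150
tps = 103.569559(including connections establishing)
tps = 108.535287(excluding connections establishing)
"""
# Emit the tab-delimited row mlw describes charting.
print("\t".join(str(v) for v in parse_pgbench(sample)))
```

Note Tom Lane's caveat still applies to the numbers themselves: at scaling factor 1 every client updates the same branch row, so falling tps mostly measures lock contention, not backend scalability.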
[
{
"msg_contents": "i know \"password\" can be used in creating/altering user\ninformation (as used via GRANT and REVOKE) but is there any\nfacility within postgres to CRYPT() a value?\n\n\tcreate rule new_folk as on insert to view_folk do instead\n\t\tinsert into folk_table\n\t\t\t(created,login,password)\n\t\t\tvalues\n\t\t\t(current_timestamp,new.login,CRYPT(new.password))\n\t\t\t;\n\nor must this be done (say, in perl) before postgres sees it?\n\n-- \nwill@serensoft.com\nhttp://sourceforge.net/projects/newbiedoc -- we need your brain!\nhttp://www.dontUthink.com/ -- your brain needs us!\n",
"msg_date": "Thu, 26 Apr 2001 09:15:45 -0500",
"msg_from": "will trillich <will@serensoft.com>",
"msg_from_op": true,
"msg_subject": "crypt(table.field) ?"
},
{
"msg_contents": "will trillich writes:\n\n> i know \"password\" can be used in creating/altering user\n> information (as used via GRANT and REVOKE) but is there any\n> facility within postgres to CRYPT() a value?\n\nSee contrib/pgcrypto for hashing functions.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 26 Apr 2001 17:20:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: crypt(table.field) ?"
},
{
"msg_contents": "On Thu, Apr 26, 2001 at 09:15:45AM -0500, will trillich wrote:\n> i know \"password\" can be used in creating/altering user\n> information (as used via GRANT and REVOKE) but is there any\n> facility within postgres to CRYPT() a value?\n\nAt the moment no. You should patch your PostgreSQL source for\nthat. There is a patch in techdocs site which imports system\ncrypt to SQL level and there is my pgcrypto package which does\nthis and more...\n\n http://www.l-t.ee/marko/pgsql/pgcrypto-0.3.tar.gz\n\n-- \nmarko\n\n",
"msg_date": "Thu, 26 Apr 2001 18:13:59 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: crypt(table.field) ?"
},
{
"msg_contents": "On Thu, Apr 26, 2001 at 05:20:53PM +0200, Peter Eisentraut wrote:\n> will trillich writes:\n> \n> > i know \"password\" can be used in creating/altering user\n> > information (as used via GRANT and REVOKE) but is there any\n> > facility within postgres to CRYPT() a value?\n> \n> See contrib/pgcrypto for hashing functions.\n\nI've got 7.0.3potato on my debian system, and i've also done\n\n\tapt-get install postgresql-contrib\n\nwhich looks like it's got lots of meat to it, but\n\n\tdpkg -L postgresql-contrib | grep crypt\n\nshows nada.\n\nCare to explain -- in terms a Debian newbie might grok --\nwhat \"contrib/pgcrypto\" means?\n\n-- \ndon't visit this page. it's bad for you. take my expert word for it.\nhttp://www.salon.com/people/col/pagl/2001/03/21/spring/index1.html\n\nwill@serensoft.com\nhttp://sourceforge.net/projects/newbiedoc -- we need your brain!\nhttp://www.dontUthink.com/ -- your brain needs us!\n",
"msg_date": "Thu, 26 Apr 2001 14:01:46 -0500",
"msg_from": "will trillich <will@serensoft.com>",
"msg_from_op": true,
"msg_subject": "Re: crypt(table.field) ?"
},
{
"msg_contents": "will trillich <will@serensoft.com> wrote:\n>On Thu, Apr 26, 2001 at 05:20:53PM +0200, Peter Eisentraut wrote:\n>> See contrib/pgcrypto for hashing functions.\n\n>Care to explain -- in terms a Debian newbie might grok -- what\n>\"contrib/pgcrypto\" means?\n\nPeter is referring to a directory in the PostgreSQL sources, not to a part\nof a binary package. \"apt-get source postgresql\" and look around.\n\nHTH,\nRay\n-- \nDon't think of yourself as an organic pain collector racing toward oblivion.\n\tDogbert\n\n",
"msg_date": "Thu, 26 Apr 2001 19:37:10 +0000 (UTC)",
"msg_from": "jdassen@cistron.nl (J.H.M. Dassen (Ray))",
"msg_from_op": false,
"msg_subject": "Re: crypt(table.field) ?"
},
{
"msg_contents": "On Thu, Apr 26, 2001 at 05:20:53PM +0200, Peter Eisentraut wrote:\n> will trillich writes:\n> > i know \"password\" can be used in creating/altering user\n> > information (as used via GRANT and REVOKE) but is there any\n> > facility within postgres to CRYPT() a value?\n> \n> See contrib/pgcrypto for hashing functions.\n\nProblem is the hashing functions are not good for\npassword storage.\n\n\nA general question: what is the status on patch acceptance\nnow, after 7.1 is successfully released? I did not\nwant to fuzz around with new code when 7.1 was in freeze,\nbut what is the status now?\n\nSpecifically - pgcrypto current state:\n\nIn the pgsql/contrib:\n\n* digest() / encode() - stable.\n\nIn my pgcrypto separate release:\n\n* digest() / encode() / hmac() - stable.\n I have changed the internal interfaces compared to main CVS.\n\n* crypt() / gen_salt() - stable. DES/MD5/Blowfish crypt()\n (Blowfish is unreleased). Code seems to be working quite\n well.\n\n* encrypt() / decrypt() - unstable. Not in the 'buggy'-sense,\n but the 0.3 encrypt() is unsatisfactory for long-term storage\n and security and compatibility. Also their spec is confusing\n to users. In the next release they will be renamed\n raw_encrypt() / raw_decrypt() as they really are interfaces\n to raw ciphers. I keep them coz they are good for testing\n pgcrypto code ;) and also they are ok for crypting short\n strings.\n\n* future: encrypt() / decrypt() will be minimal implementation\n of OpenPGP standard (RFC2440). \"Symmetrically Encrypted Data\"\n with passwords. (Is it too big? - The crypted data needs some\n structure and I dont think inventing some own format is good.)\n\nNow for this OpenPGP stuff I dont have ATM not even\nalpha-quality code. So full release takes some time.\nBut hmac() and crypt() code is quite ok and there is no point\non me sitting on it alone.\n\nSo I would like to submit the mostly ready parts to main\ntree. 
When is the right time for it?\n\n\n-- \nmarko\n\n",
"msg_date": "Thu, 26 Apr 2001 22:03:09 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "status after 7.1 and pgcrypto update / crypt(table.field) ?"
},
{
"msg_contents": "On Thu, Apr 26, 2001 at 02:01:46PM -0500, will trillich wrote:\n> On Thu, Apr 26, 2001 at 05:20:53PM +0200, Peter Eisentraut wrote:\n> > will trillich writes:\n> > \n> > > i know \"password\" can be used in creating/altering user\n> > > information (as used via GRANT and REVOKE) but is there any\n> > > facility within postgres to CRYPT() a value?\n> > \n> > See contrib/pgcrypto for hashing functions.\n> \n> I've got 7.0.3potato on my debian system, and i've also done\n\n...\n\n> Care to explain -- in terms a Debian newbie might grok --\n> what \"contrib/pgcrypto\" means?\n\nFirst contrib/pgcrypto is 7.1-only. It is supposed to be a\nplace for cryptography-related functions. At the moment it\ncontains only hashing and ascii-conversion functions: digest(),\nencode(), decode().\n\nNow I have released my newer code as separate release (they were\nnot fit for 7.1-in-freeze) and it contains more stuff:\n\ncrypt(password, salt)\n\t- like the crypt(3) in UN*X-like systems for password\n\t crypting - DES and MD5-based crypt is supported.\n\ngen_salt(type) for above crypt() as generating salts with only\n\tSQL is pain.\n\nhmac(key, hash_type) is a implementation of RFC2104 \"Hashed\n\tMessage Authentication Code\". Sorta passworded-hash.\n\nencrypt(data, key, type) with decrypt() - access to raw ciphers\n\twith little bit more. They should be used only when you\n\tknow what you are doing. In the next release they will\n\tbe renamed to raw_encrypt()/raw_decrypt() and much\n\tbetter encrypt()/decrypt() will be provided based on\n\tOpenPGP (RFC2440) - I am still developing this.\n\nAlso pgcrypto-0.3 should work with both 7.0 and 7.1.\n\n-- \nmarko\n\n",
"msg_date": "Thu, 26 Apr 2001 22:32:27 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: crypt(table.field) ?"
},
{
"msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> A general question: what is the status on patch acceptance\n> now, after 7.1 is successfully released? I did not\n> want to fuzz around with new code when 7.1 was in freeze,\n> but what is the status now?\n\nWe're still in bug-fixes-only mode. I think the plan is to fork off\na 7.1 stable branch next week, and after that the floodgates will be\nopen for 7.2 development.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2001 18:24:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: status after 7.1 and pgcrypto update / crypt(table.field) ? "
},
{
"msg_contents": "<quote who=\"J.H.M. Dassen (Ray)\">\n\n> will trillich <will@serensoft.com> wrote:\n> \n> >Care to explain -- in terms a Debian newbie might grok -- what\n> >\"contrib/pgcrypto\" means?\n> \n> Peter is referring to a directory in the PostgreSQL sources, not to a part\n> of a binary package. \"apt-get source postgresql\" and look around.\n\nYou'll often find things like these in the /usr/share/doc/<package>/examples\ndirectory under Debian. There's always a few goodies in there anyway. :)\n\n- Jeff\n\n-- \n o/~ In spite of all those keystrokes, you're addicted to vim. \n *ka-ching!* o/~ \n",
"msg_date": "Sun, 29 Apr 2001 15:04:18 +1000",
"msg_from": "Jeff Waugh <jdub@aphid.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: crypt(table.field) ?"
},
{
"msg_contents": "On Sun, Apr 29, 2001 at 03:04:18PM +1000, Jeff Waugh wrote:\n> <quote who=\"J.H.M. Dassen (Ray)\">\n> \n> > will trillich <will@serensoft.com> wrote:\n> > \n> > >Care to explain -- in terms a Debian newbie might grok -- what\n> > >\"contrib/pgcrypto\" means?\n> > \n> > Peter is referring to a directory in the PostgreSQL sources, not to a part\n> > of a binary package. \"apt-get source postgresql\" and look around.\n> \n> You'll often find things like these in the /usr/share/doc/<package>/examples\n> directory under Debian. There's always a few goodies in there anyway. :)\n\naha. there's \"apt-get install postgresql-crypt\" but for 7.0.3\nthere's no crypt yet. i'll wait. :)\n\n-- \ndon't visit this page. it's bad for you. take my expert word for it.\nhttp://www.salon.com/people/col/pagl/2001/03/21/spring/index1.html\n\nwill@serensoft.com\nhttp://sourceforge.net/projects/newbiedoc -- we need your brain!\nhttp://www.dontUthink.com/ -- your brain needs us!\n",
"msg_date": "Sun, 29 Apr 2001 01:09:33 -0500",
"msg_from": "will trillich <will@serensoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: crypt(table.field) ?"
}
] |
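The thread above ends with will trillich's original alternative still open: hashing the password in the application "before postgres sees it", since his 7.0.3 install lacks pgcrypto. A hedged sketch of that client-side route using only the Python standard library (the salt size, iteration count, and digest are illustrative choices, not anything prescribed in the thread):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password application-side before it is stored in the database.

    Returns (salt, digest); both would be stored in the password column.
    16-byte salt, SHA-256, and 100000 iterations are assumed parameters.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

This sidesteps the server entirely, at the cost of every client having to agree on the scheme; with pgcrypto available, Marko's `crypt(password, gen_salt(type))` keeps that logic inside SQL instead.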
[
{
"msg_contents": "\nI want to extract tables schema information, i've looked at\nsrc/bin/psql/describe.c but i cannot determine the datatype\n'serial' and\n'references' from pg_*, i understand that triggers are generated for\nserial\nand references, so how i can understand from my perl application the\nfull\nschema ?\n\nthanks,\nvalter\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Thu, 26 Apr 2001 16:46:12 +0200",
"msg_from": "\"V. M.\" <txian@hotmail.com>",
"msg_from_op": true,
"msg_subject": "unanswered: Schema Issue"
},
{
"msg_contents": "On Thu, 26 Apr 2001, V. M. wrote:\n\n> \n> I want to extract tables schema information, i've looked at\n> src/bin/psql/describe.c but i cannot determine the datatype\n> 'serial' and\n> 'references' from pg_*, i understand that triggers are generated for\n> serial\n> and references, so how i can understand from my perl application the\n> full\n> schema ?\n\nSERIALs are just integers (int4). They don't use a trigger, but use a\nsequence\nas a default value.\n\nREFERENCES are not a type of data, but a foreign key/primary key\nrelationship. There's still a data type (int, text, etc.)\n\nYou can derive schema info from the system catalogs. Use psql with -E for\nexamples, or look in the Developer Manual.\n\nHTH,\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Thu, 26 Apr 2001 13:51:26 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: unanswered: Schema Issue"
}
] |
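Joel Burton's answer above is the key to the schema question: a SERIAL is just an int4 whose default draws from a sequence. A minimal sketch of applying that rule from a client application (the two arguments stand for the type name and column-default expression you would pull out of the system catalogs, e.g. via the queries `psql -E` echoes; the exact catalog query is not shown here):

```python
def is_serial(type_name, default_expr):
    """Classify a column as SERIAL-like, per the rule in the thread:
    integer type plus a default of the form nextval('some_seq'...).

    default_expr may be None for columns with no default.
    """
    return (
        type_name == "int4"
        and bool(default_expr)
        and default_expr.lstrip().startswith("nextval(")
    )
```

REFERENCES has no such shortcut, since (pre-7.3) foreign keys surface as RI triggers rather than a dedicated catalog constraint row, so a Perl application has to inspect the triggers on the table instead.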
[
{
"msg_contents": "\n> We are currently calling 'vacuum' from within our code, using the psql++\n> PgDatabase ExecCommandOk() to send the SQL statement 'vacuum'. I seems to\n> work when we only have one process running, but when two processes (both\n> invoking the vacuum) are running it immediately hangs. Some postgres\n> processes exist... one 'idle' and one 'VACUUM'. The applications hang.\n> I tried using the PgTransaction class to invoke the SQL but received an\n> error stating that vacuum cannot be run in a transaction.\n> Any ideas?\n> \n> Sandy Barnes\n> email sandy.barnes@honeywell.com\n> \n",
"msg_date": "Thu, 26 Apr 2001 09:36:51 -0700",
"msg_from": "\"Barnes, Sandy (Sandra)\" <Sandy.Barnes@Honeywell.com>",
"msg_from_op": true,
"msg_subject": "applications hang when calling 'vacuum' "
},
{
"msg_contents": "On Thursday 26 April 2001 23:36, Barnes, Sandy (Sandra) wrote:\n> > We are currently calling 'vacuum' from within our code, using the psql++\n> > PgDatabase ExecCommandOk() to send the SQL statement 'vacuum'. I seems\n> > to work when we only have one process running, but when two processes\n> > (both invoking the vacuum) are running it immediately hangs. Some\n> > postgres processes exist... one 'idle' and one 'VACUUM'. The\n> > applications hang. I tried using the PgTransaction class to invoke the\n> > SQL but received an error stating that vacuum cannot be run in a\n> > transaction.\n> > Any ideas?\n\nCan confirm this. This is also the case on my systems. I already told about \nthis some time ago. There was no reply...\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Fri, 27 Apr 2001 10:23:39 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": false,
"msg_subject": "Re: applications hang when calling 'vacuum'"
}
] |