[
{
"msg_contents": "Hackers,\n\nI'm starting to read the existing algorithms for btree index shrinking.\nRight now I'm at 1996 SIGMOD proceedings, Zou and Salzberg \"On-line\nReorganization of Sparsely-populated B+-trees\".\n\nWhat I want to know is how different from B+-trees are PostgreSQL\nB-trees; I've read the README in src/backend/access/nbtree/, and it\nindicates some areas in which they are different from B-Trees (Lehmann\nand Yao's?). But I don't really know how B-Trees are different from\nB+-Trees (is my ignorance starting to show?). Where can I read about\nthat?\n\nAlso, Tom said some time ago that there is some literature on the\nconcurrent page merging camp. I haven't been able to found anything\nelse than the proceedings I have right now... is there something else?\nI'm not used to searching for this kind of things, and ACM won't let me\nin (althought my university has a subscription, I can't get any papers\non SIGMOD).\n\nThank you,\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Un poeta es un mundo encerrado en un hombre\" (Victor Hugo)\n",
"msg_date": "Thu, 12 Sep 2002 23:54:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": true,
"msg_subject": "btree page merging"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> What I want to know is how different from B+-trees are PostgreSQL\n> B-trees;\n\nPG's \"btrees\" are in fact B+-trees according to the more formal\nacademic notation. IIRC the + just indicates allowing any number\nof keys/downlinks in an internal tree node.\n\n I've read the README in src/backend/access/nbtree/, and it\n> indicates some areas in which they are different from B-Trees (Lehmann\n> and Yao's?).\n\nThe L-Y paper omits some details, and it makes some unrealistic\nassumptions like all keys being the same size. nbtree/README is\njust trying to tell you how we filled in those holes. It's not really\na new algorithm, just L-Y brought from academic to production status.\n\n> I'm not used to searching for this kind of things, and ACM won't let me\n> in (althought my university has a subscription, I can't get any papers\n> on SIGMOD).\n\nComplain --- I have half a dozen btree-related papers stashed that\nI got from ACM's online library. They are an essential resource.\n\nBTW, SIGMOD is presently selling DVDs with every durn paper they ever\npublished for the last couple or three decades. I was fortunate enough\nto get a set for US$25 when I went to their conference this summer.\nThe price for non-members is about triple that, but it's still a steal.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 01:11:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: btree page merging "
}
]
[
{
"msg_contents": "Hello!\n\nI recommend two good sources of information in English:\n\nhttp://www.nist.gov/dads/ further look for balanced trees and kins\n\n(BTW, there is some other interesting algorithms alike patricia).\n\nand well-known Donald Knuth's monography, namely, volume 3. (I mean\n\"The Art of Computer Programming\".)\n\nits description can be found at\nhttp://www-cs-faculty.stanford.edu/~knuth/taocp.htm\n\nYou can also look at how MUMPS (where B+trees is the \"heart\" of\nDBMS) handles B+trees if curious:\n\nhttp://math-cs.cns.uni.edu/~okane/cgi-bin/newpres/index.cgi?array=lib&ml=2&a1=1002+Mumps+Language+Research&a2=1011+The+Mumps+Language\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n",
"msg_date": "Fri, 13 Sep 2002 11:08:48 +0700 (NOVST)",
"msg_from": "Yury Bokhoncovich <byg@center-f1.ru>",
"msg_from_op": true,
"msg_subject": "btree page merging"
}
]
[
{
"msg_contents": "Hi everyone,\n\nAn interesting development.\n\nAfilias and LibertyRMS, the people who've been happily running the .info\nnamespace on PostgreSQL servers, are the technical backend of the ISOC\napplication for management of the .org\nnamespace. However, ICANN is asking for more detail about the backend\ndatabase, to\nprove it is an \"appropriate choice for a mission critical\napplications\". In particular, ICANN wants proof that other companies\nare using PostgreSQL for Mission Critical things.\n\nThe Oracle/DB2/Sybase/etc guys have an advantage here because they\nalready have a bunch of case studies prepared and we're only beginning\nto get these together.\n\nAfilias and LibertyRMS are looking to pull as much relevant info\ntogether as possible and prove beyond a shadow of a doubt that\nPostgreSQL is up to the task, in time for their presentation on\nSaturday.\n\nThe kind of thing they're after is stuff that executives will be\ninterested in. i.e. Case Studies and examples of other businesses\nrunning PostgreSQL happily for Mission Critical stuff, under high load,\nand getting support when they need it, etc.\n\nThe questions that ICANN have asked are online here:\n\nhttp://www.icann.org/tlds/org/questions-to-applicants-13.htm\n\nAs you can see there is only a 2 day timeframe in which Afilias &\nLibertyRMS can get the info they need together, including today, so\nthere's not much time.\n\nThe details of the ISOC application itself is online here if anyone is\ninterested:\n\nhttp://www.icann.org/tlds/org/applications/isoc/\n\nA point to make clear is this is not in any way an endorsement of their\napplication. Some of the other places bidding also have significant\ninterests in PostgreSQL. 
The only thing we're interested in here is\nshowing off that PostgreSQL itself is up to the task.\n\nCan people please come forward to help them out with info about the\nreliability and performance of PostgreSQL in Mission Critical\nsituations?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 13 Sep 2002 15:28:12 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "An opportunity to prove PostgreSQL and our requirement of Case Study\n\tinfo"
},
{
"msg_contents": "\nhttp://linux.oreillynet.com/pub/a/linux/2002/07/16/drake.html?page=2\n\nI see in there a mention of the American Chemical Society and their database \nof over a terabyte. Also there's a mention of BASF, another company that \nlooks big to me.\n\nAnd I use postgres and I get great support on this list here, but I don't \nthink ICANN cares about that :)\n\nAnd if nothing else, isn't the .info registry itself a good example?\n\nRegards,\n\tJeff\n\nOn Thursday 12 September 2002 10:28 pm, Justin Clift wrote:\n> Hi everyone,\n>\n> An interesting development.\n>\n> Afilias and LibertyRMS, the people who've been happily running the .info\n> namespace on PostgreSQL servers, are the technical backend of the ISOC\n> application for management of the .org\n> namespace. However, ICANN is asking for more detail about the backend\n> database, to\n> prove it is an \"appropriate choice for a mission critical\n> applications\". In particular, ICANN wants proof that other companies\n> are using PostgreSQL for Mission Critical things.\n>\n> The Oracle/DB2/Sybase/etc guys have an advantage here because they\n> already have a bunch of case studies prepared and we're only beginning\n> to get these together.\n>\n> Afilias and LibertyRMS are looking to pull as much relevant info\n> together as possible and prove beyond a shadow of a doubt that\n> PostgreSQL is up to the task, in time for their presentation on\n> Saturday.\n>\n> The kind of thing they're after is stuff that executives will be\n> interested in. i.e. 
Case Studies and examples of other businesses\n> running PostgreSQL happily for Mission Critical stuff, under high load,\n> and getting support when they need it, etc.\n>\n> The questions that ICANN have asked are online here:\n>\n> http://www.icann.org/tlds/org/questions-to-applicants-13.htm\n>\n> As you can see there is only a 2 day timeframe in which Afilias &\n> LibertyRMS can get the info they need together, including today, so\n> there's not much time.\n>\n> The details of the ISOC application itself is online here if anyone is\n> interested:\n>\n> http://www.icann.org/tlds/org/applications/isoc/\n>\n> A point to make clear is this is not in any way an endorsement of their\n> application. Some of the other places bidding also have significant\n> interests in PostgreSQL. The only thing we're interested in here is\n> showing off that PostgreSQL itself is up to the task.\n>\n> Can people please come forward to help them out with info about the\n> reliability and performance of PostgreSQL in Mission Critical\n> situations?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n\n",
"msg_date": "Thu, 12 Sep 2002 23:53:00 -0700",
"msg_from": "Jeff Davis <list-pgsql-general@empires.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] An opportunity to prove PostgreSQL and our requirement\n\tof Case Study info"
},
{
"msg_contents": "On Fri, Sep 13, 2002 at 03:28:12PM +1000,\n Justin Clift <justin@postgresql.org> wrote \n a message of 65 lines which said:\n\n> Afilias and LibertyRMS, the people who've been happily running the .info\n> namespace on PostgreSQL servers, are the technical backend of the ISOC\n> application for management of the .org\n> namespace. However, ICANN is asking for more detail about the backend\n> database, to\n> prove it is an \"appropriate choice for a mission critical\n> applications\". \n\nI'm sorry, Andrew, that I cannot help you as yo help many people on\nthe PostgreSQL mailing list, but we have the same problem. EUREG\n<URL:http://www.eureg.org/> applies for the management of the future\n.eu, we will use PostgreSQL also, and the European Commission asks us\nthe same questions. So, you can only say that other registries\nconsider using PostgreSQL.\n\nI would be happy to see the answer you gave to ICANN, it would be very\nuseful for us, too.\n \n",
"msg_date": "Fri, 13 Sep 2002 15:06:01 +0300",
"msg_from": "Stephane Bortzmeyer <bortzmeyer@eureg.org>",
"msg_from_op": false,
"msg_subject": "Re: An opportunity to prove PostgreSQL and our requirement of Case\n\tStudy info [EUreg #11]"
},
{
"msg_contents": "On Fri, Sep 13, 2002 at 03:28:12PM +1000, Justin Clift wrote:\n\n> Afilias and LibertyRMS, the people who've been happily running the\n> .info namespace on PostgreSQL servers, are the technical backend of\n> the ISOC application for management of the .org namespace. \n> However, ICANN is asking for more detail about the backend\n> database, to prove it is an \"appropriate choice for a mission\n> critical applications\". In particular, ICANN wants proof that\n> other companies are using PostgreSQL for Mission Critical things.\n\n[. . .]\n\n> Can people please come forward to help them out with info about the\n> reliability and performance of PostgreSQL in Mission Critical\n> situations?\n\nHi, everyone.\n\nI know I really shouldn't post this all over, but, well, I'm going to\nanyway. Speaking for myself, I want to thank publicly the many\npeople who responded to this and other requests from Justin. I\n_think_ I've already mailed everyone who responded directly. If you\nresponded, and I didn't mail you, I apologise.\n\nI knew that lots of people we doing cool things with Postgres, but\nthe range of activities still surprised me. And the conviction with\nwhich people responsed was tremendously helpful. Many of the manager\nfolks here come from a corporate background, and just couldn't\nbelieve the helpfulness of the community. It's things like that\nwhich make you realise how fantastic PostgreSQL support is, especially\nwhen you stack it up against the big commercial guys. They may have\nmillion-dollar marketing campaigns, but we have real strong belief in\nour software. You folks made what would have been an impossible task\ninto something achievable. And I think you've made converts out of\nsome people who were once rather leery of our PostgreSQL use.\n\nI especially want to thank Justin, who stopped at (as far as I could\ntell) nothing to provide great examples of PostgreSQL use. 
He was\ninvaluable to me, and I really appreciate it.\n\nIn case anyone wants to see the response-for-managers the corporate\npeople ultimately made, ICANN has posted it:\n\n<http://www.icann.org/tlds/org/questions-to-applicants-13.htm#Response13TheInternetSocietyISOC>\n\n(sorry, that's a long line).\n\nAgain, my thanks to everyone for such great software and so much\nhelp.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 16 Sep 2002 13:15:20 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: An opportunity to prove PostgreSQL and our requirement of Case\n\tStudy info"
},
{
"msg_contents": "This is very encouraging.\nI really hope to provide a case study sometimes in the future when my system\nis ready with big data ( as it's not even finish right now ).\nWell, this is all because of the great work of the PostgreSQL development\nteam and also all the supporting community.\nBravo !\nI hope to see PostgreSQL booming in the web hosting services as the schema\nwill be introduced in 7.3.\n\nBest regards\nAndy\n\n----- Original Message -----\nFrom: \"Andrew Sullivan\" <andrew@libertyrms.info>\nTo: \"PostgreSQL General Mailing List\" <pgsql-general@postgresql.org>;\n\"PostgreSQL Hackers Mailing List\" <pgsql-hackers@postgresql.org>;\n<pgsql-advocacy@postgresql.org>\nSent: Tuesday, September 17, 2002 12:15 AM\nSubject: Re: [pgsql-advocacy] An opportunity to prove PostgreSQL and our\nrequirement of Case Study info\n\n\n> On Fri, Sep 13, 2002 at 03:28:12PM +1000, Justin Clift wrote:\n>\n> > Afilias and LibertyRMS, the people who've been happily running the\n> > .info namespace on PostgreSQL servers, are the technical backend of\n> > the ISOC application for management of the .org namespace.\n> > However, ICANN is asking for more detail about the backend\n> > database, to prove it is an \"appropriate choice for a mission\n> > critical applications\". In particular, ICANN wants proof that\n> > other companies are using PostgreSQL for Mission Critical things.\n>\n> [. . .]\n>\n> > Can people please come forward to help them out with info about the\n> > reliability and performance of PostgreSQL in Mission Critical\n> > situations?\n>\n> Hi, everyone.\n>\n> I know I really shouldn't post this all over, but, well, I'm going to\n> anyway. Speaking for myself, I want to thank publicly the many\n> people who responded to this and other requests from Justin. I\n> _think_ I've already mailed everyone who responded directly. 
If you\n> responded, and I didn't mail you, I apologise.\n>\n> I knew that lots of people we doing cool things with Postgres, but\n> the range of activities still surprised me. And the conviction with\n> which people responsed was tremendously helpful. Many of the manager\n> folks here come from a corporate background, and just couldn't\n> believe the helpfulness of the community. It's things like that\n> which make you realise how fantastic PostgreSQL support is, especially\n> when you stack it up against the big commercial guys. They may have\n> million-dollar marketing campaigns, but we have real strong belief in\n> our software. You folks made what would have been an impossible task\n> into something achievable. And I think you've made converts out of\n> some people who were once rather leery of our PostgreSQL use.\n>\n> I especially want to thank Justin, who stopped at (as far as I could\n> tell) nothing to provide great examples of PostgreSQL use. He was\n> invaluable to me, and I really appreciate it.\n>\n> In case anyone wants to see the response-for-managers the corporate\n> people ultimately made, ICANN has posted it:\n>\n>\n<http://www.icann.org/tlds/org/questions-to-applicants-13.htm#Response13TheI\nnternetSocietyISOC>\n>\n> (sorry, that's a long line).\n>\n> Again, my thanks to everyone for such great software and so much\n> help.\n>\n> A\n>\n> --\n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M2P 2A8\n> +1 416 646 3304 x110\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n>\n\n\n",
"msg_date": "Tue, 17 Sep 2002 14:54:06 +0700",
"msg_from": "\"Andy Samuel\" <anci@centrin.net.id>",
"msg_from_op": false,
"msg_subject": "Re: [pgsql-advocacy] An opportunity to prove PostgreSQL and our\n\trequirement of Case Study info"
},
{
"msg_contents": "On Monday 16 September 2002 01:15 pm, Andrew Sullivan wrote:\n> On Fri, Sep 13, 2002 at 03:28:12PM +1000, Justin Clift wrote:\n> > Afilias and LibertyRMS, the people who've been happily running the\n> > .info namespace on PostgreSQL servers, are the technical backend of\n> > the ISOC application for management of the .org namespace.\n\nTalk about full circle. See my e-mail address's domain to get the punch line.\n\nIn more than one way WGCR relies on PostgreSQL for mission-critical data \nstorage.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 17 Sep 2002 11:43:55 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] An opportunity to prove PostgreSQL and our requirement\n\tof Case Study info"
}
]
[
{
"msg_contents": "\nThere is a misfeature in 7.2.2 that appears when I have a foreign key that\nreferences two columns of a table. Consider following simplified example:\n\nCREATE TABLE a (\n a int PRIMARY KEY,\n b int\n);\n\nCREATE TABLE b (\n aref int,\n bref int,\n FOREIGN KEY (aref, bref) REFERENCES a(a, b)\n MATCH FULL ON DELETE CASCADE ON UPDATE CASCADE\n);\n\nI get an error\n\n\"UNIQUE constraint matching given keys for referenced table \"a\" not\nfound.\"\n\nbecause I have unique constraint only on the first field (which is still\nenough to make the whole combination unique. (b is not even unique))...\n\nSo I need to add an useless(?) UNIQUE constraint to \"(a, b)\" for table \"a\"\njust to allow creation of multicol FOREIGN KEYs for table \"b\".\n\nAnd I get NOTICE: CREATE TABLE / UNIQUE will create implicit index\n'a_a_key' for table.\n\nAFAIK, the extra index only slows down my inserts - it basically contains\nno usable information... shouldn't the presence of _primary_key_ in\nmulticol foreign key be enough to decide whether the whole key is unique\nor not? And shouldn't it be enough to find out the tuple in table 'a'\ncorresponding newly inserted tuple in b?\n\nOr should I just write my own triggers for checking the integrity of\n\"b\"/\"bref\" column pair to avoid needless index creation?\n\n-- \nAntti Haapala\n\n\n\n\n",
"msg_date": "Fri, 13 Sep 2002 11:18:03 +0300 (EEST)",
"msg_from": "Antti Haapala <antti.haapala@iki.fi>",
"msg_from_op": true,
"msg_subject": "Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "> AFAIK, the extra index only slows down my inserts - it basically contains\n> no usable information...\n\nNot 100% true. It will speed up cascade delete and update...\n\n> shouldn't the presence of _primary_key_ in\n> multicol foreign key be enough to decide whether the whole key is unique\n> or not?\n\nHmmm - thinking about it, I don't see why postgres would need the entire\nthing to be unique...can't think of a reason at the moment. Stephen?\n\nChris\n\n",
"msg_date": "Fri, 13 Sep 2002 16:27:20 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "\n> > AFAIK, the extra index only slows down my inserts - it basically contains\n> > no usable information...\n>\n> Not 100% true. It will speed up cascade delete and update...\n\nTo clarify things:\n\nCREATE TABLE original (\n a int PRIMARY KEY,\n b int\n);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n'original_pkey' for table 'original'\nCREATE\n\nCREATE TABLE referencer (\n aref int,\n bref int,\n FOREIGN KEY (aref, bref) REFERENCES original(a, b)\n MATCH FULL ON DELETE CASCADE ON UPDATE CASCADE\n);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN\nKEY check(s)\nERROR: UNIQUE constraint matching given keys for referenced table\n\"original\" not found\n\nCREATE TABLE original (\n a int PRIMARY KEY,\n b int,\n UNIQUE (a,b)\n);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n'original_pkey' for table 'original'\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'original_a_key'\nfor table 'original'\nCREATE\n\nCREATE TABLE referencer (\n aref int,\n bref int,\n FOREIGN KEY (aref, bref) REFERENCES original(a, b)\n MATCH FULL ON DELETE CASCADE ON UPDATE CASCADE\n);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN\nKEY check(s)\nCREATE\n\nilmo=# \\d original\n Table \"original\"\n Column | Type | Modifiers\n--------+---------+-----------\n a | integer | not null\n b | integer |\nPrimary key: a_pkey\nUnique keys: a_a_key\nTriggers: RI_ConstraintTrigger_41250,\n RI_ConstraintTrigger_41252\n\nilmo=# \\d referencer\n Table \"referencer\"\n Column | Type | Modifiers\n--------+---------+-----------\n aref | integer |\n bref | integer |\nTriggers: RI_ConstraintTrigger_41248\n\nActually nothing changes. The unique constraint doesn't add anything new -\nit allows NULLs in column b and requires that combination (a, b) is\nunique... 
and it definitely is because column 'a' is unique (primary key).\nIt just creates a multicol index and adds an useless extra constraint\ncheck, while almost the same data is available in index \"original_a_pkey\".\n\n-- \nAntti Haapala\n\n",
"msg_date": "Fri, 13 Sep 2002 12:04:25 +0300 (EEST)",
"msg_from": "Antti Haapala <antti.haapala@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "On Fri, 2002-09-13 at 04:27, Christopher Kings-Lynne wrote:\n> > AFAIK, the extra index only slows down my inserts - it basically contains\n> > no usable information...\n> \n> Not 100% true. It will speed up cascade delete and update...\n> \n> > shouldn't the presence of _primary_key_ in\n> > multicol foreign key be enough to decide whether the whole key is unique\n> > or not?\n> \n> Hmmm - thinking about it, I don't see why postgres would need the entire\n> thing to be unique...can't think of a reason at the moment. Stephen?\n\nIf it's not all unique, you cannot be guaranteed there is a single row\nwith those values in the referenced table.\n\n-- \n Rod Taylor\n\n",
"msg_date": "13 Sep 2002 07:27:17 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n> On Fri, 2002-09-13 at 04:27, Christopher Kings-Lynne wrote:\n>> Hmmm - thinking about it, I don't see why postgres would need the entire\n>> thing to be unique...can't think of a reason at the moment. Stephen?\n\n> If it's not all unique, you cannot be guaranteed there is a single row\n> with those values in the referenced table.\n\nRight. The single-column unique constraint guarantees at most one\nmatch, but it isn't helpful for checking if there's at least one match.\nThe spec obviously intends that the index supporting the unique\nconstraint be useful for verifying the existence of a match.\n\nI read this in SQL92:\n\n a) If the <referenced table and columns> specifies a <reference\n column list>, then the set of column names of that <refer-\n ence column list> shall be equal to the set of column names\n in the unique columns of a unique constraint of the refer-\n enced table.\n\nIt says \"equal to\", not \"superset of\". So we are behaving per spec.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 10:00:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices? "
},
{
"msg_contents": "\n> Rod Taylor <rbt@rbt.ca> writes:\n> > On Fri, 2002-09-13 at 04:27, Christopher Kings-Lynne wrote:\n> >> Hmmm - thinking about it, I don't see why postgres would need the entire\n> >> thing to be unique...can't think of a reason at the moment. Stephen?\n>\n> > If it's not all unique, you cannot be guaranteed there is a single row\n> > with those values in the referenced table.\n>\n> Right. The single-column unique constraint guarantees at most one\n> match, but it isn't helpful for checking if there's at least one match.\n> The spec obviously intends that the index supporting the unique\n> constraint be useful for verifying the existence of a match.\n>\n> I read this in SQL92:\n>\n> a) If the <referenced table and columns> specifies a <reference\n> column list>, then the set of column names of that <refer-\n> ence column list> shall be equal to the set of column names\n> in the unique columns of a unique constraint of the refer-\n> enced table.\n>\n> It says \"equal to\", not \"superset of\". So we are behaving per spec.\n\nThat's what I used when doing it. It possibly is a stronger than\nnecessary statement but I assumed at the time they had some reason for\nwanting to define it that way.\n\n\n",
"msg_date": "Fri, 13 Sep 2002 07:50:19 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "On Fri, 13 Sep 2002, Antti Haapala wrote:\n\n> > > AFAIK, the extra index only slows down my inserts - it basically contains\n> > > no usable information...\n> >\n> > Not 100% true. It will speed up cascade delete and update...\n>\n> To clarify things:\n>\n> CREATE TABLE original (\n> a int PRIMARY KEY,\n> b int\n> );\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n> 'original_pkey' for table 'original'\n> CREATE\n>\n> CREATE TABLE referencer (\n> aref int,\n> bref int,\n> FOREIGN KEY (aref, bref) REFERENCES original(a, b)\n> MATCH FULL ON DELETE CASCADE ON UPDATE CASCADE\n> );\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN\n> KEY check(s)\n> ERROR: UNIQUE constraint matching given keys for referenced table\n> \"original\" not found\n\nSQL 92 would want you to normalize and remove bref from referencer\nsince it's redundant. You're storing a reference to a table and\nsome of the dependent values to that reference in another table.\nThat's probably the best workaround, although I assume your real\ncase is more complicated.\n\n\n",
"msg_date": "Fri, 13 Sep 2002 08:06:17 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "\n> hannu=# update t set i=i+1;\n> ERROR: Cannot insert a duplicate key into unique index t_i_key\n\nA possibility may be to reverse the sequential scan order for the simple\ncases, but anything any more complex and the check should be deferred\ntill end of statement, rather than checking immediately.\n\n\n-- \n Rod Taylor\n\n",
"msg_date": "13 Sep 2002 11:42:21 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "On Fri, 2002-09-13 at 16:00, Tom Lane wrote:\n> Rod Taylor <rbt@rbt.ca> writes:\n> > On Fri, 2002-09-13 at 04:27, Christopher Kings-Lynne wrote:\n> >> Hmmm - thinking about it, I don't see why postgres would need the entire\n> >> thing to be unique...can't think of a reason at the moment. Stephen?\n> \n> > If it's not all unique, you cannot be guaranteed there is a single row\n> > with those values in the referenced table.\n> \n> Right. The single-column unique constraint guarantees at most one\n> match, but it isn't helpful for checking if there's at least one match.\n\nDue to postgres's implementation we can't do the 'at least' part using\nonly index anyway - we must check the actual table.\n\n> The spec obviously intends that the index supporting the unique\n> constraint be useful for verifying the existence of a match.\n\nDoes the spec say _anything_ about implementing unique contraint using\nan unique index ?\n\n> I read this in SQL92:\n> \n> a) If the <referenced table and columns> specifies a <reference\n> column list>, then the set of column names of that <refer-\n> ence column list> shall be equal to the set of column names\n> in the unique columns of a unique constraint of the refer-\n> enced table.\n> \n> It says \"equal to\", not \"superset of\". So we are behaving per spec.\n\nBut we are doing it in a suboptimal way.\n\nIf we have unique index on t.i and we define additional unique\nconstraint on (t.i, t.j), then we don't need the extra unique index to\nbe created - the index on t.i is enough to quarantee the uniqueness of\n(t.i,t.j) or any set of columns that includes t.i.\n\n---------------\nHannu\n\nPS. 
IMHO our unique is still broken as shown by the following:\n\nhannu=# create table t(i int unique);\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 't_i_key' for\ntable 't'\nCREATE TABLE\nhannu=# insert into t values(1);\nINSERT 41555 1\nhannu=# insert into t values(2);\nINSERT 41556 1\nhannu=# update t set i=i-1;\nUPDATE 2\nhannu=# update t set i=i+1;\nERROR: Cannot insert a duplicate key into unique index t_i_key\nhannu=# \n\nDB2 has no problems doing it:\n\ndb2 => create table t(i int not null unique)\nDB20000I The SQL command completed successfully.\ndb2 => insert into t values(1)\nDB20000I The SQL command completed successfully.\ndb2 => insert into t values(2)\nDB20000I The SQL command completed successfully.\ndb2 => update t set i=i+1\nDB20000I The SQL command completed successfully.\ndb2 => update t set i=i-1\nDB20000I The SQL command completed successfully.\n\nneither has Oracle\n\nSQL> create table t(i int not null unique);\nTable created.\nSQL> insert into t values(1);\n1 row created.\nSQL> insert into t values(2);\n1 row created.\nSQL> update t set i=i+1;\n2 rows updated.\nSQL> update t set i=i-1;\n2 rows updated.\nSQL> \n\n----------------\nHannu\n\n",
"msg_date": "13 Sep 2002 18:31:29 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "On Fri, 2002-09-13 at 17:42, Rod Taylor wrote:\n> \n> > hannu=# update t set i=i+1;\n> > ERROR: Cannot insert a duplicate key into unique index t_i_key\n> \n> A possibility may be to reverse the sequential scan order for the simple\n> cases, but anything any more complex and the check should be deferred\n> till end of statement, rather than checking immediately.\n\nOr we could keep a 'conflict list' that would be dynamically added to\nand deleted from during the statement and the statement would be aborted\nif \n\n1) there were any entries in the list at the end of statement\n\nor\n\n2) if the list overflowed at some predefined limit (say 1000 or 100.000\nconflicts) during the statement.\n\nin our simple case we would have at most 1 conflict in the list at any\ntime.\n\n--------------\nHannu\n",
"msg_date": "13 Sep 2002 18:59:49 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> If we have unique index on t.i and we define additional unique\n> constraint on (t.i, t.j), then we don't need the extra unique index to\n> be created - the index on t.i is enough to quarantee the uniqueness of\n> (t.i,t.j) or any set of columns that includes t.i.\n\nYou missed the point: we are concerned about existence of a row, not only\nuniqueness.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Sep 2002 11:14:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices? "
},
{
"msg_contents": "On Sat, 2002-09-14 at 20:14, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > If we have unique index on t.i and we define additional unique\n> > constraint on (t.i, t.j), then we don't need the extra unique index to\n> > be created - the index on t.i is enough to quarantee the uniqueness of\n> > (t.i,t.j) or any set of columns that includes t.i.\n> \n> You missed the point: we are concerned about existence of a row, not only\n> uniqueness.\n\nMaybe I'm missing something, but I'll reiterate my two points\n\n1) to check for existance of a referenced tuple for a foreigh key we\nhave to:\n\n* lookup the row in index\n\nand\n\n* check if the row is live in the relation\n\nso the index will help us equally for both cases, as it will point to N\nentries of which only one can be alive at a time and which all have to\nbe checked.\n\nIt will be only marginally more work to check if the only live entry\ndoes match the non-index columns.\n\n\nAnd I think that my other point holds as well - there is no need for\nextra unique index on (redundant) unique constraint that is put over a\nsuperset of columns covered by _another_ unique constraint. \n\nThere will probably be additional work if we want to drop the original\nconstraint, but this is a separate issue.\n\n---------------\nHannu\n\n\n",
"msg_date": "14 Sep 2002 22:47:17 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices?"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> so the index will help us equally for both cases, as it will point to N\n> entries of which only one can be alive at a time and which all have to\n> be checked.\n> It will be only marginally more work to check if the only live entry\n> does match the non-index columns.\n\nBut the \"marginally more work\" represents code that does not exist at\nall right now, and which there's no really convenient place to add AFAIR.\nThis seems to me to be going rather out of our way to support a coding\npractice that is specifically disallowed by the standard.\n\nSomething that no one has bothered to ask, but seems to me relevant,\nis exactly why we should consider it important to support foreign keys\nof this form? Aren't we talking about a poor schema design in the first\nplace, if the referenced column set covers more than just the unique key\nof the referenced table? At the very least this is a violation of\nnormalization, and so it's inherently inefficient.\n\n> There will probably be additional work if we want to drop the original\n> constraint, but this is a separate issue.\n\nIt's not separate, because it's more work that we *will* have to do,\nto support a feature that is nonstandard and of debatable usefulness.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Sep 2002 16:44:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn foreign keys need useless unique indices? "
}
] |
[
{
"msg_contents": "\n> > Sure it is. The float=>int casts need to be made implicit, or we'll have\n> > tons of problems like this.\n> \n> Well, yeah. That did not seem to bother anyone last spring, when we\n> were discussing tightening the implicit-casting rules. Shall we\n> abandon all that work and go back to \"any available cast can \n> be applied implicitly\"?\n> \n> My vote is \"tough, time to fix your SQL code\".\n\nI personally don't think that is good. SQL users are used to using implicit casts.\nOther db's do handle them whereever possible. It is imho no answer to drop so many \nimplicit casts only because of the corner cases where it does not work.\n\nWhat we imho really need is a runtime check that checks whether an implicit cast\ncaused a loss of precision and abort in that case only. That is what other db's do.\n\nI thought that I voiced my opinion strong enough on this before, but I'll do it again,\nI think we should allow a lot more implicit casts than are now in beta.\nEspecially in the numeric area.\n\nI don't have any strong arguments (other than other db's can do it), but this is\nmy opinion.\n\nAndreas\n",
"msg_date": "Fri, 13 Sep 2002 12:43:33 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "Hi all,\n\nOne of my friends is evaluating postgres for large databases. This is a select \nintensive application which is something similar to data-warehousing as far as \nI can see.\n\nThe data is 150GB in flat files so would swell to 200GB+ with indexes.\n\nIs anybody running that kind of site? Any url? Any performance numbers/tuning \ntips for random selects?\n\nI would hate to put mysql there but we are evaluating that too. I would hate if \npostgres loses this to mysql because I didn't know few things about postgres.\n\nSecondly would it make a difference if I host that database on say, an HP-UX \nbox? From some tests I have done for my job, single CPU HP-UX box trounces 4 \nway xeon box. Any suggestions in this directions?\n\nTIA..\n\nBye\n Shridhar\n\n--\nQuality Control, n.:\tThe process of testing one out of every 1,000 units coming \noff\ta production line to make sure that at least one out of 100 works.\n\n",
"msg_date": "Fri, 13 Sep 2002 16:34:51 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "[OT]Physical sites handling large data"
},
{
"msg_contents": "I moved this over to general, where it's more on topic...\n\nOn Fri, 13 Sep 2002, Shridhar Daithankar wrote:\n\n> Hi all,\n> \n> One of my friends is evaluating postgres for large databases. This is a select \n> intensive application which is something similar to data-warehousing as far as \n> I can see.\n> \n> The data is 150GB in flat files so would swell to 200GB+ with indexes.\n> \n> Is anybody running that kind of site? Any url? Any performance numbers/tuning \n> tips for random selects?\n> \n> I would hate to put mysql there but we are evaluating that too. I would hate if \n> postgres loses this to mysql because I didn't know few things about postgres.\n> \n> Secondly would it make a difference if I host that database on say, an HP-UX \n> box? From some tests I have done for my job, single CPU HP-UX box trounces 4 \n> way xeon box. Any suggestions in this directions?\n\nOften times the real limiter for database performance is IO bandwidth and \nsubsystem, not the CPUs. After that memory access speed and bandwidth are \nvery important too, so I can see a big HP UX box beating the pants off of \na Xeon.\n\nHonestly, I'd put a dual 1G PIII 1G ram up against a quad xeon with 2 \nGig ram if I got to spend the difference in cost on a very fast RAID \narray for the PIII. Since a quad Xeon with 2 Gigs ram and a pair of 18 \ngig SCSI drives goes for ~ $27,500 on Dell, and a Dual PIII 1Ghz with 5 \n15KRPM 18 gig drives goes for ~ $6,700, that leaves me with about $20,000 \nto spend on an external RAID array on top of the 5 15kRPM drives I've \nalready got configured. An external RAID array with 144GB of 15krpm 18gig \ndrives runs ~$7700, so you could get three if you got the dual PIII \nwithout all those drives built into it. That makes for 24 15kRPM drives \nand about 430 Gigs of storage, all in a four unit Rack mounted setup.\n\nMy point being, spend more money on the drive subsystem than anything else \nand you'll probably be fine, but postgresql may or may not be your best \nanswer. It may be better to use something like berkeley db to handle this \njob than a SQL database.\n\n",
"msg_date": "Fri, 13 Sep 2002 15:55:35 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Physical sites handling large data"
},
{
"msg_contents": "Hi Scott,\n\nGood move. :)\n\nShridhar, any idea of the kind of demands they'll be placing on the\ndatabase?\n\nFor example, does your friend have an idea of:\n\na) how many clients will be simultaneously connecting to the database(s)\nnormally, and at peak times\n\nb) how sensitive to performance lag and downtime is the application?\n\nc) what are the data integrity requirements? Large array's of data that\nare mission critical need to be treated differently than small arrays,\nespecially when taking b) into consideration. Higher end non-intel\nservers generally have better features in their OS and hardware for\ndealing with large amounts of important data.\n\nd) what kind of stuff is your friend familar with? For example, is he\nok with unix in general, etc?\n\nThe more info you can get to us, the better we can help yourselves out.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n\"scott.marlowe\" wrote:\n> \n> I moved this over to general, where it's more on topic...\n> \n> On Fri, 13 Sep 2002, Shridhar Daithankar wrote:\n> \n> > Hi all,\n> >\n> > One of my friends is evaluating postgres for large databases. This is a select\n> > intensive application which is something similar to data-warehousing as far as\n> > I can see.\n> >\n> > The data is 150GB in flat files so would swell to 200GB+ with indexes.\n> >\n> > Is anybody running that kind of site? Any url? Any performance numbers/tuning\n> > tips for random selects?\n> >\n> > I would hate to put mysql there but we are evaluating that too. I would hate if\n> > postgres loses this to mysql because I didn't know few things about postgres.\n> >\n> > Secondly would it make a difference if I host that database on say, an HP-UX\n> > box? From some tests I have done for my job, single CPU HP-UX box trounces 4\n> > way xeon box. Any suggestions in this directions?\n> \n> Often times the real limiter for database performance is IO bandwidth and\n> subsystem, not the CPUs. After that memory access speed and bandwidth are\n> very important too, so I can see a big HP UX box beating the pants off of\n> a Xeon.\n> \n> Honestly, I'd put a dual 1G PIII 1G ram up against a quad xeon with 2\n> Gig ram if I got to spend the difference in cost on a very fast RAID\n> array for the PIII. Since a quad Xeon with 2 Gigs ram and a pair of 18\n> gig SCSI drives goes for ~ $27,500 on Dell, and a Dual PIII 1Ghz with 5\n> 15KRPM 18 gig drives goes for ~ $6,700, that leaves me with about $20,000\n> to spend on an external RAID array on top of the 5 15kRPM drives I've\n> already got configured. An external RAID array with 144GB of 15krpm 18gig\n> drives runs ~$7700, so you could get three if you got the dual PIII\n> without all those drives built into it. That makes for 24 15kRPM drives\n> and about 430 Gigs of storage, all in a four unit Rack mounted setup.\n> \n> My point being, spend more money on the drive subsystem than anything else\n> and you'll probably be fine, but postgresql may or may not be your best\n> answer. It may be better to use something like berkeley db to handle this\n> job than a SQL database.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sat, 14 Sep 2002 09:39:51 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Physical sites handling large data"
},
{
"msg_contents": "On 14 Sep 2002 at 9:39, Justin Clift wrote:\n\n> Hi Scott,\n> \n> Good move. :)\n> \n> Shridhar, any idea of the kind of demands they'll be placing on the\n> database?\n\nFirst of all, thanks you all guys for your quick and helpful responses. Robert \nE. Bruccoleri send me his sites description which gave me an idea what \npostgresql can do at that scale.\n\nI spent more than 3 hours looking for urls of such a large installation. Didn't \nget a single one via google(May be I am a bad search person..) Now from what \nRobert E. Bruccoleri tells me, there is a bigger installation of postgres than \nmentioned anywhere else.\n\nI would request people to put up some information if they hav such an \ninstallation. I would do the same subject to permission from client. Postgres \ndeserves this sort of publicity. Such information is very crucial when it comes \nto convince government bodies to consider open source alternatives to \ncommercial ones, thr. LUGs etc..\n\nI understand that designing database for such a site would require detailed \nknowledge of application. I am persuing my friend/colleague to get as much \ninformation to get out of it..\n\nNow to answer your queries..\n\n> For example, does your friend have an idea of:\n> \n> a) how many clients will be simultaneously connecting to the database(s)\n> normally, and at peak times\n\nAFAIK, this job is like analysis of log data(telecom domain). So number of \nclients would not be big if there is no/small parallalism to be extracted at \napplication level. Most importantly number of clients will not be fluctuating \nmuch.. So it gives some deterministic levels of prediction, to say so..\n \n> b) how sensitive to performance lag and downtime is the application?\n\nHardly.. This is a replica of production system. Data loss is not going to be \nan issue. Of course if the database pulls wrong data and hence wrong results \nthat is obviously unacceptable.. But I guess things won't come down to that \nlevel..\n\nThere are not set performance goals as of now. Obviously faster is better..\n \n> c) what are the data integrity requirements? Large array's of data that\n> are mission critical need to be treated differently than small arrays,\n> especially when taking b) into consideration. Higher end non-intel\n> servers generally have better features in their OS and hardware for\n> dealing with large amounts of important data.\n\nIMO if I have to write a evaluation and proposal for this task, intel hardware \nwon't feature in just because it does not have a proven 64 bit CPU. Personally \nI would recommend HP hardware knowing the client's profile..\n \n> d) what kind of stuff is your friend familar with? For example, is he\n> ok with unix in general, etc?\n\nThat's OK. These guys are into HP-UX heavily.. That's not a problem. Postgres \nhas figured on their radar to save licensing costs towards oracle installation.\n\n> \"scott.marlowe\" wrote:\n> > Often times the real limiter for database performance is IO bandwidth and\n> > subsystem, not the CPUs. After that memory access speed and bandwidth are\n> > very important too, so I can see a big HP UX box beating the pants off of\n> > a Xeon.\n\nPerfectly agreed. Slightly OT here. The test I referred to in my OP, xeon \nmachine had mirrored SCSI raid and 533MHz FSB. HP-UX box was single PA-\n8700/750MHz CPU with single SCSI disk.\n\nIMO even bandwidth consideration were in xeon's favour. Only things in favour \nof HP box were the 3MB on chip cache and 64 bit RISC CPU. If that makes so much \nof a difference, intel machines won't stand a chance for a long time IMO..\n\n> > My point being, spend more money on the drive subsystem than anything else\n> > and you'll probably be fine, but postgresql may or may not be your best\n> > answer. It may be better to use something like berkeley db to handle this\n> > job than a SQL database.\n\nAgreed but with few differences...\n\nI am going to divide my pitch as follows.\n\n1) Can postgresql can do the job?\n\nYes. Refer to FAQ for limits of postgresql and real world installations like \nposted by Robert E. Bruccoleri in person, as example.\n\n2) Why postgresql?\n\na) It can do the job as illustrated by 1.\n\nb) Architecture of postgres\n\nSome strong points, \n\n i) It's a most complete and/or useful open source SQL implementation. If they \nwant, they can customise it as they want later. Using berkeley DB might do good \nat this level but depending upon complexity of application(Which I don't know \nmuch), I would not rather put it.\n\n ii) Does not offer idiocies/features that would limit implementations. e.g. \nsince it relies on OS to take care of storage, it will run with same peace on \nIDE disk to fiber array(or what ever highest end tech. available). One need not \nwait till you get storage driver from database vendors.\n\nLess unneeded features==Cleaner implementation\n \niii) Since table is split in multiples of 1GB, no upper limit on table size \nand/or splitting the table across storages etc.. (Hand tweaking basically) \n\nSome weak points(These guys are considering distributed databases but won't \nmind spending on hardware if the proposal is that worth.)\n\n i) A database can not span mulptiple machines. So clustering is out. If data \nis split in multiple databases on multiple machines, application will have to \ndo merging etc.\n\nAny pointers on this? Partitioning etc?\n\n ii) No *out of box* replication. (OK I can take down this point but when mysql \npops up, I got to include this for fair comparison.)\n\niii) Being a process driven architecture, it can not process data in parallel \neven if possible. e..g say a table is 100GB in size. So split across 100 \nsegments on file system. But it can not return data from all 100 segments \nsimaltaneously because there is one process per connection. Besides it won't be \nable to spread any computational load across multiple CPUs.\n \n3) How to do it?\n\na) Get a 64 bit architecture. Depending upon computational requirements and \nprojected number of connections, add the CPUs. Start with one CPU. I guess that \nwould be enough given how good PA-Risc CPUs are. (OK I haven't used anything \nelse but PA-Risc CPUs look good to me for data manipulation)\n\nb) Get loads of RAM. 4-8GB sounds good to me. May be even better. \n\nOn this topic, say I decide to devote 8GB to postgres that means 1048576 \nbuffers for 8K page size. Need I push available number of shared memory \nsegments beyond this value? Can I tweak size of page? If I can would it help \nfor such an installation?\n\nc) Get a *fast* storage. I would rather not elaborate on this point because the \nclient is suppose to know better being in telecom business. But would offer my \ninputs based upon information from Scott etc. But I don't think it would come \nto that.\n\nAny suggestions? Modifications? This is a rough plot. Will finalise today \nevening/tom morning.. ( I am from India BTW, just to let you know my time \nzone..;-)). I hope I haven't missed any point I have thought of in last two \ndays..\n\nOnce again, thanks a lot for the help offered. I can not put in words how on \npoint all this thread has been..\n\nBye\n Shridhar\n\n--\nDeVries' Dilemma:\tIf you hit two keys on the typewriter, the one you don't want\t\nhits the paper.\n\n",
"msg_date": "Sun, 15 Sep 2002 15:51:05 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Physical sites handling large data"
},
{
"msg_contents": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> On this topic, say I decide to devote 8GB to postgres that means 1048576 \n> buffers for 8K page size. Need I push available number of shared memory \n> segments beyond this value? Can I tweak size of page? If I can would it help \n> for such an installation?\n\nI do not believe you can push the shared memory size past 2GB, or about\n250K buffers, because the size calculations for it are done in \"int\"\narithmetic. Of course, this could be fixed if anyone cared to do the\nlegwork. I doubt there's much point in worrying about it though.\nA larger memory is still usable, it's just that the rest of it will be\nused in the form of kernel-level disk cache not Postgres buffers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Sep 2002 11:01:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Physical sites handling large data "
},
{
"msg_contents": "Hmmm...\n\nUsing the bigmem kernel and RH7.3, we were able to set Postgresql shared\nmemory to 3.2Gigs (out of 6GB Ram). Does this mean that Postgresql will\nonly use the first 2Gigs?\n\nOur settings are:\n\nshmmax = 3192000000\nshared_buffers = 38500\n\nipcs output:\n0x0052e2c1 98304 postgres 600 324018176 51 \n\n- Ericson Smith\neric@did-it.com\n\nOn Sun, 2002-09-15 at 11:01, Tom Lane wrote:\n> \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > On this topic, say I decide to devote 8GB to postgres that means 1048576 \n> > buffers for 8K page size. Need I push available number of shared memory \n> > segments beyond this value? Can I tweak size of page? If I can would it help \n> > for such an installation?\n> \n> I do not believe you can push the shared memory size past 2GB, or about\n> 250K buffers, because the size calculations for it are done in \"int\"\n> arithmetic. Of course, this could be fixed if anyone cared to do the\n> legwork. I doubt there's much point in worrying about it though.\n> A larger memory is still usable, it's just that the rest of it will be\n> used in the form of kernel-level disk cache not Postgres buffers.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n",
"msg_date": "15 Sep 2002 11:33:59 -0400",
"msg_from": "Ericson Smith <eric@did-it.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical sites handling large data"
},
{
"msg_contents": "Ericson Smith <eric@did-it.com> writes:\n> Using the bigmem kernel and RH7.3, we were able to set Postgresql shared\n> memory to 3.2Gigs (out of 6GB Ram). Does this mean that Postgresql will\n> only use the first 2Gigs?\n\nI think you are skating on thin ice there --- there must have been some\ninteger overflows in the shmem size calculations. It evidently worked\nas an unsigned result, but...\n\nIIRC we have an open bug report from someone who tried to set\nshared_buffers so large that the shmem size would have been ~5GB;\nthe overflowed size request was ~1GB and then it promptly dumped\ncore from trying to access memory beyond that. We need to put in\nsome code to detect overflows in those size calculations.\n\nIn any case, pushing PG's shared memory to 50% of physical RAM is\ncompletely counterproductive. See past discussions (mostly on\n-hackers and -admin if memory serves) about appropriate sizing of\nshared buffers. There are different schools of thought about this,\nbut I think everyone agrees that a shared-buffer pool that's roughly\nequal to the size of the kernel's disk buffer cache is a waste of\nmemory. One should be much bigger than the other. I personally think\nit's appropriate to let the kernel cache do most of the work, and so\nI favor a shared_buffers setting of just a few thousand.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Sep 2002 11:47:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Physical sites handling large data "
},
{
"msg_contents": "On 15 Sep 2002 at 11:47, Tom Lane wrote:\n\n> Ericson Smith <eric@did-it.com> writes:\n> > Using the bigmem kernel and RH7.3, we were able to set Postgresql shared\n> > memory to 3.2Gigs (out of 6GB Ram). Does this mean that Postgresql will\n> > only use the first 2Gigs?\n> In any case, pushing PG's shared memory to 50% of physical RAM is\n> completely counterproductive. See past discussions (mostly on\n> -hackers and -admin if memory serves) about appropriate sizing of\n> shared buffers. There are different schools of thought about this,\n> but I think everyone agrees that a shared-buffer pool that's roughly\n> equal to the size of the kernel's disk buffer cache is a waste of\n> memory. One should be much bigger than the other. I personally think\n> it's appropriate to let the kernel cache do most of the work, and so\n> I favor a shared_buffers setting of just a few thousand.\n\nSo you mean, at large sites tuning kernel disks buffer becomes a part of tuning \npostgres?\n\nIIRC kernel disk caching behaviour varies across unices and I really don't know \nif kernel will honour the caching request with cache size more than say 2Gigs.. \nLinux kernel is a different story. It eats everything it can but still..\n\nThat comes back to my other question.. Is it possible to change page size and \nwould that be beneficial under some conditions?\n\nI am not asking because I don't want to listen to you but I would better back \nmy claims when I am making an evaluation proposals. Do you have any numbers \nhandy? (I will search them as well..)\n\n\nBye\n Shridhar\n\n--\nPainting, n.:\tThe art of protecting flat surfaces from the weather, and\t\nexposing them to the critic.\t\t-- Ambrose Bierce\n\n",
"msg_date": "Sun, 15 Sep 2002 21:32:37 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Physical sites handling large data "
},
{
"msg_contents": "On 15 Sep 2002 11:33:59 -0400, Ericson Smith <eric@did-it.com> wrote:\n> shared memory to 3.2Gigs (out of 6GB Ram). [...]\n>shared_buffers = 38500\n>\n>ipcs output:\n>0x0052e2c1 98304 postgres 600 324018176 51 \n\nEricson, this looks more like 300MB to me; which might be a good\nchoice anyway ;-)\n\nServus\n Manfred\n",
"msg_date": "Mon, 16 Sep 2002 21:33:29 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Physical sites handling large data"
},
{
"msg_contents": "... that sound you hear is the sound of me knocking my head against the\nbrick wall in here...\n\nWell it looks like Tom Lane was right (as always) on this one. On our\nprevious server, we had 4 Gigs of RAM and 1.6 Gigs of shared memory.\nDoes this mean now that the OS is efficiently caching disk, and they our\n320MB of shared memory is good enough? \n\nOur database is about 4 Gigs at this point with some tables having\nhundreds of thousands or millions of records.\n\nRunning free looks like this.\n[root@pg root]# free\n total used free shared buffers \ncached\nMem: 5939524 5868720 70804 0 90732 \n5451808\n-/+ buffers/cache: 326180 5613344\nSwap: 2096440 0 2096440\n\nThere are 58 client processes running, with at times up to 220. The load\non this machine never runs more than 1 with Dual CPU's.\n\nTop looks like this:\n97 processes: 96 sleeping, 1 running, 0 zombie, 0 stopped\nCPU0 states: 1.2% user, 3.2% system, 0.0% nice, 94.5% idle\nCPU1 states: 0.1% user, 0.0% system, 0.0% nice, 99.4% idle\nCPU2 states: 0.3% user, 0.2% system, 0.0% nice, 99.0% idle\nCPU3 states: 0.3% user, 0.2% system, 0.0% nice, 99.0% idle\nMem: 5939524K av, 5874740K used, 64784K free, 0K shrd, 91344K\nbuff\nSwap: 2096440K av, 0K used, 2096440K free 5451892K\ncached\n\nAny definitive insight here as to why I'm running so well at this point?\n- Ericson\n\n\n\n\nOn Mon, 2002-09-16 at 15:33, Manfred Koizar wrote:\n> On 15 Sep 2002 11:33:59 -0400, Ericson Smith <eric@did-it.com> wrote:\n> > shared memory to 3.2Gigs (out of 6GB Ram). [...]\n> >shared_buffers = 38500\n> >\n> >ipcs output:\n> >0x0052e2c1 98304 postgres 600 324018176 51 \n> \n> Ericson, this looks more like 300MB to me; which might be a good\n> choice anyway ;-)\n> \n> Servus\n> Manfred\n\n\n",
"msg_date": "16 Sep 2002 17:01:29 -0400",
"msg_from": "Ericson Smith <eric@did-it.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical sites handling large data"
},
{
"msg_contents": "On 16 Sep 2002 at 17:01, Ericson Smith wrote:\n\n> ... that sound you hear is the sound of me knocking my head against the\n> brick wall in here...\n> \n> Well it looks like Tom Lane was right (as always) on this one. On our\n> previous server, we had 4 Gigs of RAM and 1.6 Gigs of shared memory.\n> Does this mean now that the OS is efficiently caching disk, and they our\n> 320MB of shared memory is good enough? \n\nLooks like you are asking but if you ask me you just proved that it's enough..\n \n> Our database is about 4 Gigs at this point with some tables having\n> hundreds of thousands or millions of records.\n> Any definitive insight here as to why I'm running so well at this point?\n\nI would suggest looking at pg metadata regarding memory usage as well as ipcs \nstats. Besides what are the kernle disk buffer setting. I believe you are using \nlinux and these buffer settings can be controlled via/for bdflush.\n\nYour typical ipcs usage would be a much valuable figure along with free..\n\nAnd BTW, what's your vacuum frequency? Just to count that in..\n\n\n\n\n\n\nBye\n Shridhar\n\n--\nWorst Vegetable of the Year:\tThe brussels sprout. This is also the worst \nvegetable of next year.\t\t-- Steve Rubenstein\n\n",
"msg_date": "Tue, 17 Sep 2002 12:07:24 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Physical sites handling large data"
},
{
"msg_contents": "Out Vacuum frequency is once daily, with EXPLAINS happening every 2\nhours.\n\nWe use the default kernel buffer settings.\n\n- Ericson\n\nOn Tue, 2002-09-17 at 02:37, Shridhar Daithankar wrote:\n> On 16 Sep 2002 at 17:01, Ericson Smith wrote:\n> \n> > ... that sound you hear is the sound of me knocking my head against the\n> > brick wall in here...\n> > \n> > Well it looks like Tom Lane was right (as always) on this one. On our\n> > previous server, we had 4 Gigs of RAM and 1.6 Gigs of shared memory.\n> > Does this mean now that the OS is efficiently caching disk, and they our\n> > 320MB of shared memory is good enough? \n> \n> Looks like you are asking but if you ask me you just proved that it's enough..\n> \n> > Our database is about 4 Gigs at this point with some tables having\n> > hundreds of thousands or millions of records.\n> > Any definitive insight here as to why I'm running so well at this point?\n> \n> I would suggest looking at pg metadata regarding memory usage as well as ipcs \n> stats. Besides what are the kernle disk buffer setting. I believe you are using \n> linux and these buffer settings can be controlled via/for bdflush.\n> \n> Your typical ipcs usage would be a much valuable figure along with free..\n> \n> And BTW, what's your vacuum frequency? Just to count that in..\n> \n> \n> \n> \n> \n> \n> Bye\n> Shridhar\n> \n> --\n> Worst Vegetable of the Year:\tThe brussels sprout. This is also the worst \n> vegetable of next year.\t\t-- Steve Rubenstein\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n",
"msg_date": "17 Sep 2002 08:59:55 -0400",
"msg_from": "Ericson Smith <eric@did-it.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical sites handling large data"
},
{
"msg_contents": "On 17 Sep 2002 at 8:59, Ericson Smith wrote:\n\n> Out Vacuum frequency is once daily, with EXPLAINS happening every 2\n> hours.\n> \n> We use the default kernel buffer settings.\n\nI must say you have a good database installation. As many people have found and \ndocumented before, an in time vacuum can almost double the performance.\n\nAnd I am really amazed that things worked with default buffer settings. AFAIK, \nit takes upto half the physical RAM for buffers. But many distro. (for linux at \nleast) limit it to 200MB or half the RAM. So that on huge RAM, it would not eat \nup all..\n\nBye\n Shridhar\n\n--\nMIT:\tThe Georgia Tech of the North\n\n",
"msg_date": "Tue, 17 Sep 2002 19:27:17 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Physical sites handling large data"
}
] |
[
{
"msg_contents": "\nHi All,\n\n I've got a PostgreSQL 7.2.1 server running on FreeBSD 4.7 PRERELEASE,\nwith loads of memory and disk space, but I keep getting an error with this\nquery and I can not, for the life of me, figure out what is causing it:\n\n SELECT co.first_name, co.last_name, co.email_address,\n a.password, c.company_number\n FROM contact co, domain d\n LEFT JOIN account_info a ON (co.contact_id = a.contact_id)\n LEFT JOIN company c ON (co.company_id = c.company_id)\n WHERE d.domain_id = '666'\n AND d.company_id = co.company_id;\n\nI keep getting this error:\n\n Relation \"co\" does not exist.\n\nBut if I strip the query down to this:\n\n SELECT co.first_name, co.last_name, co.email_address\n FROM contact co, domain d\n WHERE d.domain_id = '666'\n AND d.company_id = co.company_id;\n\nIt works with out a hitch, so I think I'm right in saying that the left\njoins are throwing it off somehow. The funny part is that I've been\nworking with queries exactly like this first one with other areas of my\ndatabase and they do not complain...\n\nany one got any ideas? Ran into this before? As far as I can tell that\nfirst query is right..\n\n\n\n Chris Bowlby,\n -----------------------------------------------------\n Manager of Information and Technology.\n excalibur@hub.org\n www.hub.org\n 1-902-542-3657\n -----------------------------------------------------\n\n",
"msg_date": "Fri, 13 Sep 2002 10:25:15 -0300 (ADT)",
"msg_from": "Chris Bowlby <excalibur@hub.org>",
"msg_from_op": true,
"msg_subject": "Query having issues..."
},
{
"msg_contents": "Chris,\n\nI believe its the order of your \"FROM\" clause.\n\ntry\n\n FROM contact co\n LEFT JOIN account_info a ON (co.contact_id = a.contact_id)\n LEFT JOIN company c ON (co.company_id = c.company_id)\n\n\nJim\n\n> Hi All,\n> \n> I've got a PostgreSQL 7.2.1 server running on FreeBSD 4.7 PRERELEASE,\n> with loads of memory and disk space, but I keep getting an error with this\n> query and I can not, for the life of me, figure out what is causing it:\n> \n> SELECT co.first_name, co.last_name, co.email_address,\n> a.password, c.company_number\n> FROM contact co, domain d\n> LEFT JOIN account_info a ON (co.contact_id = a.contact_id)\n> LEFT JOIN company c ON (co.company_id = c.company_id)\n> WHERE d.domain_id = '666'\n> AND d.company_id = co.company_id;\n> \n> I keep getting this error:\n> \n> Relation \"co\" does not exist.\n> \n> But if I strip the query down to this:\n> \n> SELECT co.first_name, co.last_name, co.email_address\n> FROM contact co, domain d\n> WHERE d.domain_id = '666'\n> AND d.company_id = co.company_id;\n> \n> It works with out a hitch, so I think I'm right in saying that the left\n> joins are throwing it off somehow. The funny part is that I've been\n> working with queries exactly like this first one with other areas of my\n> database and they do not complain...\n> \n> any one got any ideas? Ran into this before? As far as I can tell that\n> first query is right..\n> \n> Chris Bowlby,\n> -----------------------------------------------------\n> Manager of Information and Technology.\n> excalibur@hub.org\n> www.hub.org\n> 1-902-542-3657\n> -----------------------------------------------------\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n\n\n",
"msg_date": "Fri, 13 Sep 2002 09:47:28 -0400",
"msg_from": "\"Jim Buttafuoco\" <jim@spectrumtelecorp.com>",
"msg_from_op": false,
"msg_subject": "Re: Query having issues..."
},
{
"msg_contents": "Chris Bowlby dijo: \n\n> SELECT co.first_name, co.last_name, co.email_address,\n> a.password, c.company_number\n> FROM contact co, domain d\n> LEFT JOIN account_info a ON (co.contact_id = a.contact_id)\n> LEFT JOIN company c ON (co.company_id = c.company_id)\n> WHERE d.domain_id = '666'\n> AND d.company_id = co.company_id;\n\nNote that you are JOINing \"domain d\" with \"account_info a\"; by the time\nthis is looked up, there is no relation co there, so the qualification\ndoesn't make sense. Try this:\n\n SELECT co.first_name, co.last_name, co.email_address,\n a.password, c.company_number\n FROM domain d, contact co\n LEFT JOIN account_info a ON (co.contact_id = a.contact_id)\n LEFT JOIN company c ON (co.company_id = c.company_id)\n WHERE d.domain_id = '666'\n AND d.company_id = co.company_id;\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Now I have my system running, not a byte was off the shelf;\nIt rarely breaks and when it does I fix the code myself.\nIt's stable, clean and elegant, and lightning fast as well,\nAnd it doesn't cost a nickel, so Bill Gates can go to hell.\"\n\n",
"msg_date": "Fri, 13 Sep 2002 10:57:39 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Query having issues..."
},
{
"msg_contents": "On Fri, 13 Sep 2002, Jim Buttafuoco wrote:\n\nHi All,\n\n Ok, the order did have effect on the query, might I suggest that it\nshouldn't matter :> (I'm CC'ing the hackers list as part of this response\nin the hopes that someone over there will see my request :>)..\n\n\n> Chris,\n>\n> I believe its the order of your \"FROM\" clause.\n>\n> try\n>\n> FROM contact co\n> LEFT JOIN account_info a ON (co.contact_id = a.contact_id)\n> LEFT JOIN company c ON (co.company_id = c.company_id)\n>\n>\n> Jim\n>\n> > Hi All,\n> >\n> > I've got a PostgreSQL 7.2.1 server running on FreeBSD 4.7 PRERELEASE,\n> > with loads of memory and disk space, but I keep getting an error with this\n> > query and I can not, for the life of me, figure out what is causing it:\n> >\n> > SELECT co.first_name, co.last_name, co.email_address,\n> > a.password, c.company_number\n> > FROM contact co, domain d\n> > LEFT JOIN account_info a ON (co.contact_id = a.contact_id)\n> > LEFT JOIN company c ON (co.company_id = c.company_id)\n> > WHERE d.domain_id = '666'\n> > AND d.company_id = co.company_id;\n> >\n> > I keep getting this error:\n> >\n> > Relation \"co\" does not exist.\n> >\n> > But if I strip the query down to this:\n> >\n> > SELECT co.first_name, co.last_name, co.email_address\n> > FROM contact co, domain d\n> > WHERE d.domain_id = '666'\n> > AND d.company_id = co.company_id;\n> >\n> > It works with out a hitch, so I think I'm right in saying that the left\n> > joins are throwing it off somehow. The funny part is that I've been\n> > working with queries exactly like this first one with other areas of my\n> > database and they do not complain...\n> >\n> > any one got any ideas? Ran into this before? 
As far as I can tell that\n> > first query is right..\n> >\n> > Chris Bowlby,\n> > -----------------------------------------------------\n> > Manager of Information and Technology.\n> > excalibur@hub.org\n> > www.hub.org\n> > 1-902-542-3657\n> > -----------------------------------------------------\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n>\n>\n>\n\n Chris Bowlby,\n -----------------------------------------------------\n Manager of Information and Technology.\n excalibur@hub.org\n www.hub.org\n 1-902-542-3657\n -----------------------------------------------------\n\n",
"msg_date": "Sat, 14 Sep 2002 16:06:21 -0300 (ADT)",
"msg_from": "Chris Bowlby <excalibur@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Query having issues..."
},
{
"msg_contents": "Chris Bowlby <excalibur@hub.org> writes:\n> Ok, the order did have effect on the query, might I suggest that it\n> shouldn't matter :>\n\nIf you think that, then you are wrong.\n\n> SELECT co.first_name, co.last_name, co.email_address,\n> a.password, c.company_number\n> FROM contact co, domain d\n> LEFT JOIN account_info a ON (co.contact_id = a.contact_id)\n> LEFT JOIN company c ON (co.company_id = c.company_id)\n> WHERE d.domain_id = '666'\n> AND d.company_id = co.company_id;\n\nThe interpretation of this command per spec is\n\nFROM\n\tcontact co,\n\t((domain d LEFT JOIN account_info a ON (co.contact_id = a.contact_id))\n\t LEFT JOIN company c ON (co.company_id = c.company_id))\n\nwhich perhaps will make it a little clearer why co can't be referenced\nwhere you are trying to reference it. A comma is not the same as a JOIN\noperator; it has much lower precedence.\n\nIt would be legal to do this:\n\nFROM contact co JOIN domain d ON (d.company_id = co.company_id)\nLEFT JOIN account_info a ON (co.contact_id = a.contact_id)\nLEFT JOIN company c ON (co.company_id = c.company_id)\nWHERE d.domain_id = '666';\n\nThis gets implicitly parenthesized left-to-right as\n\nFROM ((contact co JOIN domain d ON (d.company_id = co.company_id))\n LEFT JOIN account_info a ON (co.contact_id = a.contact_id))\n LEFT JOIN company c ON (co.company_id = c.company_id)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Sep 2002 15:21:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query having issues... "
}
] |
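
A side note on the precedence rule Tom spells out above: it can be demonstrated with a tiny throwaway schema. The table and column names below are hypothetical (not the original contact/domain schema), and the exact error wording varies by PostgreSQL version.

```sql
-- Hypothetical three-table setup, only to show comma vs. JOIN precedence.
CREATE TABLE t1 (id int);
CREATE TABLE t2 (id int);
CREATE TABLE t3 (t2_id int);

-- Fails: the comma binds more loosely than JOIN, so this parses as
--   t1, (t2 LEFT JOIN t3 ON t1.id = t3.t2_id)
-- and t1 is not visible inside the LEFT JOIN's ON clause.
SELECT * FROM t1, t2 LEFT JOIN t3 ON t1.id = t3.t2_id;

-- Works: join t1 into the tree explicitly before the LEFT JOIN needs it.
SELECT *
FROM t1
JOIN t2 ON t2.id = t1.id
LEFT JOIN t3 ON t3.t2_id = t1.id;
```

Recent releases report the first form as an invalid FROM-clause reference (and suggest LATERAL) rather than 7.2's "Relation does not exist", but the precedence rule is the same.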
[
{
"msg_contents": "\nIn 7.3\n\nIs this default for data type time also without timezone now?\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Fri, 13 Sep 2002 10:09:48 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "time default"
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> In 7.3\n> Is this default for data type time also without timezone now?\n\nAFAIR it always was.\n\ntest72=# select version();\n version\n---------------------------------------------------------------\n PostgreSQL 7.2.2 on hppa-hp-hpux10.20, compiled by GCC 2.95.3\n(1 row)\n\ntest72=# create table foo (f1 time);\nCREATE\ntest72=# \\d foo\n Table \"foo\"\n Column | Type | Modifiers\n--------+------------------------+-----------\n f1 | time without time zone |\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Sep 2002 11:20:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: time default "
}
] |
[
{
"msg_contents": "\nIn 7.2.1\n\nFor database x:\n1) On primary sytem:\n pg_dump -f x.pgdmp -Fc x\n2) On a bkuphost system that has 7.2.2:\n createdb -T template0 x\n pg_restore -U xuser -h bkuphost -p 5432 -d x -Fc x.pgdmp\n\n get error: \n pg_restore: [archiver (db)] could not execute query: ERROR:\n stat failed on file '/usr/local/pgsql-7.2/lib/plpgsql': No such file or\n directory\n\nSo, I go back to primary system and\ndroplang -d x plperl\n\nI do 1) and 2) again. Same ERROR.\n\nSo, I go back to primary system again, and for database x:\nselect proname, probin from pg_proc\n\nand see:\n...\nplperl_call_handler | $libdir/plperl \n...\n\nShouldn't this be gone after I droplang?\n\nI know that if I never did a createlang plperl in a database it is not\nthere...so why does it stay? Is this what is still causing the error on\nthe pg_restore? Bug? Work around is to drop this record from the pg_proc\ntable?\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Fri, 13 Sep 2002 10:34:04 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "droplang doesn't completely work?"
},
{
"msg_contents": "Laurette Cisneros writes:\n\n> 2) On a bkuphost system that has 7.2.2:\n> createdb -T template0 x\n> pg_restore -U xuser -h bkuphost -p 5432 -d x -Fc x.pgdmp\n>\n> get error:\n> pg_restore: [archiver (db)] could not execute query: ERROR:\n> stat failed on file '/usr/local/pgsql-7.2/lib/plpgsql': No such file or\n> directory\n\nWell, is that file (+ .so) there? What does this path refer to? Is it\nthe correct installation location?\n\n> So, I go back to primary system and\n> droplang -d x plperl\n>\n> I do 1) and 2) again. Same ERROR.\n\nNot surprising, since the error refers to plpgsql, not plperl.\n\n> So, I go back to primary system again, and for database x:\n> select proname, probin from pg_proc\n>\n> and see:\n> ...\n> plperl_call_handler | $libdir/plperl\n> ...\n>\n> Shouldn't this be gone after I droplang?\n\nMaybe you still have plperlu installed, in which case the language handler\nis kept. Otherwise the answer is yes and you should try to see what\ndroplang is doing wrong.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 14 Sep 2002 00:37:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: droplang doesn't completely work?"
},
{
"msg_contents": "On Sat, 14 Sep 2002, Peter Eisentraut wrote:\n\n> Laurette Cisneros writes:\n> \n> > 2) On a bkuphost system that has 7.2.2:\n> > createdb -T template0 x\n> > pg_restore -U xuser -h bkuphost -p 5432 -d x -Fc x.pgdmp\n> >\n> > get error:\n> > pg_restore: [archiver (db)] could not execute query: ERROR:\n> > stat failed on file '/usr/local/pgsql-7.2/lib/plpgsql': No such file or\n> > directory\n> \n> Well, is that file (+ .so) there? What does this path refer to? Is it\n> the correct installation location?\n\nOops, sent the wrong error message, the message I get is:\n\npg_restore: [archiver (db)] could not execute query: ERROR: Load of file\n/usr/local/pgsql7.2.2/lib/plperl.so failed: libperl.so: cannot open shared\nobject file: No such file or directory\n\nAnd, this is the point, libperl is not installed so I need to get rid of\nreferences to the language.\n\n> \n> > So, I go back to primary system and\n> > droplang -d x plperl\n> >\n> > I do 1) and 2) again. Same ERROR.\n> \n> Not surprising, since the error refers to plpgsql, not plperl.\n\nMeant to send message for plperl...see above.\n\n> \n> > So, I go back to primary system again, and for database x:\n> > select proname, probin from pg_proc\n> >\n> > and see:\n> > ...\n> > plperl_call_handler | $libdir/plperl\n> > ...\n> >\n> > Shouldn't this be gone after I droplang?\n> \n> Maybe you still have plperlu installed, in which case the language handler\n> is kept. Otherwise the answer is yes and you should try to see what\n> droplang is doing wrong.\n\nYep, that was it! Fixed now!\n\nIt's friday that's for sure.\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Fri, 13 Sep 2002 16:08:44 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: droplang doesn't completely work?"
}
] |
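
For anyone hitting the same plperl/plperlu situation, the leftover handler can be spotted straight from the catalogs. This is a generic sketch; catalog details differ slightly between 7.2 and current releases, and the LIKE pattern is only a guess at the conventional handler naming (e.g. plperl_call_handler).

```sql
-- Procedural languages still installed in this database:
SELECT lanname FROM pg_language WHERE lanispl;

-- Call handlers still present in pg_proc:
SELECT proname, probin FROM pg_proc WHERE proname LIKE '%call_handler%';
```

If plperlu still shows up, removing it the same way Laurette did for plperl (droplang -d x plperlu) should drop the last reference to the shared library.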
[
{
"msg_contents": "Hello,\n\nIm a dutch student, working on a project where security of user \ninformation stored in a database is priority 1. So the database \nmust be designed with high security in mind. I've searched the \nnet very intesive, but did'nt find a good recource which can help \nme with \"secure database design\". I hope someone can help me on \nsuch a recource, a good book may help too.\nThanx in advange.\n\n",
"msg_date": "Fri, 13 Sep 2002 20:47:03 +0200",
"msg_from": "Jan Vaartjes <j.vaartjes@quicknet.nl>",
"msg_from_op": true,
"msg_subject": "Secure DB design ?"
},
{
"msg_contents": "On Fri, Sep 13, 2002 at 20:47:03 +0200,\n Jan Vaartjes <j.vaartjes@quicknet.nl> wrote:\n> Hello,\n> \n> Im a dutch student, working on a project where security of user \n> information stored in a database is priority 1. So the database \n> must be designed with high security in mind. I've searched the \n> net very intesive, but did'nt find a good recource which can help \n> me with \"secure database design\". I hope someone can help me on \n> such a recource, a good book may help too.\n> Thanx in advange.\n\nTranslucent Databases by Peter Wayner describes using encryption and hashing\nto secure data in databases. There are limits on what you can do with this,\nbut the methods used can be helpful in some cases.\n",
"msg_date": "Mon, 16 Sep 2002 10:38:58 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": false,
"msg_subject": "Re: Secure DB design ?"
},
{
"msg_contents": "Jan Vaartjes writes:\n\n> Im a dutch student, working on a project where security of user\n> information stored in a database is priority 1. So the database must\n> be designed with high security in mind. I've searched the net very\n> intesive, but did'nt find a good recource which can help me with\n> \"secure database design\". I hope someone can help me on such a\n> recource, a good book may help too.\n\nThe first thing you will need to decide is: What do you mean by security?\n\nThere is the integrity of the data: Does the database system preserve\nthe data accurately, or does it have bugs that corrupt data?\n\nThere is identification: How sure are you (or your database system)\nthat a user of the system is who they say they are?\n\nThere is authorization: Does the database system (or layers you put on\ntop of it) provide good enough access control for your application,\nboth in what they can read and change? Bugs or design errors in the\nsystem can sometimes circumvent the access controls.\n\nThere is transport privacy: Is the user's traffic secure enough\nagainst eavesdropping?\n\nDepending on your application, you may have to address other types of\nsecurity. Unfortunately, \"security\" by itself is so vague as to not\nbe a useful metric of databaes design.\n\n-- Michael\n",
"msg_date": "16 Sep 2002 11:58:50 -0400",
"msg_from": "Michael Poole <poole@troilus.org>",
"msg_from_op": false,
"msg_subject": "Re: Secure DB design ?"
}
] |
[
{
"msg_contents": "Since almost every cast to \"text\" is implicit, then I believe so should\n\ninet -> text\nmacaddr -> text\nint4 -> varchar\nint8 -> varchar\n\nwhich are currently not.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 14 Sep 2002 00:37:28 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Inconsistent casts"
},
{
"msg_contents": "Peter Eisentraut dijo: \n\n> Since almost every cast to \"text\" is implicit, then I believe so should\n> \n> inet -> text\n> macaddr -> text\n> int4 -> varchar\n> int8 -> varchar\n> \n> which are currently not.\n\nAlso, some casts seem to be missing; numeric -> text, for example.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La naturaleza, tan fragil, tan expuesta a la muerte... y tan viva\"\n\n",
"msg_date": "Fri, 13 Sep 2002 19:22:06 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent casts"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Since almost every cast to \"text\" is implicit, then I believe so should\n> inet -> text\n> macaddr -> text\n> int4 -> varchar\n> int8 -> varchar\n> which are currently not.\n\nI would like to see us *eliminate* implicit casts to text. Not add more.\n\nSee my prior rants on subject... but the core of the matter is that if\nevery datatype can be implicitly casted to text then you have no type\nsafety worthy of the term. We have open bug reports that reduce to this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 22:57:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent casts "
}
] |
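
To make the trade-off in this thread concrete: with explicit casts, the conversions Peter lists stay available on demand, while type-aware operators keep working because nothing is silently flattened to text. A small sketch (recent-PostgreSQL syntax; older releases may need an intermediate ::text step for some of these):

```sql
-- Explicit casts: available whether or not an implicit cast exists.
SELECT '08:00:2b:01:02:03'::macaddr::text;
SELECT 42::int4::varchar;

-- Type safety at work: this resolves to the inet containment operator,
-- which an implicit everything-to-text cast could mask.
SELECT inet '192.168.0.1' << inet '192.168.0.0/24';
```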
[
{
"msg_contents": "When I run `make installcheck` from ~/pgsql/contrib the test progresses until \nit hits one that fails (cube is currently failing for me), and then the test \nstops. Is there any way to get the test to continue through the rest of \ncontrib, despite one or more individual failures?\n\nThanks,\n\nJoe\n\n",
"msg_date": "Fri, 13 Sep 2002 16:10:43 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "make installcheck in contrib"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> When I run `make installcheck` from ~/pgsql/contrib the test progresses until\n> it hits one that fails (cube is currently failing for me), and then the test \n> stops. Is there any way to get the test to continue through the rest of \n> contrib, despite one or more individual failures?\n\nHave not tried it, but offhand I would expect \"make -k installcheck\" to\ndo what you want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 23:07:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make installcheck in contrib "
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>When I run `make installcheck` from ~/pgsql/contrib the test progresses until\n>>it hits one that fails (cube is currently failing for me), and then the test \n>>stops. Is there any way to get the test to continue through the rest of \n>>contrib, despite one or more individual failures?\n> \n> Have not tried it, but offhand I would expect \"make -k installcheck\" to\n> do what you want.\n\nDooooh, I was thinking it was a Makefile thing. `make -k installcheck` doesn't \nseem to do it, but `make -i installcheck` does.\n\nThanks!\n\nJoe\n\np.s. FWIW, the following contrib regression tests are failing (on RH7.3)\n - cube -- error message format\n - seg -- error message format\n - tsearch -- \"server closed the connection unexpectedly\"\n\n\n\n\n",
"msg_date": "Fri, 13 Sep 2002 20:27:32 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: make installcheck in contrib"
}
] |
[
{
"msg_contents": "Dear Shridhar,\n\n> One of my friends is evaluating postgres for large databases. This is a select \n> intensive application which is something similar to data-warehousing as far as \n> I can see.\n> \n> The data is 150GB in flat files so would swell to 200GB+ with indexes.\n> \n> Is anybody running that kind of site? Any url? Any performance numbers/tuning \n> tips for random selects?\n\nI work for Bristol-Myers Squibb in their Bioinformatics department,\nand I have about 300GB in PostgreSQL databases for DNA sequence\nanalysis. Some of my tables are approaching 100 million rows. You\nhave to watch and adjust how PostgreSQL plans queries in order to get\ngood application performance.\n\n> \n> I would hate to put mysql there but we are evaluating that too. I would hate if \n> postgres loses this to mysql because I didn't know few things about postgres.\n> \n> Secondly would it make a difference if I host that database on say, an HP-UX \n> box? From some tests I have done for my job, single CPU HP-UX box trounces 4 \n> way xeon box. Any suggestions in this directions?\n\nWe use an SGI Origin 3000 with Fibre Channel RAID. However, an SGI Origin 2000\nworks well too, and those systems are available cheaply on the used market.\nLots of RAM helps performance -- we run with big buffer caches.\n\n--Bob\n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n",
"msg_date": "Fri, 13 Sep 2002 20:06:45 -0400 (EDT)",
"msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>",
"msg_from_op": true,
"msg_subject": "Large PostgreSQL databases"
}
] |
[
{
"msg_contents": "Hackers,\n\nSuppose I do\n\nCREATE TABLE a (a int);\nCREATE TABLE b () INHERITS (a);\n\nAnd then I want to drop both tables:\n\nregression=# DROP TABLE a, b;\nNOTICE: table b depends on table a\nERROR: Cannot drop table a because other objects depend on it\n Use DROP ... CASCADE to drop the dependent objects too\n\n\nOh, so I use CASCADE:\n\nregression=# DROP TABLE a, b CASCADE;\nNOTICE: Drop cascades to table b\nERROR: table \"b\" does not exist\n\nI understand what's going on and how to get the desired behavior, but\nit's weird and I think it should be fixed if possible.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Endurecerse, pero jamas perder la ternura\" (E. Guevara)\n\n",
"msg_date": "Fri, 13 Sep 2002 21:01:23 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": true,
"msg_subject": "DROP TABLE... CASCADE weirdness"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> I understand what's going on and how to get the desired behavior, but\n> it's weird and I think it should be fixed if possible.\n\nDefine why you consider this broken and what you would consider fixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 22:50:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP TABLE... CASCADE weirdness "
},
{
"msg_contents": "Tom Lane dijo: \n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > I understand what's going on and how to get the desired behavior, but\n> > it's weird and I think it should be fixed if possible.\n> \n> Define why you consider this broken\n\nOn the first case, if I'm specifying to drop both tables, I don't want\nto be bothered telling me that the second depends on the first: I have\nalready specified that I want it dropped.\n\nOn the second case (CASCADE), I'm trying to drop the second table, so I\ndo not want to be bothered telling me that it doesn't exist, because\nthat is exactly what I want.\n\n> and what you would consider fixed.\n\nIn both cases (CASCADE and RESTRICT), both tables should be dropped\n(after all, that's what I'm trying to do).\n\nIt's only an annoyance, and I suppose it's very difficult to \"fix\".\nMy solution would be first to fetch the whole list of OIDs to be dropped\nand only then do the deletion.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n",
"msg_date": "Fri, 13 Sep 2002 23:06:48 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP TABLE... CASCADE weirdness "
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> Tom Lane dijo: \n>> Alvaro Herrera <alvherre@atentus.com> writes:\n> I understand what's going on and how to get the desired behavior, but\n> it's weird and I think it should be fixed if possible.\n>> \n>> Define why you consider this broken\n\n> On the first case, if I'm specifying to drop both tables, I don't want\n> to be bothered telling me that the second depends on the first: I have\n> already specified that I want it dropped.\n\nI believe that \"DROP TABLE a, b CASCADE\" is (and should be) equivalent\nto\n\tDROP TABLE a CASCADE;\n\tDROP TABLE b CASCADE;\n\nIt would be really hard to make the case that the latter pair of\ncommands should work in the scenario you give. Perhaps you should\ntry to make the case that this equivalence is wrong ... but I don't\nmuch care for that idea either. If it is wrong, exactly how will\nyou define the command to work instead?\n\n> My solution would be first to fetch the whole list of OIDs to be dropped\n> and only then do the deletion.\n\nI don't think that will get you anywhere in terms of avoiding failures;\nyou'd still find yourself trying to drop already-dropped tables, only by\nOID instead of name.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 23:17:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP TABLE... CASCADE weirdness "
},
{
"msg_contents": "On Sat, 2002-09-14 at 05:17, Tom Lane wrote:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > Tom Lane dijo: \n> >> Alvaro Herrera <alvherre@atentus.com> writes:\n> > I understand what's going on and how to get the desired behavior, but\n> > it's weird and I think it should be fixed if possible.\n> >> \n> >> Define why you consider this broken\n> \n> > On the first case, if I'm specifying to drop both tables, I don't want\n> > to be bothered telling me that the second depends on the first: I have\n> > already specified that I want it dropped.\n> \n> I believe that \"DROP TABLE a, b CASCADE\" is (and should be) equivalent\n> to\n> \tDROP TABLE a CASCADE;\n> \tDROP TABLE b CASCADE;\n> \n> It would be really hard to make the case that the latter pair of\n> commands should work in the scenario you give. Perhaps you should\n> try to make the case that this equivalence is wrong ... but I don't\n> much care for that idea either. If it is wrong, exactly how will\n> you define the command to work instead?\n> \n> > My solution would be first to fetch the whole list of OIDs to be dropped\n> > and only then do the deletion.\n> \n> I don't think that will get you anywhere in terms of avoiding failures;\n> you'd still find yourself trying to drop already-dropped tables, only by\n> OID instead of name.\n\nThis seems to be a problem that is of similar nature to our UNIQUE\nconstraints not working in all cases (depending on the _physical_ order\nof tuples, which should never affect any user-visible behaviour).\n\nThe two DROP TABLE cases are not equivalent in the sense that the first\nis _one_ command and the other is _two_ separate commands.\n\nOTOH, I don't think that fixing DROP TABLE is as urgent as the UNIQUE\nbecause\n * the UNIQUE bug can come to haunt as at random times\n * DROP TABLE is usually done by more qualified people\n (DBAs and programmers)\n * our whole inheritance stuff is still somewhat a moving target.\n\n-------------\nHannu\n\n\n",
"msg_date": "14 Sep 2002 11:54:06 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: DROP TABLE... CASCADE weirdness"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> This seems to be a problem that is of similar nature to our UNIQUE\n> constraints not working in all cases (depending on the _physical_ order\n> of tuples, which should never affect any user-visible behaviour).\n\nNo, I don't see any similarity at all. The behavior Alvaro is unhappy\nwith is perfectly deterministic and repeatable. He's basically saying\nthat DROP should be forgiving of redundant DROP operations, so long as\nthey are packaged into a single command. I don't really agree with\nthat, but it doesn't seem related to the UNIQUE issues.\n\n> The two DROP TABLE cases are not equivalent in the sense that the first\n> is _one_ command and the other is _two_ separate commands.\n\nAs long as they are wrapped into a single transaction block, there is no\ndifference.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Sep 2002 10:43:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP TABLE... CASCADE weirdness "
},
{
"msg_contents": "On Sat, 2002-09-14 at 19:43, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > This seems to be a problem that is of similar nature to our UNIQUE\n> > constraints not working in all cases (depending on the _physical_ order\n> > of tuples, which should never affect any user-visible behaviour).\n> \n> No, I don't see any similarity at all. The behavior Alvaro is unhappy\n> with is perfectly deterministic and repeatable. He's basically saying\n> that DROP should be forgiving of redundant DROP operations, so long as\n> they are packaged into a single command.\n\n> I don't really agree with that, but it doesn't seem related to the UNIQUE issues.\n\nThe similarity is in COMMAND vs. TRANCACTION level checks.\n\nBoth other databases I tested (Oracle and DB2) disallow commands that\nviolate unique even inside a transaction, but they both allow commands\nthat must have some point of internal violation _during_ any serial\nexecution of the command:\n\nie. for table \n\nt(i int not null unique)\n\nhaving values 1 and 2\n\nthe command \n\nupdate t set i=2 where i=1;\n\nis not allowed on either of then, even inside transaction, but both \n\nupdate t set i=i+1;\n\nand\n\nupdate t set i=i-1;\n\nare allowed.\n\n> > The two DROP TABLE cases are not equivalent in the sense that the first\n> > is _one_ command and the other is _two_ separate commands.\n> \n> As long as they are wrapped into a single transaction block, there is no\n> difference.\n\nOnce we will be able to continue after errors it may become a\nsignificant difference -\n\nDROP TABLE a,b CASCADE; \nCOMMIT;\n\nwill leave the tables untouched whereas \n\nDROP TABLE b CASCADE;\nDROP TABLE a CASCADE;\nCOMMIT;\n\nWill delete both tables but issue an error;\n\n-----\nHannu\n\n\n\n",
"msg_date": "14 Sep 2002 23:07:05 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: DROP TABLE... CASCADE weirdness"
}
] |
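
The whole exchange can be replayed in a few lines; this sketch just restates Alvaro's example under Tom's reading of the multi-name form, so the "already dropped by the cascade" behaviour is visible:

```sql
CREATE TABLE a (a int);
CREATE TABLE b () INHERITS (a);

-- Per Tom's reading, DROP TABLE a, b CASCADE behaves like:
DROP TABLE a CASCADE;  -- NOTICE: drop cascades to table b
DROP TABLE b CASCADE;  -- ERROR: table "b" does not exist

-- Dropping only the parent avoids the redundant reference entirely:
CREATE TABLE a (a int);
CREATE TABLE b () INHERITS (a);
DROP TABLE a CASCADE;  -- removes both a and b
```

Much later releases added DROP TABLE IF EXISTS, which gives a third way around the "does not exist" error, but that option did not exist in the 7.x series.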
[
{
"msg_contents": "Hi all,\n\nthis is my first mail to pgsql-hackers, so first I want to thank you all for\nyour great work. PostgreSQL is an amazing database management system and\nwonderful to use.\n\nConcerning this TODO entry:\n\n> Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index,\n> int8, float4, numeric/decimal too [optimizer]\n\nWhat about the case of doing a join on columns that don't have the same\ntype? Sometimes the index will be used, e.g. on this simple query:\n\nSELECT * FROM a, b WHERE a.int4col = b.int8col;\n\nHere the index will be used. But there are other queries where it's\nnecessary to do explicit type casting. I could provide examples.\nIs this a known problem?\n\nBest Regards,\nMichael Paesold\n\n",
"msg_date": "Sun, 15 Sep 2002 00:25:56 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": true,
"msg_subject": "Indexes and differing column types"
},
{
"msg_contents": "I wrote:\n\n> Concerning this TODO entry:\n>\n> > Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index,\n> > int8, float4, numeric/decimal too [optimizer]\n>\n> What about the case of doing a join on columns that don't have the same\n> type? Sometimes the index will be used, e.g. on this simple query:\n>\n> SELECT * FROM a, b WHERE a.int4col = b.int8col;\n>\n> Here the index will be used. But there are other queries where it's\n> necessary to do explicit type casting. I could provide examples.\n> Is this a known problem?\n\nI am sorry, the problem is not with joins but with subqueries.\n\n\nSELECT a.*, (SELECT max(b.someval) FROM b WHERE b.int8val = a.int4val) FROM\na;\n-->\nQUERY PLAN:\nSeq Scan on a (cost=0.00..60.37 rows=2237 width=128)\n SubPlan\n -> Seq Scan on b (cost=0.00..373.76 rows=1 width=4)\n\n\nSELECT a.*, (SELECT max(b.someval) FROM b WHERE b.int8val = a.int4val::int8)\nFROM a;\n-->\nQUERY PLAN:\nSeq Scan on a (cost=0.00..60.37 rows=2237 width=128)\n SubPlan\n -> Index Scan using b_pkey on b (cost=0.00..2.04 rows=1 width=4)\n\nRegards, Michael\n\n\n\n\n",
"msg_date": "Sun, 15 Sep 2002 00:45:24 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": true,
"msg_subject": "Re: Indexes and differing column types"
}
] |
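As a rough sketch of why the un-cast subquery above falls back to a sequential scan: an index can only be used when the comparison operator is the exact one belonging to the index's operator class, and `int8col = <int4 value>` matches no entry until the value is cast to int8. This toy lookup is purely illustrative (the table and names are invented, not the real planner code):

```python
# Hypothetical operator-class table: the index on b.int8val only knows the
# int8 = int8 operator, so a cross-type comparison finds no match.
INDEX_OPCLASSES = {
    ("int8", "=", "int8"): "b_pkey",
}

def pick_plan(indexed_type, op, rhs_type):
    """Mimic the planner's matching step: exact operator match -> index scan."""
    key = (indexed_type, op, rhs_type)
    if key in INDEX_OPCLASSES:
        return "Index Scan using " + INDEX_OPCLASSES[key]
    return "Seq Scan"

print(pick_plan("int8", "=", "int4"))   # Seq Scan (b.int8val = a.int4val)
print(pick_plan("int8", "=", "int8"))   # Index Scan using b_pkey (after ::int8)
```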
[
{
"msg_contents": "I wonder how hard it would be to run a database server against a database \nthat is already being run. The idea is to be able to do read only queries \nagainst the database from a different server on a shared NFS mounted \ndatabase. The second server would need to be able to start in a mode that \nignored the lock and only allowed queries that read the database. This would \nallow many intensive report queries against a busy transaction database.\n\nPossible? Possible with a little work? A lot of work?\n\nAnother question, can a database server for one system (e.g. NetBSD on i386) \nrun a database originally created on another (e.g. AIX on RS6000) or are \nthere binary incompatibilities?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 14 Sep 2002 18:39:49 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": true,
"msg_subject": "Reading a live database"
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> I wonder how hard it would be to run a database server against a database \n> that is already being run. The idea is to be able to do read only queries \n> against the database from a different server on a shared NFS mounted \n> database.\n\nThe odds of this are nil, unless maybe *all* the servers treat the\ndatabase as read-only, which doesn't seem very interesting.\n\n> Another question, can a database server for one system (e.g. NetBSD on i386) \n> run a database originally created on another (e.g. AIX on RS6000) or are \n> there binary incompatibilities?\n\nThere are binary incompatibilities if the platforms have differences in\nendianness, alignment, or floating-point formats.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Sep 2002 18:53:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reading a live database "
},
{
"msg_contents": "On September 14, 2002 06:53 pm, Tom Lane wrote:\n> \"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> > I wonder how hard it would be to run a database server against a database\n> > that is already being run. The idea is to be able to do read only\n> > queries against the database from a different server on a shared NFS\n> > mounted database.\n>\n> The odds of this are nil, unless maybe *all* the servers treat the\n> database as read-only, which doesn't seem very interesting.\n\nYah, wishful thinking on my part. :-(\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 14 Sep 2002 22:42:52 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": true,
"msg_subject": "Re: Reading a live database"
},
{
"msg_contents": "On 14 Sep 2002 at 18:39, D'Arcy J.M. Cain wrote:\n\n> I wonder how hard it would be to run a database server against a database \n> that is already being run. The idea is to be able to do read only queries \n> against the database from a different server on a shared NFS mounted \n> database. The second server would need to be able to start in a mode that \n> ignored the lock and only allowed queries that read the database. This would \n> allow many intensive report queries against a busy transaction database.\n> \n> Possible? Possible with a little work? A lot of work?\n\nI think it should be possible with the help of the application.\n\nSay you install real-time replication like usogres and replicate your \ndatabase and connect to either of them for data selection; it should be \npossible, but it would need some code on the application side to switch connections. \n\nSay you connect to the master database for critical queries and to the slave database \nfor queries that are huge in data sets but used in statistical analysis, where a \ncouple of rows here and there, still unsynced, would not matter much. Depends \nupon the application though...\n\nNever used usogres so no idea how good that is. But if it does what it says, \nthen I guess it should be possible.\n\nOf course this is not exactly the same as what you are asking for, i.e. using the same \nstorage area. But this is an alternate approach where you can load balance \nthings.\n\nCombined with HA-postgresql (http://www.taygeta.com/ha-postgresql.html), you \nshould get good redundancy with this approach. The technique described there \ndoes not use real-time replication, but I would prefer that if you are going to \nload balance your queries against multiple servers. The only thing is you need \nredundant storage in this scheme.\n\nHTH\n\nBye\n Shridhar\n\n--\nQOTD:\t\"I'd never marry a woman who didn't like pizza... I might play\tgolf with \nher, but I wouldn't marry her!\"\n\n",
"msg_date": "Sun, 15 Sep 2002 14:53:58 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": false,
"msg_subject": "Re: Reading a live database"
}
] |
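Tom's second answer in the thread above, endianness as a source of binary incompatibility between platforms like AIX/RS6000 and NetBSD/i386, is easy to demonstrate. A small Python sketch (illustration only; PostgreSQL's on-disk format involves more than raw integer byte order, e.g. alignment and float formats):

```python
# The same int4 value has a different byte layout on big-endian machines
# (e.g. RS/6000) than on little-endian ones (e.g. i386), so raw data files
# written by one are garbage to the other.
import struct

value = 1
big    = struct.pack(">i", value)   # big-endian int4
little = struct.pack("<i", value)   # little-endian int4

print(big.hex(), little.hex())      # 00000001 01000000

# Reading little-endian bytes under big-endian assumptions garbles the value:
print(struct.unpack(">i", little)[0])   # 16777216, not 1
```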
[
{
"msg_contents": "I'm trying to run psql on Windows 2000, but it prints an error message\nsaying:\n\npsql: could not create socket: An address incompatible with the requested\nprotocol was used.\n (0x0000273F)\n\nDebugging a small application I have written myself, I have discovered that\nthe function 'socket' inside the database connection function fails with the\nerror: \"The specified address family is not supported\".\n\nFrom the Cygwin environment, psql runs fine, and I have properly\ninstalled cygipc.\n\nDoes anyone know about this problem?\n\nThank you,\nAvishay Orpaz\n\n\n",
"msg_date": "Sun, 15 Sep 2002 00:50:28 +0200",
"msg_from": "\"Avishay Orpaz\" <avishorp@walla.co.il>",
"msg_from_op": true,
"msg_subject": "Problem with psql on Win32"
},
{
"msg_contents": "\"Avishay Orpaz\" <avishorp@walla.co.il> writes:\n> I'm trying to run psql on Windows 2000, but it prints an error message\n> saying:\n\n> psql: could not create socket: An address incompatible with the requested\n> protocol was used.\n> (0x0000273F)\n\n> Debugging a small application I have written myself, I have discovered that\n> the function 'socket' inside the database connection function fails with the\n> error: \"The specified address family is not supported\".\n\nTry \"psql -h localhost\" (or set PGHOST environment variable). I don't\nthink Unix-socket connections exist in Windows.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 11:28:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem with psql on Win32 "
}
] |
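A sketch of the behavior Tom describes, with invented names (this is not libpq source): when no host is supplied, the client targets a Unix-domain socket, which the Windows build cannot create; supplying a host via `-h` or `PGHOST` forces a TCP socket instead.

```python
# Simplified model of libpq's connection-target choice. The default socket
# path and function name are assumptions for illustration.
def connection_target(host=None, port=5432, env=None):
    env = env or {}
    host = host or env.get("PGHOST")
    if host is None:
        return ("unix", "/tmp/.s.PGSQL.%d" % port)   # AF_UNIX - fails on Win32
    return ("tcp", (host, port))                      # AF_INET - works anywhere

print(connection_target())                          # ('unix', '/tmp/.s.PGSQL.5432')
print(connection_target(host="localhost"))          # ('tcp', ('localhost', 5432))
print(connection_target(env={"PGHOST": "localhost"}))
```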
[
{
"msg_contents": "We've been discussing this stuff in fits and starts for months now, but\nnothing satisfactory has been arrived at. I've concluded that part of\nthe problem is that we are trying to force the system's behavior into\na model that is too limiting: we need more than an implicit/explicit cast\ndistinction. Accordingly, I suggest we bite the bullet and make it happen.\n(Note that I've resigned myself to having to do an initdb for 7.3beta2.)\n\nI think we must extend pg_cast's castimplicit column to a three-way value:\n\t* okay as implicit cast in expression (or in assignment)\n\t* okay as implicit cast in assignment only\n\t* okay only as explicit cast\n\n\"In expression\" refers to cases where we have (or potentially have) multiple\npossible interpretations; essentially, anytime a value is being fed to a\nfunction or operator, there can be ambiguity due to overloading, and so we\nneed to restrict the set of possible implicit casts to limit ambiguity and\nensure a reasonable choice of function is made.\n\n\"In assignment only\" actually means any case where the destination datatype\nis known with certainty. For example CoerceTargetExpr is currently used to\ncoerce an array subscript expression to integer, and I think it's okay to\ntreat that context like store assignment.\n\nQuestion: what shall we call these alternatives in CREATE CAST? The SQL99\nphrase AS ASSIGNMENT looks like it should mean the second, but I think\nthe spec semantics require it to mean the first. Ugh. Perhaps AS\nASSIGNMENT ONLY for the second case?\n\nAlso, I think we should allow cast functions to take an optional boolean\nsecond argument \"isExplicit\", so that explicit casts can be distinguished\nfrom implicit at runtime.  We'll use this to get spec-compliant semantics\nfor char/varchar truncation (there shouldn't be an error if you explicitly\ncast to a shorter length).\n\nWe'll need to add fields to Func and RelabelType nodes so that we can tell\nwhether a node was generated due to an explicit function call, implicit\ncast, or explicit cast; we'll use these for better reverse-listing. (In\nparticular this will let us hide the boolean second argument from being\nreverse-listed, when present.)\n\nNow, as to just what to do with it --- Peter posted a list of questions\nawhile back that weren't ever resolved, but I think we can make some\nprogress with this scheme in mind:\n\n> From looking at the set of implicit or not casts, I think there are two\n> major issues to discuss:\n> \n> 1. Should truncating/rounding casts be implicit? (e.g., float4 -> int4)\n> \n> I think there's a good argument for \"no\", but for some reason SQL99 says\n> \"yes\", at least for the family of numerical types.\n\nWe can make this work cleanly if \"down\" casts are assignment-only while\n\"up\" casts are fully implicit. I think that the spec requires implicit\ncasting only in the context of store assignment.\n\n> 2. Should casts from non-character types to text be implicit? (e.g., date\n> -> text)\n> \n> I think this should be \"no\", for the same reason that the other direction\n> is already disallowed. It's just sloppy programming.\n\nI agree with this in principle, but in practice we probably have to allow\nimplicit casts to text, at least for awhile yet. Seems that too many\npeople depend on stuff like\n\tSELECT 'Meeting time is ' || timestamp_var\nSince this is an expression context we don't get any help from the notion\nof store assignment :-(\n\n> I also have a few individual cases that look worthy of consideration:\n> \n> abstime <-> int4: I think these should not be implicit because they\n> represent different \"kinds\" of data.  (These are binary compatible casts,\n> so changing them to not implicit probably won't have any effect. I'd have\n> to check this.)\n\nI believe that as of current sources we can mark a binary cast non-implicit,\nand I agree with marking these two explicit-only.\n\n> date -> timestamp[tz]: I'm suspicious of this one, but it's hard to\n> explain. The definition to fill in the time component with zeros is\n> reasonable, but it's not the same thing as casting integers to floats\n> because dates really represent a time span of 24 hours and timestamps an\n> indivisible point in time. I suggest making this non-implicit, for\n> conformance with SQL and for general consistency between the date/time\n> types.\n\nI disagree here; promoting date to timestamp seems perfectly reasonable,\nand I think it's something a lot of people rely on.\n\n> time -> interval: I'm not even sure this cast should exist at all.\n> Proper arithmetic would be IntervalValue = TimeValue - TIME 'midnight'.\n> At least make it non-implicit.\n\nI'd go along with marking it assignment-only.\n\n> timestamp -> abstime: This can be implicit AFAICS.\n\nThis is lossy (abstime doesn't preserve fractional seconds) so I'd vote\nfor making it assignment-only.\n\n\nIn a later message Peter wrote:\n\n> Since almost every cast to \"text\" is implicit, then I believe so should\n> inet -> text\n> macaddr -> text\n> int4 -> varchar\n> int8 -> varchar\n> which are currently not.\n\nI'd go along with making the inet->text and macaddr->text cases implicit,\nsince as you note all the other casts to text are.  However, those two\ncasts to varchar must not be implicit (or at most assignment-only) else\nthey will create ambiguity against the implicit casts to text for the same\nsource datatype.\n\n\nIn summary: I haven't yet gone through the existing casts in detail, but\nI propose the following general rules for deciding how to mark casts:\n\n* Casts across datatype categories should be explicit-only, with the\nexception of casts to text, which we will allow implicitly for backward\ncompatibility's sake.\n\n* Within a category, \"up\" (lossless) conversions are implicit, \"down\"\n(potentially lossy) conversions should be assignment-only.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Sep 2002 13:09:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Proposal for resolving casting issues"
},
{
"msg_contents": "> > abstime <-> int4: I think these should not be implicit because they\n> > represent different \"kinds\" of data. (These are binary\n> compatible casts,\n> > so changing them to not implicit probably won't have any\n> effect. I'd have\n> > to check this.)\n>\n> I believe that as of current sources we can mark a binary cast\n> non-implicit,\n> and I agree with marking these two explicit-only.\n\nEverything in this proposal looks pretty good. With regards to the above\nabstime<->int4 thing - what about the 'magic' values in that conversion.\n(eg. -infinity, etc.)\n\nChris\n\n",
"msg_date": "Mon, 16 Sep 2002 10:10:05 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Everything in this proposal looks pretty good. With regards to the above\n> abstime<->int4 thing - what about the 'magic' values in that conversion.\n> (eg. -infinity, etc.)\n\nThey map to some magic int4 values, same as it ever was. I'm not\ninterested in trying to improve the semantics of any specific conversion\nat the moment...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Sep 2002 23:48:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "I said:\n> Also, I think we should allow cast functions to take an optional boolean\n> second argument \"isExplicit\", so that explicit casts can be distinguished\n> from implicit at runtime. We'll use this to get spec-compliant semantics\n> for char/varchar truncation (there shouldn't be an error if you explicitly\n> cast to a shorter length).\n\nAfter looking closely at SQL92 sections 6.10 (cast specification) and\n9.2 (store assignment), it seems that the only places where the spec\ndemands different behavior for an explicit cast than for an implicit\nassignment cast are for length coercions of char, varchar, bit, and\nvarbit types.\n\nAccordingly, the places where we actually *need* the extra isExplicit\nargument are not in the type-coercion functions per se, but in the\nlength-coercion functions associated with these four datatypes.\n\nWhile we could still add the extra argument for the type-coercion\nfunctions, I'm inclined not to do so; there is no need for it for spec\ncompliance of the standard types, and I don't think we should encourage\nuser-defined types to behave differently for explicit and implicit\ncasts.\n\nWhat I will do instead is adjust parse_coerce.c so that a\nlength-coercion function can have either of the signatures\n\tfoo(foo,int4) returns foo\nor\n\tfoo(foo,int4,bool) returns foo\nand then modify the above-mentioned length coercion functions to provide\nthe desired behavior. This has no direct impact on pg_cast because we\ndo not use pg_cast for length-coercion functions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 12:44:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Tom Lane writes:\n\n> I think we must extend pg_cast's castimplicit column to a three-way value:\n> \t* okay as implicit cast in expression (or in assignment)\n> \t* okay as implicit cast in assignment only\n> \t* okay only as explicit cast\n\nViewed in isolation this looks entirely reasonable, but I think we would\nbe adding a lot of infrastructure for the benefit of a relatively small\nnumber of cases.\n\nAs the writer of a cast, this presents me with at least one more option\nthan I can really manage.\n\nAs the user of a cast, these options make the whole system nearly\nunpredictable because in any non-trivial expression each of these\nbehaviors could take effect somehow (possibly even depending on how the\ninner expressions turned out).\n\nI am not aware of any programming language that has more than three\ncastability levels (never/explicit/implicit).\n\nFinally, I believe this paints over the real problems, namely the\ninadequate and hardcoded type category preferences and the inadequate\nhandling of numerical constants. Both of these issues have had adequate\napproaches proposed in the past and would solve this and a number of other\nissues.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 16 Sep 2002 19:26:08 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I think we must extend pg_cast's castimplicit column to a three-way value:\n>> * okay as implicit cast in expression (or in assignment)\n>> * okay as implicit cast in assignment only\n>> * okay only as explicit cast\n\n> As the user of a cast, these options make the whole system nearly\n> unpredictable because in any non-trivial expression each of these\n> behaviors could take effect somehow (possibly even depending on how the\n> inner expressions turned out).\n\nHow so? Only the first set of casts applies inside an expression.\n\nIt seems to me that this proposal actually *reduces* the number of casts\nthat might apply in any given context, and thus makes the behavior more\npredictable not less so. Certainly it is more predictable than\nany-cast-can-be-applied-implicitly, which I seem to remember you arguing\nfor (at least for the numeric types).\n\n> I am not aware of any programming language that has more than three\n> castability levels (never/explicit/implicit).\n\nActually I think that this scheme would allow us to model typical\nprogramming-language behavior quite accurately. C for example will let\nyou assign a float to an integer (with appropriate runtime behavior) ---\nbut if you add a float and an integer, you get a float addition; there's\nno possibility that the system will choose to coerce the float to int\nand do an int addition. So the set of available implicit casts is\ndifferent in an assignment context than it is in an expression context.\nSeems pretty close to what I'm suggesting.\n\n> Finally, I believe this paints over the real problems, namely the\n> inadequate and hardcoded type category preferences and the inadequate\n> handling of numerical constants. Both of these issues have had adequate\n> approaches proposed in the past and would solve this and a number of other\n> issues.\n\nIf they were adequate they would have gotten implemented; we had issues\nwith all the proposals so far. See my later response to Andreas for a\npossible solution to the numerical-constant issue based on this\nmechanism.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 13:42:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Tom Lane wrote:\n> We've been discussing this stuff in fits and starts for months now, but\n> nothing satisfactory has been arrived at. I've concluded that part of\n> the problem is that we are trying to force the system's behavior into\n> a model that is too limiting: we need more than an implicit/explicit cast\n> distinction. Accordingly, I suggest we bite the bullet and make it happen.\n> (Note that I've resigned myself to having to do an initdb for 7.3beta2.)\n\nI was reading my backlog of email and thinking, \"Oh, things are shaping\nup well\", then I hit this message. Let me try to collect open items\ntomorrow and get a plan together. I have caught up on my email. I am\nheading to bed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 02:05:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "I wrote:\n> I think we must extend pg_cast's castimplicit column to a three-way value:\n> \t* okay as implicit cast in expression (or in assignment)\n> \t* okay as implicit cast in assignment only\n> \t* okay only as explicit cast\n\n> Question: what shall we call these alternatives in CREATE CAST? The SQL99\n> phrase AS ASSIGNMENT looks like it should mean the second, but I think\n> the spec semantics require it to mean the first. Ugh. Perhaps AS\n> ASSIGNMENT ONLY for the second case?\n\nOn looking more closely, SQL99 appears to define user-defined casts as\ninvocable *only* in explicit cast and assignment contexts. Part 2 sez:\n\n 4.13 Data conversions\n\n Explicit data conversions can be specified by a CAST operator.\n A CAST operator defines how values of a source data type are\n converted into a value of a target data type according to\n the Syntax Rules and General Rules of Subclause 6.22, \"<cast\n specification>\". Data conversions between predefined data types\n and between constructed types are defined by the rules of this part\n of ISO/IEC 9075. Data conversions between one or more user-defined\n types are defined by a user-defined cast.\n\n A user-defined cast identifies an SQL-invoked function, called the\n cast function, that has one SQL parameter whose declared type is\n the same as the source data type and a result data type that is the\n target data type. A cast function may optionally be specified to\n be implicitly invoked whenever values are assigned to targets of\n its result data type.  Such a cast function is called an implicitly\n invocable cast function.\n\nThis seems to mean that we can get away with defining AS ASSIGNMENT to\nmean my second category (implicit in assignment only), and then picking\nsome more natural term for my first category (implicit anywhere).\n\nI favor using IMPLICIT, which would make the syntax of CREATE CAST be\n\n CREATE CAST (sourcetype AS targettype)\n WITH FUNCTION funcname (argtype)\n [ AS ASSIGNMENT | IMPLICIT ]\n \n CREATE CAST (sourcetype AS targettype)\n WITHOUT FUNCTION\n [ AS ASSIGNMENT | IMPLICIT ]\n\nOr possibly it should be AS IMPLICIT?\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 11:36:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Tom Lane wrote:\n> I favor using IMPLICIT, which would make the syntax of CREATE CAST be\n> \n> CREATE CAST (sourcetype AS targettype)\n> WITH FUNCTION funcname (argtype)\n> [ AS ASSIGNMENT | IMPLICIT ]\n> \n> CREATE CAST (sourcetype AS targettype)\n> WITHOUT FUNCTION\n> [ AS ASSIGNMENT | IMPLICIT ]\n> \n> Or possibly it should be AS IMPLICIT?\n\nI think AS IMPLICIT would be better because we have other AS [var]\nclauses.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 14:40:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "I wrote:\n> [Peter wrote:]\n>> time -> interval: I'm not even sure this cast should exist at all.\n>> Proper arithmetic would be IntervalValue = TimeValue - TIME 'midnight'.\n>> At least make it non-implicit.\n\n> I'd go along with marking it assignment-only.\n\nI started to make this change, but have momentarily backed off after\nobserving that it causes a failure in the regression tests:\n\n*** ./expected/horology-no-DST-before-1970.out\tWed Sep 18 13:56:41 2002\n--- ./results/horology.out\tWed Sep 18 15:45:54 2002\n***************\n*** 277,287 ****\n \n -- subtract time from date should not make sense; use interval instead\n SELECT date '1991-02-03' - time '04:05:06' AS \"Subtract Time\";\n! Subtract Time \n! --------------------------\n! Sat Feb 02 19:54:54 1991\n! (1 row)\n! \n SELECT date '1991-02-03' - time with time zone '04:05:06 UTC' AS \"Subtract Time UTC\";\n ERROR: Unable to identify an operator '-' for types 'date' and 'time with time zone'\n \tYou will have to retype this query using an explicit cast\n--- 277,284 ----\n \n -- subtract time from date should not make sense; use interval instead\n SELECT date '1991-02-03' - time '04:05:06' AS \"Subtract Time\";\n! ERROR: Unable to identify an operator '-' for types 'date' and 'time without time zone'\n! \tYou will have to retype this query using an explicit cast\n SELECT date '1991-02-03' - time with time zone '04:05:06 UTC' AS \"Subtract Time UTC\";\n ERROR: Unable to identify an operator '-' for types 'date' and 'time with time zone'\n \tYou will have to retype this query using an explicit cast\n\n\nThe regression test is evidently relying on the implicit cast from time\nto interval to allow the date - interval operator to be used for this\nquery.\n\nNow, given that the regression test itself observes that 'date - time'\nis wrong, and should be 'date - interval', maybe this behavioral change\nis a Good Thing. Or maybe it will just break applications.  Comments?\n\nI'm going to commit my pg_cast changes without this change later today,\nbut we can still go back and add this change if we decide it's good.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 16:00:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > Or possibly it should be AS IMPLICIT?\n>\n> I think AS IMPLICIT would be better because we have other AS [var]\n> clauses.\n\nBut IMPLICIT is not a variable.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 18 Sep 2002 22:09:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "Tom Lane writes:\n\n> On looking more closely, SQL99 appears to define user-defined casts as\n> invocable *only* in explicit cast and assignment contexts.\n\n> This seems to mean that we can get away with defining AS ASSIGNMENT to\n> mean my second category (implicit in assignment only), and then picking\n> some more natural term for my first category (implicit anywhere).\n\nSounds good.\n\nHave you seen 9.4 \"Subject routine determination\" and 9.5 \"Type\nprecedence list determination\"? In essence, the SQL standard has a\nhard-coded precedence list much like we have. Since we support the\ncreation of non-structured user-defined types, the additional castability\nlevel effectively gives us a way to override the built-in precedence\nlists. In fact, now that we have given up on the numeric/float8\nprecedence, the other hard-coded categories should be easy to eliminate.\n\n> CREATE CAST (sourcetype AS targettype)\n> WITH FUNCTION funcname (argtype)\n> [ AS ASSIGNMENT | IMPLICIT ]\n\nFine with me.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 18 Sep 2002 22:10:21 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > Or possibly it should be AS IMPLICIT?\n> >\n> > I think AS IMPLICIT would be better because we have other AS [var]\n> > clauses.\n> \n> But IMPLICIT is not a variable.\n\nI meant we have cases where we do AS [ keyword1 | keyword2 ].\n\n CREATE OPERATOR CLASS any_name opt_default FOR TYPE_P Typename\n USING access_method AS opclass_item_list\n\nWhat I am saying is that it is better to do AS [ keyword | keyword ] rather\nthan [ AS keyword | keyword ].\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 17:13:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What I am saying is that is better to do AS [ keyword | keyword ] rather\n> than [ AS keyword | keyword ].\n\nYeah, I thought the same after looking at it a little. Committed that\nway (of course it's still open to adjustment...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 17:37:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Hi PostgreSQL Folks,\n\n I would like to inquire how the BLOB support in PostgreSQL is doing \nnow? Have there been any improvements? Can I have blob support in \nthis manner?\n\n\n create table myblob (\n\tblobid serial not null primary key,\n\tname varchar(50),\n\timage blob);\n\n For some doc, xls, and ppt files, can I do these operations?\n\n Insert into myblob (name,image) values (' personal data','personal.doc');\n Insert into myblob (name,image) values (' business data','business.xls');\n Insert into myblob (name,image) values (' presentation data','present.ppt');\n\n I would appreciate it very much for whatever comments you can give me \non this.\n\n Thank you and MORE POWER TO THE BEST OPENSOURCE DBMS!\n\nMr. Manny Cabido\nPhilippines\n\n\n",
"msg_date": "Thu, 19 Sep 2002 06:32:47 +0800 (PHT)",
"msg_from": "Manuel Cabido <manny@tinago.msuiit.edu.ph>",
"msg_from_op": false,
"msg_subject": "Re: BLOB"
},
{
"msg_contents": "On Wed, 2002-09-18 at 18:32, Manuel Cabido wrote:\n> Hi PostgreSQL Folks,\n> \n> I would like to inquire how is the BLOB support in PostgreSQL is doing \n> now? Had there been some improvements? Can I have the blob support like in \n\nI'm unsure about blob (didn't know we had a blob type), but bytea works\nperfectly fine for that.\n\n-- \n Rod Taylor\n\n",
"msg_date": "18 Sep 2002 20:12:27 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: BLOB"
},
{
"msg_contents": "> I would like to inquire how is the BLOB support in\n> PostgreSQL is doing\n> > now? Had there been some improvements? Can I have the blob\n> support like in\n>\n> I'm unsure about blob (didn't know we had a blob type), but bytea works\n> perfectly fine for that.\n\nIs there some reason why we didn't call text 'clob' and bytea 'blob'? or at\nleast add aliases?\n\nChris\n\n",
"msg_date": "Thu, 19 Sep 2002 10:42:14 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: BLOB"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> Is there some reason why we didn't call text 'clob' and bytea 'blob'?\n\nAt the time our types were created there was no standard defining the\nother types.\n\n> or at least add aliases?\n\nMapping clob to text might be OK, but blob and bytea have totally\ndifferent input formats.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 19 Sep 2002 23:38:07 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: BLOB"
}
] |
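Rod's bytea suggestion in the thread above works for arbitrary file contents, but as Peter notes, bytea has its own input format: the raw bytes must be escaped (or bound as a driver parameter, which is the preferred route) before they can appear in an INSERT. A minimal sketch of the classic "escape" input format in Python; the helper name `bytea_escape` is mine, not a PostgreSQL API:

```python
def bytea_escape(data: bytes) -> str:
    """Encode raw bytes in PostgreSQL's old-style "escape" bytea
    input format: backslash is doubled, the single quote is doubled
    for SQL-literal safety, and non-printable bytes become \\ooo
    octal escapes. Real applications should bind parameters through
    their driver instead of building literals by hand."""
    out = []
    for b in data:
        if b == 0x5C:                 # backslash -> \\
            out.append("\\\\")
        elif b == 0x27:               # single quote -> '' (SQL literal)
            out.append("''")
        elif 0x20 <= b <= 0x7E:       # printable ASCII passes through
            out.append(chr(b))
        else:                         # everything else -> \ooo
            out.append("\\%03o" % b)
    return "".join(out)

print(bytea_escape(b"doc\x00\x01"))   # doc\000\001
```

With a helper like this, the `personal.doc` example from the thread becomes reading the file with `open(path, "rb").read()` and interpolating the escaped result into the INSERT, though again, parameter binding avoids the escaping entirely.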
[
{
"msg_contents": "Hi everyone,\n\nJust wondering if anyone here works for a company that supports\nPostgreSQL, and is \"Officially\" government approved? (by your applicable\ngovernment(s))\n\nAm looking to create another company section just of organisations whom\nare government approved, to put on techdocs.postgresql.org and similar\nsites.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 16 Sep 2002 18:41:44 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Government approved support companies?"
}
] |
[
{
"msg_contents": "Hi Everyone,\n\nThe phpPgAdmin project is currently getting stuck into the development of\nthe next-generation web admin software for Postgres. We are rewriting from\nscratch to remove the old, crufty phpMyAdmin coding style. We are also\nsuffering from a lack of active developers. Those of you currently using\nphpPgAdmin will not be able to use it with Postgres 7.3 - we'll have to\nfinish WebDB first! We can do it faster, and with the features you want if\nyou help us!\n\nCurrently, WebDB is getting towards the 'usable' stage, so it's not like\nyou'd have to work with bare-bones code.\n\nAims:\n\n* Nice OO model - no SQL in the interface scripts and supporting different\nversions of Postgres is trivial.\n* Old phpPgAdmin codebase is too hard to make 7.3 compliant\n* Works with register_globals off\n* ALL strings escaped, quoted, etc properly\n* Database independent. Althought currently Postgres oriented, it is\ndesigned in such a way that the same app can admin MySQL, Oracle, or\nwhatever - simultaneously.\n* All fonts, colours and images done with stylesheets and themes\n* Easy to read coding style, everything commented\n* Max error reporting\n* Security conscious\n\nYou can find us on SourceForge here:\n\nhttp://phppgadmin.sourceforge.net/?page=project\n\nIf you want to get into developing this new version, please checkout the\n'webdb' module, NOT the old phppgadmin module.\n\nDevelopers mailing list: phppgadmin-devel@lists.sourceforge.net\n\nRegards,\n\nChris\n\n",
"msg_date": "Mon, 16 Sep 2002 16:52:11 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "WebDB Developers Wanted"
},
{
"msg_contents": "Hey, Chris,\n\n Thanks for the info. It's Fort-Knox worth of knowledge for me!\nTwo things:\n\n1) I can't find the WebDB module... do you mean WebPG? Could you post\nthe link to that, if it isn't:\n\nhttp://sourceforge.net/forum/forum.php?forum_id=129210\n\n2) I'm no expert, and want to help. I _/use/_ php and postgresql. I do\nvery simple coding. Are bug reports enough from me?\n\nOn Mon, 16 Sep 2002, Christopher Kings-Lynne wrote:\n\n> Hi Everyone,\n> \n> The phpPgAdmin project is currently getting stuck into the development of\n> the next-generation web admin software for Postgres. We are rewriting from\n> scratch to remove the old, crufty phpMyAdmin coding style. We are also\n> suffering from a lack of active developers. Those of you currently using\n> phpPgAdmin will not be able to use it with Postgres 7.3 - we'll have to\n> finish WebDB first! We can do it faster, and with the features you want if\n> you help us!\n> \n> Currently, WebDB is getting towards the 'usable' stage, so it's not like\n> you'd have to work with bare-bones code.\n> \n> Aims:\n> \n> * Nice OO model - no SQL in the interface scripts and supporting different\n> versions of Postgres is trivial.\n> * Old phpPgAdmin codebase is too hard to make 7.3 compliant\n> * Works with register_globals off\n> * ALL strings escaped, quoted, etc properly\n> * Database independent. 
Althought currently Postgres oriented, it is\n> designed in such a way that the same app can admin MySQL, Oracle, or\n> whatever - simultaneously.\n> * All fonts, colours and images done with stylesheets and themes\n> * Easy to read coding style, everything commented\n> * Max error reporting\n> * Security conscious\n> \n> You can find us on SourceForge here:\n> \n> http://phppgadmin.sourceforge.net/?page=project\n> \n> If you want to get into developing this new version, please checkout the\n> 'webdb' module, NOT the old phppgadmin module.\n> \n> Developers mailing list: phppgadmin-devel@lists.sourceforge.net\n> \n> Regards,\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-Chadwick\n\n",
"msg_date": "Mon, 16 Sep 2002 11:23:00 -0400 (EDT)",
"msg_from": "Chadwick Rolfs <cmr@shell.gis.net>",
"msg_from_op": false,
"msg_subject": "Re: WebDB Developers Wanted"
},
{
"msg_contents": "On Mon, 2002-09-16 at 10:52, Christopher Kings-Lynne wrote:\n> Developers mailing list: phppgadmin-devel@lists.sourceforge.net\n\nHmm, that list does not appear on the sourceforge Lists page. Why?\n\n-- \nMarkus Bertheau.\nBerlin, Berlin.\nGermany.\n\n",
"msg_date": "27 Sep 2002 08:48:40 +0200",
"msg_from": "Markus Bertheau <twanger@bluetwanger.de>",
"msg_from_op": false,
"msg_subject": "Re: [PHP] WebDB Developers Wanted"
},
{
"msg_contents": "AFAIK it's because it is a members-only list. It is archived and the\narchives are web viewable if you know where to look (not that I can\nremember, but I have done it before). \n\nRobert Treat\n\nOn Fri, 2002-09-27 at 02:48, Markus Bertheau wrote:\n> On Mon, 2002-09-16 at 10:52, Christopher Kings-Lynne wrote:\n> > Developers mailing list: phppgadmin-devel@lists.sourceforge.net\n> \n> Hmm, that list does not appear on the sourceforge Lists page. Why?\n> \n> -- \n> Markus Bertheau.\n> Berlin, Berlin.\n> Germany.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n\n",
"msg_date": "27 Sep 2002 09:31:41 -0400",
"msg_from": "Robert Treat <xzilla@users.sourceforge.net>",
"msg_from_op": false,
"msg_subject": "Re: [PHP] WebDB Developers Wanted"
}
] |
[
{
"msg_contents": "\nI have been playing with schema's under 7.3beta and now I am trying to use pg_dump. Is there a way to just dump 1 of my\nschema out? \n\n\nShould pg_dump -t \"A.*\" ... dump out just schema A's objects?\n\n\nThanks\nJim\n\n",
"msg_date": "Mon, 16 Sep 2002 08:12:25 -0400",
"msg_from": "\"Jim Buttafuoco\" <jim@contactbda.com>",
"msg_from_op": true,
"msg_subject": "7.3 Beta Schema and pg_dump"
},
{
"msg_contents": "\"Jim Buttafuoco\" <jim@contactbda.com> writes:\n> I have been playing with schema's under 7.3beta and now I am trying to\n> use pg_dump. Is there a way to just dump 1 of my schema out?\n\nNope; I intended to make that happen, but didn't get around to it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 09:16:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 Beta Schema and pg_dump "
},
{
"msg_contents": "This seems like a \"must\" have option for pg_dump, looking at the code, it seems that only a couple of queries would\nneed to be changed and a command line option (-N, --namespace=) would need to be added.\n\nIf I was to create a patch would it make it into 7.3?\n\nThanks\nJim\n\n\n\n> \"Jim Buttafuoco\" <jim@contactbda.com> writes:\n> > I have been playing with schema's under 7.3beta and now I am trying to\n> > use pg_dump. Is there a way to just dump 1 of my schema out?\n> \n> Nope; I intended to make that happen, but didn't get around to it.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n\n\n",
"msg_date": "Mon, 16 Sep 2002 09:35:26 -0400",
"msg_from": "\"Jim Buttafuoco\" <jim@contactbda.com>",
"msg_from_op": true,
"msg_subject": "Re: 7.3 Beta Schema and pg_dump "
},
{
"msg_contents": "\"Jim Buttafuoco\" <jim@contactbda.com> writes:\n> This seems like a \"must\" have option for pg_dump,\n> If I was to create a patch would it make it into 7.3?\n\nI dunno ... I'd like to have it too, but it would break our \"no new\nfeatures during beta\" rule. Comments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 10:02:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 Beta Schema and pg_dump "
},
{
"msg_contents": "On Mon, 16 Sep 2002, Tom Lane wrote:\n\n> \"Jim Buttafuoco\" <jim@contactbda.com> writes:\n> > This seems like a \"must\" have option for pg_dump,\n> > If I was to create a patch would it make it into 7.3?\n>\n> I dunno ... I'd like to have it too, but it would break our \"no new\n> features during beta\" rule. Comments anyone?\n>\n\nI'd love to have this feature. What's about contrib/ ?\n\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 16 Sep 2002 18:32:39 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 Beta Schema and pg_dump "
},
{
"msg_contents": "Tom Lane writes:\n\n> \"Jim Buttafuoco\" <jim@contactbda.com> writes:\n> > This seems like a \"must\" have option for pg_dump,\n> > If I was to create a patch would it make it into 7.3?\n>\n> I dunno ... I'd like to have it too, but it would break our \"no new\n> features during beta\" rule. Comments anyone?\n\nI do not believe Jim's premise that it would only take a couple of changes\nto a couple of queries. Clearly, it would take at least some change to\nany query that reads information about schema-aware objects. And it needs\nto be made backward compatible. And what about cross-schema dependencies?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 16 Sep 2002 19:25:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 Beta Schema and pg_dump "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> \"Jim Buttafuoco\" <jim@contactbda.com> writes:\n> This seems like a \"must\" have option for pg_dump,\n> If I was to create a patch would it make it into 7.3?\n>> \n>> I dunno ... I'd like to have it too, but it would break our \"no new\n>> features during beta\" rule. Comments anyone?\n\n> I do not believe Jim's premise that it would only take a couple of changes\n> to a couple of queries. Clearly, it would take at least some change to\n> any query that reads information about schema-aware objects.\n\nActually, my vision of this was not to change any queries at all, but\njust to modify the pg_dump code to include a \"select this schema\" option.\nThe infrastructure is already there in pg_dump to mark individual schemas\nas to be dumped or not --- we only need a command-line option that can\nmark them.\n\n> And what about cross-schema dependencies?\n\nGiven the lack of any dependency tracing in pg_dump at the moment, I\nthink we just leave that on the user's head for now. It's not any worse\nthan the behavior of -t, which is perfectly capable of giving you an\nincomplete dump.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 13:33:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 Beta Schema and pg_dump "
},
{
"msg_contents": "> \"Jim Buttafuoco\" <jim@contactbda.com> writes:\n> > This seems like a \"must\" have option for pg_dump,\n> > If I was to create a patch would it make it into 7.3?\n> \n> I dunno ... I'd like to have it too, but it would break our \"no new\n> features during beta\" rule. Comments anyone?\n\nI don't see a huge reason to stick to that rule in this instance...\n\nChris\n\n",
"msg_date": "Tue, 17 Sep 2002 09:30:47 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 Beta Schema and pg_dump "
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > \"Jim Buttafuoco\" <jim@contactbda.com> writes:\n> > > This seems like a \"must\" have option for pg_dump,\n> > > If I was to create a patch would it make it into 7.3?\n> > \n> > I dunno ... I'd like to have it too, but it would break our \"no new\n> > features during beta\" rule. Comments anyone?\n> \n> I don't see a huge reason to stick to that rule in this instance...\n\nReclassify it as a bug ... problem solved. ;-)\n\nActually, it is sort of a bug because it is something we should have\nadded for schemas but forgot.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 01:29:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 Beta Schema and pg_dump"
}
] |
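The single-schema option discussed in this thread did make it into later pg_dump releases as `-n`/`--schema`, so the dump Jim asked for is now a one-liner. Schema name `A` and database name `mydb` below are placeholders:

```shell
# Dump only schema A from database mydb (later pg_dump releases).
pg_dump --schema=A mydb > schema_a.sql

# Restore it into another database.
psql otherdb < schema_a.sql
```

As Tom notes about `-t`, a single-schema dump can still be incomplete if objects in the schema depend on objects outside it, so check cross-schema references before relying on the output.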
[
{
"msg_contents": "\n\nHello guys,\n\nAgain I'm having the same problem with postgres 7.2.1. Let me explain:\n\nWhen i do a:\n\n\\dt or \\dS i get:\n\nbelbonedb_v2=# \\dS\nERROR: AllocSetFree: cannot find block containing chunk 4da330\n\nsome things I checked:\n- \\d tablename works\n- pg_dump and pg_dumpall refuse ro work...\n\n\nAnyone knows what's going on?\n\nCheers!\n\nWim\n\n\n",
"msg_date": "Mon, 16 Sep 2002 14:29:57 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Urgent problem: AllocSetFree: cannot find block containing chunk ..."
}
] |
[
{
"msg_contents": "Greetings,\n\nI've found some problems to handle the directional marks (i.e. for arabic\ncharset in UNICODE 0x200e and 0x200f). As I've exported the db from Microsoft\nSQL 7.0 there were so many directional marks even inside words (i.e. \"foo\"\n-> \"f(200e)oo\"). This probably is due to an external program which was used\nto fill the db with values.\n\nDirectional marks are not shown in the client browser but for PostgreSQL\nis a character. This is a problem when I try to SELECT the db:\n\nSELECT * FROM table WHERE field = \"foo\";\n\nIf in \"table\" there is a record which contais \"foo\" as value for \"field\",\nbut it has a directional mark in it (i.e.: \"f(200e)oo\"), I can't get any\nresult.\n\nThe only way to fix the problem is to remove any directional mark occurrence,\nor to make PostgreSQL ignore that kind of characters during UNICODE queries.\n\nWhat do you think about it?\nThx.\n\n---\nNhan NGO DINH\n\n\n\n__________________________________________________________________\nTiscali Ricaricasa\nla prima prepagata per navigare in Internet a meno di un'urbana e\nrisparmiare su tutte le tue telefonate. Acquistala on line e non avrai\nnessun costo di attivazione né di ricarica!\nhttp://ricaricasaonline.tiscali.it/\n\n\n\n",
"msg_date": "Mon, 16 Sep 2002 14:46:03 +0200",
"msg_from": "nngodinh@tiscali.it",
"msg_from_op": true,
"msg_subject": "directional marks"
},
{
"msg_contents": "nngodinh@tiscali.it writes:\n\n> The only way to fix the problem is to remove any directional mark occurrence,\n> or to make PostgreSQL ignore that kind of characters during UNICODE queries.\n>\n> What do you think about it?\n\nEither remove the directional marks or consistently use them in all your\nqueries (or use wildcards to paint over the difference). The directional\nmark characters aren't just for amusement -- they contain real information\nso they cannot be ignored.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n",
"msg_date": "Mon, 16 Sep 2002 19:25:30 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: directional marks"
}
] |
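Peter's first suggestion, removing the directional marks before the data is loaded, is easy to script. A minimal sketch in Python; the helper name `strip_directional_marks` is mine, not part of any PostgreSQL tooling:

```python
# U+200E LEFT-TO-RIGHT MARK and U+200F RIGHT-TO-LEFT MARK are the
# directional marks discussed in this thread. Mapping their code
# points to None makes str.translate delete them, so equality
# comparisons like field = 'foo' behave as expected after loading.
_DIRECTIONAL = {0x200E: None, 0x200F: None}

def strip_directional_marks(text: str) -> str:
    return text.translate(_DIRECTIONAL)

print(strip_directional_marks("f\u200eoo"))  # foo
```

Running every incoming value through a filter like this during the export/import step sidesteps the query-time mismatch entirely, at the cost of discarding any genuine bidirectional layout information the marks carried.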
[
{
"msg_contents": "I just got this. Since it should be discussed here I just forward it\n(with permission of Sebastian). \n\nMichael\n\n----- Forwarded message from Sebastian Hetze <s.hetze@linux-ag.de> -----\n\nDate: Mon, 16 Sep 2002 07:53:34 +0200\nFrom: Sebastian Hetze <s.hetze@linux-ag.de>\nTo: dpage@vale-housing.co.uk\nCc: pgsql-odbc@postgresql.org, michael.meskes@credativ.de\nSubject: SQLProcedureColumns\n\nHi *,\n\nafter spending several hours to get some basic functionality for the\nODBC SQLProcedureColumns call working, I see why we don't have it yet ;-)\n\nCurrently, only a very limited range of information about arguments\nand return values of PostgreSQL database functions are available\nfrom pg_proc: the basic type and the order of the arguments.\nOther important properties like precision, length, default values or\neven descriptive names for the arguments are missing.\nThings are getting even more complicated since pg_proc is not in the\nfirst normalized form. \n\nTo get most out of the ODBC capabilities and because I think it will\nbe a useful feature for other interface and application developers\nout there, I suggest to introduce a new system table, lets say pg_procattr\nto hold all the relevant information.\nFor the beginning, I would be willing to just fill in the data by hand.\nLater on I suggest to extend the CREATE FUNCTION call to manage the basic\nthings automaticaly.\n\nAs a first shot into the blue my suggestion for the table structure:\n\nCREATE TABLE pg_procattr (\n\tattrelid\toid,\t-- OID of the pg_proc entry\n\tattname\t\tname,\n\tattclass\tint2,\t-- SQL_PARAM_INPUT, SQL_RESUL_COL etc.\n\tatttypid\toid,\t-- this should replace the oidvector thing\n\tattlen\t\tint2,\n\tatttypmod\tint4,\n\tattnotnull\tbool,\n\tatthasdef\tbool,\n\tordinal\t\tint4,\n\tremarks\t\tvarchar\n)\n\nThis structure is very similar to pg_attribute, so if we dont break integrity\nhere we could as well use the existing table and add a column for the\nattribute 
'direction' class. Adding a column for descriptive remarks for\neach column would not be too bad for ordinary table columns anyway...\n\nLet me know what you think before I start coding.\n\n\nBest regards,\n\n Sebastian\n-- \nSebastian Hetze Linux Information Systems AG\n Fon +49 (0)30 72 62 38-0 Ehrenbergstr. 19\nS.Hetze@Linux-AG.com Fax +49 (0)30 72 62 38-99 D-10245 Berlin\nLinux is our Business. ____________________________________ www.Linux-AG.com __\n\n----- End forwarded message -----\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Mon, 16 Sep 2002 14:46:22 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "[s.hetze@linux-ag.de: SQLProcedureColumns]"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> forwards:\n> Currently, only a very limited range of information about arguments\n> and return values of PostgreSQL database functions are available\n> from pg_proc: the basic type and the order of the arguments.\n> Other important properties like precision, length, default values or\n> even descriptive names for the arguments are missing.\n\nThis seems to be predicated on the assumption that these properties\nwould exist if only there were table columns to store them.\n\nIn point of fact, function arguments and results do not have precision\nor length, as a rule.\n\nAddition of default values sounds like a nice idea in principle but in\npractice it plays hob with the system's ability to choose a matching\nfunction --- if I write foo(1,2), that could match not only foo(int,int)\nbut foo(int,int,almost-anything) if there are defaults available for\narguments of the second version of foo. That needs very careful thought\nbefore we buy into it.\n\nNames for arguments would be nice, but they are not really worth a\nwholesale restructuring of pg_proc; a per-argument comment facility\nwould serve the need as well or better.\n\nFinally, the reason pg_proc is not normalized is that it's necessary to\nallow reasonable lookup of functions by signature. How would you\nenforce \"only one function named foo of arguments x,y,z\" in the proposed\nrestructured catalog? It's certainly not as easy as making a unique index.\n\n> Adding a column for descriptive remarks for\n> each column would not be too bad for ordinary table columns anyway...\n\nSee pg_description.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 10:19:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [s.hetze@linux-ag.de: SQLProcedureColumns] "
},
{
"msg_contents": "Thanks for the answer Tom. I will forward it to Sebastian.\n\n> function --- if I write foo(1,2), that could match not only foo(int,int)\n> but foo(int,int,almost-anything) if there are defaults available for\n> arguments of the second version of foo. That needs very careful thought\n> before we buy into it.\n\nThat's one of the things I talked to Sebastian about as well.\n\nAnd if you really need a Default you can code this in pretty easily.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Mon, 16 Sep 2002 17:54:25 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: [s.hetze@linux-ag.de: SQLProcedureColumns]"
}
] |
[
{
"msg_contents": "> I think we must extend pg_cast's castimplicit column to a three-way value:\n> \t* okay as implicit cast in expression (or in assignment)\n> \t* okay as implicit cast in assignment only\n> \t* okay only as explicit cast\n\n> In summary: I haven't yet gone through the existing casts in detail, but\n> I propose the following general rules for deciding how to mark casts:\n> \n> * Casts across datatype categories should be explicit-only, with the\n> exception of casts to text, which we will allow implicitly for backward\n> compatibility's sake.\n> \n> * Within a category, \"up\" (lossless) conversions are implicit, \"down\"\n> (potentially lossy) conversions should be assignment-only.\n\nFirst of all, thank you for taking this on !\n\nI think the following three states may enable a closer match to an actually \ndesired (Peter said mandated by SQL99) behavior.\n\n1. okay as implicit cast in expression or assignment\n2. okay as implicit cast in expression or assignment but needs runtime check\n\t(precision loss possible)\n3. okay only as explicit cast (precision loss possible)\n\nNow, since we prbbly can't do this all in beta I think it would be okay to \ninterpret my state 2 with yours for this release, and add expressions with \nruntime checks later.\n\nRegarding the \"isExplicit\": I think a closer match would be an output argument\n\"PrecisionInfo\" enum(ok, precision loss, [conversion failed ?]). With this, \nthe caller can differentiate whether to raise a warning (note that char truncation \nis actually supposed to raise a warning in sqlca.sqlwarn).\nMaybe make it in/out so you can tell the function to not abort on conversion error, \nsince what I think we need for constants is a \"try convert\" that does not even abort\non wrong input.\n\nFor numbers there is probably only the solution to invent an \"anynumber\" generic type. 
\n\nExamples that should imho succeed (and do succeed in other db's):\n\tselect int2col = 1000000;\tconversion fails (looses precision ?) --> return false\n\t\t\t\t\t\tthis case could probably behave better if it where defined,\n\t\t\t\t\t\tthat is_a_number but doesn't convert is a precision loss,\n\t\t\t\t\t\tmaybe with this anynumeric would not be necessary \n\tselect char6col = '123456789';\tconversion looses precision --> return false for eq\n\tselect int2col = 10.0;\t\tconversion ok --> use index on int2col (same for '10', '10.0')\n\nAndreas\n\nPS: pg snapshot 09/11 does not compile on AIX (large files (don't want _LARGE_FILES), and mb conversions\n(pg_ascii2mic and pg_mic2ascii not found in the postmaster and not included from elsewhere) \nare the culprit) (make check hangs on without_oid's vacuum when removing conversions from Makefile and \nundef _LARGE_FILES to make it compile) \nThere are also tons of \"unsigned char vs signed char\" warnings in current mb sources :-(\n",
"msg_date": "Mon, 16 Sep 2002 15:35:05 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> I think the following three states may enable a closer match to an actually \n> desired (Peter said mandated by SQL99) behavior.\n\n> 1. okay as implicit cast in expression or assignment\n> 2. okay as implicit cast in expression or assignment but needs runtime check\n> \t(precision loss possible)\n> 3. okay only as explicit cast (precision loss possible)\n\nThe runtime checks are there already, eg\n\nregression=# select 123456789::int4::int2;\nERROR: i4toi2: '123456789' causes int2 overflow\n\nHowever this does not help us much; the critical point is that if we\nwant function overloading to work in a sane fashion, we have to prefer\nup-conversions to down-conversions *at parse time*, at least for the\noperands of functions and operators (which is what I meant by \"in\nexpressions\"). Runtime checks are irrelevant to this problem.\n\n> Regarding the \"isExplicit\": I think a closer match would be an output\n> argument \"PrecisionInfo\" enum(ok, precision loss, [conversion failed\n> ?]).\n\nI'm not planning to add output arguments to fix this problem ;-)\n\n> For numbers there is probably only the solution to invent an\n> \"anynumber\" generic type.\n\nActually, I had been toying with the notion of doing the following:\n\n1. A numeric literal is initially typed as the smallest type that will\nhold it in the series int2, int4, int8, numeric (notice NOT float8).\n\n2. Allow implicit up-coercion int2->int4->int8->numeric->float4->float8,\nbut down-coercions aren't implicit except for assignment.\n\n3. Eliminate most or all of the cross-numeric-type operators (eg, there\nis no reason to support int2+int4 as a separate operator).\n\nWith this approach, an expression like \"int4var = 42\" would be initially\ntyped as int4 and int2, but then the constant would be coerced to int4\nbecause int4=int4 is the closest-match operator. 
(int2=int2 would not\nbe considered because down-coercion isn't implicitly invokable.) Also\nwe get more nearly SQL-standard behavior in expressions that combine\nnumeric with float4/float8: the preferred type will be float, which\naccords with the spec's notions of exact numeric vs. approximate numeric.\n\nI think this solves most or all of our problems with poor type choices\nfor numeric literals, but it still needs thought --- I'm not suggesting\nwe shoehorn it into 7.3. In particular, I'm not sure whether the\ncurrent preferred-type arrangement (namely, numeric and float8 are both\npreferred types for the numeric category) would need to change.\n\n> There are also tons of \"unsigned char vs signed char\" warnings in\n> current mb sources :-( \n\nYeah, I know :-( ... I see 'em too when using HPUX' vendor compiler.\nWe ought to clean that up someday.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 10:01:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > I think the following three states may enable a closer match to an actually \n> > desired (Peter said mandated by SQL99) behavior.\n> \n> > 1. okay as implicit cast in expression or assignment\n> > 2. okay as implicit cast in expression or assignment but needs runtime check\n> > \t(precision loss possible)\n> > 3. okay only as explicit cast (precision loss possible)\n> \n> The runtime checks are there already, eg\n> \n> regression=# select 123456789::int4::int2;\n> ERROR: i4toi2: '123456789' causes int2 overflow\n> \n> However this does not help us much; the critical point is that if we\n> want function overloading to work in a sane fashion, we have to prefer\n> up-conversions to down-conversions *at parse time*, at least for the\n> operands of functions and operators (which is what I meant by \"in\n> expressions\"). Runtime checks are irrelevant to this problem.\n\nI think there is some confusion here. The runtime checks Andreas was\ntalking about was allowing a double of 64.0 to cast to an int4 while\ndisallowing 64.1 from being cast to an int4 because it is not a hole\nnumber. \n\nI am not sure doubles have enough precision to make such comparisons\nfunctional (NUMERIC certainly does) but that was his proposal, and he\nstated he thought the standard required it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 02:30:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think there is some confusion here. The runtime checks Andreas was\n> talking about was allowing a double of 64.0 to cast to an int4 while\n> disallowing 64.1 from being cast to an int4 because it is not a hole\n> number. \n\n> I am not sure doubles have enough precision to make such comparisons\n> functional (NUMERIC certainly does) but that was his proposal, and he\n> stated he thought the standard required it.\n\nIt seems clear to me that the standard requires us NOT to reject that.\n\nIn the explicit-cast case, SQL92 6.10 <cast specification> saith:\n\n 3) If TD is exact numeric, then\n\n Case:\n\n a) If SD is exact numeric or approximate numeric, then\n\n Case:\n\n i) If there is a representation of SV in the data type TD\n that does not lose any leading significant digits after\n rounding or truncating if necessary, then TV is that rep-\n resentation. The choice of whether to round or truncate is\n implementation-defined.\n\n ii) Otherwise, an exception condition is raised: data exception-\n numeric value out of range.\n\nSo we are *only* allowed to throw an error for overflow; having to round\nis not an error condition.\n\nIn the implicit-cast case, section 9.2 Store assignment has\n\n k) If the data type of T is numeric and there is an approxi-\n mation obtained by rounding or truncation of the numerical\n value of V for the data type of T, then the value of T is set\n to such an approximation.\n\n If there is no such approximation, then an exception condi-\n tion is raised: data exception-numeric value out of range.\n\n If the data type of T is exact numeric, then it is implementation-\n defined whether the approximation is obtained by rounding or\n by truncation.\n\nwhich is different wording but seems to boil down to the same thing: the\nonly error condition is out-of-range.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 02:36:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> PS: pg snapshot 09/11 does not compile on AIX (large files (don't want\n> _LARGE_FILES),\n\nPlease provide details.\n\n> and mb conversions (pg_ascii2mic and pg_mic2ascii not\n> found in the postmaster and not included from elsewhere)\n\nAnd details here as well.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 18 Sep 2002 22:11:08 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "AIX compilation problems (was Re: Proposal ...)"
}
] |
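The SQL92 rule Tom quotes in the thread above (6.10 <cast specification>: casting to an exact numeric type must round or truncate, and may only raise an error on overflow) can be sketched in Python. This is an illustrative model, not PostgreSQL's actual coercion code; the function name and the round-half-away-from-zero choice are mine (SQL92 leaves round-vs-truncate implementation-defined).

```python
import math

# Illustrative model of the SQL92 <cast specification> rule quoted in
# the thread: 64.1 -> int4 must round (not error); only out-of-range
# values raise "numeric value out of range".
INT4_MIN, INT4_MAX = -2**31, 2**31 - 1

def cast_to_int4(value: float) -> int:
    # Round half away from zero; one of the implementation-defined
    # choices SQL92 permits.
    rounded = math.floor(value + 0.5) if value >= 0 else math.ceil(value - 0.5)
    if not INT4_MIN <= rounded <= INT4_MAX:
        raise OverflowError("numeric value out of range")
    return int(rounded)
```

So `cast_to_int4(64.1)` quietly yields 64, while `cast_to_int4(1e12)` raises, mirroring the "only error condition is out-of-range" reading of both 6.10 and 9.2.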
[
{
"msg_contents": "Can I be removed from this distribution?\n\nThanks,\n Enzo\n\nEnzo Del Mistro * \n33 Yonge St, Suite 500, * Toronto, ON\nPhone: 001 416 814 1741\nE-Mail: enzo.delmistro@eds.com\n> Business Acceleration Services TOR, \n> Solutions Consulting\n> \n> \n",
"msg_date": "Mon, 16 Sep 2002 12:10:53 -0400",
"msg_from": "\"Del Mistro, Enzo\" <enzo.delmistro@eds.com>",
"msg_from_op": true,
"msg_subject": "removal"
}
] |
[
{
"msg_contents": "I'm speaking about directional marks that are ignored - for instance -\nby Microsoft SQL 7.0 because they're useless in that position (like when\nthey're in one-way text, either left-to-right or right-to-left). It may\nhappen that these kinds of symbols are randomly inserted: for example...\n\nThe user types an English text like \"test\". At the end he switches\nthe keyboard layout to Arabic and types something Arabic, but he realizes\nhe doesn't want to do that and erases the Arabic text, switches the\nkeyboard again and inserts English text after \"test\". Some directional marks are\ninserted but they're useless.\n\nThe problem is that sometimes the directional mark is inside a word, not\njust at the end, and after all, if you try to index using txt2txtidx,\ndirectional marks are not recognized as delimiters (and they aren't), so\nthe txtidx array will contain the nearby word with an appended directional\nmark.\n\nMaybe you can say that the source I've exported the db from is a malformed\none, and you are absolutely right. Anyway, I know that some programs (especially\nMicrosoft's) make this mistake. I'm not speaking of PHP.\n\nBye.\n\n>-- Original Message --\n>Date: Mon, 16 Sep 2002 19:25:30 +0200 (CEST)\n>From: Peter Eisentraut <peter_e@gmx.net>\n>To: nngodinh@tiscali.it\n>cc: pgsql-hackers@postgresql.org\n>Subject: Re: [HACKERS] directional marks\n>\n>\n>nngodinh@tiscali.it writes:\n>\n>> The only way to fix the problem is to remove any directional mark occurrence,\n>> or to make PostgreSQL ignore that kind of characters during UNICODE queries.\n>>\n>> What do you think about it?\n>\n>Either remove the directional marks or consistently use them in all your\n>queries (or use wildcards to paint over the difference). 
The directional\n>mark characters aren't just for amusement -- they contain real information\n>so they cannot be ignored.\n>\n>--\n>Peter Eisentraut peter_e@gmx.net\n>\n>\n",
"msg_date": "Mon, 16 Sep 2002 19:41:58 +0200",
"msg_from": "nngodinh@tiscali.it",
"msg_from_op": true,
"msg_subject": "Re: directional marks"
},
{
"msg_contents": "nngodinh@tiscali.it writes:\n\n> I'm speaking about directional marks that are ignored by - for instance\n> - by Microsoft SQL 7.0 because they're unuseful in that position (like when\n> they're in a one way text either left-to-right or right-to-left). It may\n> happen that this kind of symbols are randomly inserted: for example...\n\nTo me this sounds analogous to inserting tons of <space><backspace>\nsequences into a string and expecting the software to automatically figure\nout that they cancel. It would be possible, but it would probably add a\nlot of overhead and it doesn't seem to be requested a lot. The best\nsolution is probably to fix your data. Unless you can point to a Unicode\nstandard that states that such cancellation should happen.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 17 Sep 2002 01:08:35 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: directional marks"
}
] |
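For readers hitting the problem in this thread: the stray characters are Unicode directional formatting codepoints such as U+200E (LEFT-TO-RIGHT MARK) and U+200F (RIGHT-TO-LEFT MARK). Peter's advice to "fix your data" amounts to stripping them before indexing; a minimal Python sketch (the helper name and the codepoint list are mine, and this is not part of PostgreSQL or txt2txtidx):

```python
# Strip Unicode directional formatting characters before feeding text
# to an indexer, so words don't end up with appended bidi marks.
# Data-cleaning sketch only, not PostgreSQL code.
BIDI_CONTROLS = {
    "\u200e",  # LEFT-TO-RIGHT MARK
    "\u200f",  # RIGHT-TO-LEFT MARK
    "\u202a",  # LEFT-TO-RIGHT EMBEDDING
    "\u202b",  # RIGHT-TO-LEFT EMBEDDING
    "\u202c",  # POP DIRECTIONAL FORMATTING
    "\u202d",  # LEFT-TO-RIGHT OVERRIDE
    "\u202e",  # RIGHT-TO-LEFT OVERRIDE
}

def strip_bidi(text: str) -> str:
    return "".join(ch for ch in text if ch not in BIDI_CONTROLS)
```

Running this over the data once (or on the way into the index) removes the marks that Microsoft tools insert mid-word, without asking the database to silently ignore them in comparisons.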
[
{
"msg_contents": "In CVS tip:\n\nregression=# create domain nnint int not null;\nCREATE DOMAIN\nregression=# create table foo (f1 nnint);\nCREATE TABLE\nregression=# insert into foo values(null);\nERROR: Domain nnint does not allow NULL values\t\t-- okay\nregression=# \\copy foo from stdin\n123\n\\N\n\\.\nregression=# select * from foo;\n f1\n-----\n 123\n\t\t\t\t\t\t\t-- not okay\n(2 rows)\n\nregression=# create domain vc4 varchar(4);\nCREATE DOMAIN\nregression=# create table foot (f1 vc4);\nCREATE TABLE\nregression=# \\copy foot from stdin\n1234567890\n\\.\nregression=# select * from foot;\n f1\n------------\n 1234567890\t\t\t\t\t\t-- not okay\n(1 row)\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Sep 2002 17:54:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bug: COPY IN doesn't test domain constraints"
},
{
"msg_contents": "On Mon, 2002-09-16 at 17:54, Tom Lane wrote:\n> In CVS tip:\n> \n> regression=# create domain nnint int not null;\n> CREATE DOMAIN\n\nOk, I'll take a look at this today.\n\nThanks\n\n\n-- \n Rod Taylor\n\n",
"msg_date": "17 Sep 2002 08:24:03 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Bug: COPY IN doesn't test domain constraints"
},
{
"msg_contents": "Fixed this problem and added regression tests in domain.sql.\n\nAlso:\n- Changed header file order (alphabetical)\n- Changed to m = attnum - 1 in binary copy code for consistency\n\nOn Mon, 2002-09-16 at 17:54, Tom Lane wrote:\n> In CVS tip:\n> \n> regression=# create domain nnint int not null;\n> CREATE DOMAIN\n> regression=# create table foo (f1 nnint);\n> CREATE TABLE\n> regression=# insert into foo values(null);\n> ERROR: Domain nnint does not allow NULL values\t\t-- okay\n> regression=# \\copy foo from stdin\n> 123\n> \\N\n> \\.\n> regression=# select * from foo;\n> f1\n> -----\n> 123\n> \t\t\t\t\t\t\t-- not okay\n> (2 rows)\n> \n> regression=# create domain vc4 varchar(4);\n> CREATE DOMAIN\n> regression=# create table foot (f1 vc4);\n> CREATE TABLE\n> regression=# \\copy foot from stdin\n> 1234567890\n> \\.\n> regression=# select * from foot;\n> f1\n> ------------\n> 1234567890\t\t\t\t\t\t-- not okay\n> (1 row)\n> \n> \n> \t\t\tregards, tom lane\n> \n-- \n Rod Taylor",
"msg_date": "17 Sep 2002 11:19:32 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Bug: COPY IN doesn't test domain constraints"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nRod Taylor wrote:\n> Fixed this problem and added regression tests in domain.sql.\n> \n> Also:\n> - Changed header file order (alphabetical)\n> - Changed to m = attnum - 1 in binary copy code for consistency\n> \n> On Mon, 2002-09-16 at 17:54, Tom Lane wrote:\n> > In CVS tip:\n> > \n> > regression=# create domain nnint int not null;\n> > CREATE DOMAIN\n> > regression=# create table foo (f1 nnint);\n> > CREATE TABLE\n> > regression=# insert into foo values(null);\n> > ERROR: Domain nnint does not allow NULL values\t\t-- okay\n> > regression=# \\copy foo from stdin\n> > 123\n> > \\N\n> > \\.\n> > regression=# select * from foo;\n> > f1\n> > -----\n> > 123\n> > \t\t\t\t\t\t\t-- not okay\n> > (2 rows)\n> > \n> > regression=# create domain vc4 varchar(4);\n> > CREATE DOMAIN\n> > regression=# create table foot (f1 vc4);\n> > CREATE TABLE\n> > regression=# \\copy foot from stdin\n> > 1234567890\n> > \\.\n> > regression=# select * from foot;\n> > f1\n> > ------------\n> > 1234567890\t\t\t\t\t\t-- not okay\n> > (1 row)\n> > \n> > \n> > \t\t\tregards, tom lane\n> > \n> -- \n> Rod Taylor\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 00:57:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: COPY IN doesn't test domain constraints"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nRod Taylor wrote:\n> Fixed this problem and added regression tests in domain.sql.\n> \n> Also:\n> - Changed header file order (alphabetical)\n> - Changed to m = attnum - 1 in binary copy code for consistency\n> \n> On Mon, 2002-09-16 at 17:54, Tom Lane wrote:\n> > In CVS tip:\n> > \n> > regression=# create domain nnint int not null;\n> > CREATE DOMAIN\n> > regression=# create table foo (f1 nnint);\n> > CREATE TABLE\n> > regression=# insert into foo values(null);\n> > ERROR: Domain nnint does not allow NULL values\t\t-- okay\n> > regression=# \\copy foo from stdin\n> > 123\n> > \\N\n> > \\.\n> > regression=# select * from foo;\n> > f1\n> > -----\n> > 123\n> > \t\t\t\t\t\t\t-- not okay\n> > (2 rows)\n> > \n> > regression=# create domain vc4 varchar(4);\n> > CREATE DOMAIN\n> > regression=# create table foot (f1 vc4);\n> > CREATE TABLE\n> > regression=# \\copy foot from stdin\n> > 1234567890\n> > \\.\n> > regression=# select * from foot;\n> > f1\n> > ------------\n> > 1234567890\t\t\t\t\t\t-- not okay\n> > (1 row)\n> > \n> > \n> > \t\t\tregards, tom lane\n> > \n> -- \n> Rod Taylor\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 23:48:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: COPY IN doesn't test domain constraints"
}
] |
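The bug in this thread is that domain constraints were checked on the INSERT path but skipped on the COPY path; Rod's fix applies the same coercion on both. A toy Python model of that invariant (the helper names are hypothetical and this is not the actual copy.c change):

```python
# Toy model of the fix: a domain's NOT NULL and length constraints must
# run on every input path (INSERT and COPY alike), not just on INSERT.
def check_domain(value, not_null=False, maxlen=None):
    """Apply a domain's NOT NULL and length constraints to one value."""
    if value is None:
        if not_null:
            raise ValueError("domain does not allow NULL values")
        return None
    if maxlen is not None and len(str(value)) > maxlen:
        raise ValueError("value too long for domain")
    return value

def insert_value(value, **constraints):   # INSERT path: always checked
    return check_domain(value, **constraints)

def copy_value(value, **constraints):     # fixed COPY path: same check
    return check_domain(value, **constraints)
```

With the check shared, the `\N` row for the `nnint` domain and the ten-character value for `vc4` in Tom's report are both rejected during COPY, matching the INSERT behavior.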
[
{
"msg_contents": "I've been thinking about the 'attlognum' idea of a logical column number in\npg_attribute.\n\nBack when I think Tom said 'we should do it now or never', I didn't think\ntoo hard about it because I didn't realise the problem it would solve with\nchanging column type, and also how easy it would be to do.\n\nIt occurs to me now that it would be absolutely trivial to implement, as\nopposed to being lots of work as I previously thought. For 7.3, all we\nwould have had to do is add the column and then just make it equal attnum at\ncolumn creation time. No other action is required except to tell ppl who\nmake admin software to ORDER BY attlognum rather than attnum.\n\nThen, when we actually implement changing column type (a very useful\nfeature) we could start working with attlognums then and make sure '*'\nexpands in attlognum order rather than attnum order. *sigh*\n\nSo now, I think we'll need to add attlognum in 7.4 and tell admin writers to\nchange their ORDER BY clauses...\n\nChris\n\n",
"msg_date": "Tue, 17 Sep 2002 12:03:12 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "attlognum"
}
] |
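The proposal above separates a column's physical position from its display position: `attnum` stays the storage/creation order, while a new `attlognum` drives `'*'` expansion and admin-tool ordering. A small sketch of that idea (`attlognum` is the proposal's name; the column was never actually added to pg_attribute, and the sample data here is invented):

```python
# Sketch of logical vs. physical column numbering as proposed:
# attnum is the physical position, attlognum the logical one.
# Re-typing a column could give it a new attnum while keeping
# its old attlognum, so '*' still expands in the original order.
columns = [
    {"attname": "id",    "attnum": 1, "attlognum": 1},
    {"attname": "added", "attnum": 3, "attlognum": 2},  # re-created column, new attnum
    {"attname": "note",  "attnum": 2, "attlognum": 3},
]

def expand_star(cols):
    """Expand '*' in attlognum order - the ORDER BY admin tools would use."""
    return [c["attname"] for c in sorted(cols, key=lambda c: c["attlognum"])]
```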
[
{
"msg_contents": "\n> What I will do instead is adjust parse_coerce.c so that a\n> length-coercion function can have either of the signatures\n> \tfoo(foo,int4) returns foo\n> or\n> \tfoo(foo,int4,bool) returns foo\n> and then modify the above-mentioned length coercion functions to provide\n> the desired behavior. This has no direct impact on pg_cast because we\n> do not use pg_cast for length-coercion functions.\n\nSounds good to me. \n\nWhen those are really truncated, ESQL/C needs to set a warning in sqlca.sqlwarn,\nthough, so I think the second signature should also have an output flag to tell \nwhether truncation actually occurred.\nMaybe this should be kept for a protocol change though, since I would not think\na NOTICE would be suitable here. \n\nAndreas\n",
"msg_date": "Tue, 17 Sep 2002 09:33:34 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> When those are really truncated ESQL/C needs to set a warning in sqlca.sqlwarn\n> though, thus I think the second signature should also have an output flag to tell \n> whether truncation actually occurred.\n> Maybe this should be kept for a protocol change though, since I would not think\n> a NOTICE would be suitable here. \n\nAgain, I don't want to invent output arguments for functions today ;-).\n\nI agree that a NOTICE would be overkill, and that we need a protocol\nchange to implement completion conditions (sqlca.sqlwarn) properly.\nWhen that happens, I think the explicit-cast paths in the coercion\nroutines can easily call the \"set a completion condition\" routine for\nthemselves; I see no reason to pass back the condition one level\nbefore doing so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 09:21:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
}
] |
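The signature discussed above, `foo(foo, int4, bool)`, lets one length-coercion function behave differently for explicit casts (truncate silently) and implicit assignment. A hedged Python analogue of a `varchar(n)` coercer with that third argument (the real functions are C-language catalog entries; the error-on-implicit behavior shown here is one plausible policy, not a quote from the patch):

```python
# Python analogue of the proposed three-argument length-coercion
# signature foo(foo, int4, bool): the bool says whether the coercion
# came from an explicit CAST.
def varchar_coerce(value: str, typmod: int, is_explicit: bool) -> str:
    if len(value) <= typmod:
        return value
    if is_explicit:
        # Explicit CAST(x AS varchar(n)): truncate silently. Per the
        # thread, a future protocol change could also set a completion
        # condition (sqlca.sqlwarn) at this point.
        return value[:typmod]
    raise ValueError("value too long for type character varying(%d)" % typmod)
```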
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 17 September 2002 06:36\n> To: Christopher Kings-Lynne\n> Cc: Robert Treat; Justin Clift; Peter Eisentraut; Tom Lane; \n> Curt Sampson; PostgreSQL Hackers Mailing List\n> Subject: Re: [HACKERS] PGXLOG variable worthwhile?\n> \n>\n> Well, let's see if we ever run on native NT4.X and we can \n> decide then. \n> Actually, don't our Cygnus folks have a problem with moving \n> pg_xlog already?\n\nNo, because Cygwin knows about shell links.\n\nWhilst I'm here, I'll chuck my $0.02 in:\n\nI use PostgreSQL on Linux for production and XP for development, and am\nlikely to continue that way. I've been beta testing the native Win32\nport of PostgreSQL as Justin has, and the latest version is fantastic -\nit runs as a service, osdb shows impressive results compared to Cygwin\nPostgreSQL on the same system, and it's a breeze to install, despite\nthere being no installer yet.\n\nWhat I can't understand is the attitude of some people here. Yes,\nMicrosoft are evil, but the bottom line is, millions of people use\nWindows. Just look at the number of downloads for pgAdmin (shown at\nhttp://www.pgadmin.org/downloads/) - the last stable version has clocked\nup over 38,000 downloads, the preview I released just a couple of weeks\nago, 2230 at the time of writing. I know from talking to some of the\nusers that often people download copies for themselves and their\ncolleagues, so we can probably assume there are actually 40,000+\nPostgreSQL users that use Windows regularly enough to want pgAdmin. 
What\nhappens if you add in the pgAccess/Windows users, Tora, or pgExplorer?\nHow many of these people would want to run PostgreSQL on Windows as\nwell?\n\nWhat about the companies out there that have good sysadmins who want to\nuse PostgreSQL, but manglement that insist on using Windows?\n\nWhat about situations where a single server is running SQL Server and\nother software (such as a middle tier server - as I have on one box\nhere), and that other software cannot be changed, but SQL could?\n\nI think that ignoring the huge number of people that use Windows because\nsome of us consider it a Mickey Mouse OS is a particularly bad strategy\nif we want to expand our userbase. Windows is not going anywhere soon,\nand like it or not, it *is* getting better and better. Our Windows 2000\n(and our Beta3/RC1 .Net test Servers) are rock solid and haven't been\nrebooted in months - we get more hardware faults these days, and those\ncan occur on our Linux or HP-UX boxes just as easily.\n\nAnyway, enough of my rant :-)\n\nRegards, Dave.\n\n",
"msg_date": "Tue, 17 Sep 2002 08:33:35 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "> I use PostgreSQL on Linux for production and XP for development, and am\n> likely to continue that way. I've been beta testing the native Win32\n> port of PostgreSQL as Justin has and the latest version is fantastic -\n> it runs as a service, osdb shows impressive results compared to Cygwin\n> PostgreSQL on the same system and it's a breeze to install, despite\n> there being no installer yet.\n\nFrom where do we get this fabled Win32 port?\n\nChris\n\n",
"msg_date": "Tue, 17 Sep 2002 16:04:38 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "> What I can't understand is the attitude of some people here. Yes,\n> Microsoft are evil, but the bottom line is, millions of people use\n> Windows. Just look at the number of downloads for pgAdmin (shown at\n> http://www.pgadmin.org/downloads/) - the last stable version has clocked\n> up over 38,000 downloads, the preview I released just a couple of weeks\n> ago, 2230 at the time of writing. I know from talking to some of the\n> users that often people download copies for themselves and their\n> colleagues, so we can probably assume there are actually 40,000+\n> PostgreSQL users that use Windows reguarly enough to want pgAdmin. What\n> happens if you add in the pgAccess/Windows users, Tora, or pgExplorer?\n> How many of these people would want to run PostgreSQL on Windows as\n> well?\n\nI actually think that the long-term survival of Postgres DEPENDS on our\nWin32 support. Otherwise, we'll just get massacred by MySQL, MSSQL, Oracle\nand Firebird who do support Win32.\n\nUsers of Postgres are our lifeblood. The more users we have the more\ndevelopers we get, the more testing we get and the more likely we are to get\nmoney, corporate support, etc. Our ODBC driver will also be improved.\n\nChris\n\n",
"msg_date": "Tue, 17 Sep 2002 16:11:47 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On 17 Sep 2002 at 16:11, Christopher Kings-Lynne wrote:\n\n> > What I can't understand is the attitude of some people here. Yes,\n> > Microsoft are evil, but the bottom line is, millions of people use\n> > Windows. Just look at the number of downloads for pgAdmin (shown at\n> > http://www.pgadmin.org/downloads/) - the last stable version has clocked\n> > up over 38,000 downloads, the preview I released just a couple of weeks\n> > ago, 2230 at the time of writing. I know from talking to some of the\n> > users that often people download copies for themselves and their\n> > colleagues, so we can probably assume there are actually 40,000+\n> > PostgreSQL users that use Windows reguarly enough to want pgAdmin. What\n> > happens if you add in the pgAccess/Windows users, Tora, or pgExplorer?\n> > How many of these people would want to run PostgreSQL on Windows as\n> > well?\n> I actually think that the long-term survival of Postgres DEPENDS on our\n> Win32 support. Otherwise, we'll just get massacred by MySQL, MSSQL, Oracle\n> and Firebird who do support Win32.\n\nLet's move this to general.\n\nBut I disagree. History says that nobody can compete with Microsoft on \nthe Microsoft platform. Postgres will not be competing with either SQL Server or \nAccess. It would remain a toy database.\n\nAs far as people using mysql on windows goes, I have a couple of colleagues here who \ngot things crawling under some heavy load, something like a 60GB database on \n512MB Compaq workstations.\n\nLet's leave it. The main reason to focus postgres on unix is not only that \nunix is proven to be robust and scalable, but that unix is a much more standard \ntarget across multiple OSes. Given how much windows differs from the unices \nat the API level, any serious effort to make postgresql good enough on windows \nwould be a mammoth task.\n\nI haven't tried either port of postgres on windows, but I would not bet on any \nof them.\n\n> Users of Postgres are our lifeblood. 
The more users we have the more\n\nI agree, but even as of now, not even 1% of users come to any of the postgres lists, \nby my estimate.\n\nSo if users are not providing their feedback, what's the point in open source? \n(Actually all those people do help postgres by publicising it, but still, \nfeedback remains an important phase of open source software engineering.)\n\n> developers we get, the more testing we get and the more likely we are to get\n> money, corporate support, etc. Our ODBC driver will also be improved.\n\nI agree about ODBC, but that can be done without giving much to the postgresql windows \nport as well.\n\nI understand the windows port of postgresql remains very important for people \nwho want to evaluate it. But for a good evaluation, I would rather recommend \nthat they try postgresql on linux rather than windows.\n\nThere are limits to what postgresql can do on windows, and the postgresql \ndevelopment team probably can't do much about many of them.\n\nNo offense to anybody.. just some opinions..\n\nBye\n Shridhar\n\n--\nAlbrecht's Law:\tSocial innovations tend to the level of minimum tolerable well-\nbeing.\n\n",
"msg_date": "Tue, 17 Sep 2002 14:00:15 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "> Let's leave it. The main point to focus postgres on unix is not\n> only because\n> unix is proven/known as robust and scalable, but unix is much\n> more standard to\n> support across multiple OS. The amount with which windows differs\n> from unices\n> on API level, any serious efforts to make postgresql good enough\n> on windows\n> whould be a mammoth task.\n\nIt's already been done - that's the whole point.\n\n> So if users are not providing their feedback, what's the point in\n> open source?\n\nUsers HAVE provided their feedback - they want Postgres on Windows. What's\nthe point of open source if we can't accommodate them? There are no problems\nwith economics, marketing, schedules, deadlines, nothing. The reason\npeople like Open Source is that they don't have to deal with some\nmonolithic company refusing to port to their platform just because it's \"too\nhard\".\n\nChris\n\n",
"msg_date": "Tue, 17 Sep 2002 16:49:22 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
}
] |
[
{
"msg_contents": "\n\nHello guys,\n\nI still have problems with dumping my database....\n\nI have postgres 7.2.1 running on a solaris 8 server. When I try to do a \npg_dump of my database, I get the following message:\npg_dump: query to obtain list of tables failed: server closed the \nconnection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nI connect to the database and try to view the tables with \\dt and \\dS. \nNow I get:\nERROR: AllocSetFree: cannot find block containing chunk 4c5ad0\nI retry:\nERROR: AllocSetFree: cannot find block containing chunk 4860d0\n\nI can view the tables with: \\d tablename\n\nSome people suggest a drive failure, but I checked that and found no \nproblems...\nI REINDEXED the whole database... problem still the same...\nTried a VACUUM... still not working...\n\n\nI must say that one of the tables contains more than 3.000.000 rows, \nanother more than 1.400.000...\nSelect, update, delete, and insert all work; it's just pg_dump(all) and the \\dt and \n\\dS commands that fail...\nI must say that I had this problem a few months ago; I got some help \nthen, but that couldn't solve my problem, so\nI recreated the database from scratch and copied the data, to fix things \nquickly. 
Thing went well for about two months :-(\nNow the problem raises again, and I'm trying to find a solution without \nreinstalling the whole thing allover again.\n\n\nSome advice/help from the specialists?\n\nCheers!\n\nWim.\n\nSome info from the debug logfile:\n---------------------------------\n\nDEBUG: StartTransactionCommand\nDEBUG: query: SELECT c.relname as \"Name\",\n CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' \nTHEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as \"Type\",\n u.usename as \"Owner\"\nFROM pg_class c LEFT JOIN pg_user u ON c.relowner = u.usesysid\nWHERE c.relkind IN ('r','')\n AND c.relname !~ '^pg_'\nORDER BY 1;\nDEBUG: parse tree: { QUERY :command 1 :utility <> :resultRelation 0 \n:into <> :isPortal false :isBinary false :isTemp false :hasAggs false \n:hasSubLinks false :rtable ({ RTE :relname pg_class :relid 1259 \n:subquery <> :alias { ATTR :relname c :attrs <>} :eref { ATTR :relname c \n:attrs ( \"relname\" \"reltype\" \"relowner\" \"relam\" \"relfilenode\" \n\"relpages\" \"reltuples\" \"reltoastrelid\" \"reltoastidxid\" \n\"relhasindex\" \"relisshared\" \"relkind\" \"relnatts\" \"relchecks\" \n\"reltriggers\" \"relukeys\" \"relfkeys\" \"relrefs\" \"relhasoids\" \n\"relhaspkey\" \"relhasrules\" \"relhassubclass\" \"relacl\" )} :inh true \n:inFromCl true :checkForRead true :checkForWrite false :checkAsUser 0} { \nRTE :relname pg_user :relid 16478 :subquery <> :alias { ATTR :relname u \n:attrs <>} :eref { ATTR :relname u :attrs ( \"usename\" \"usesysid\" \n\"usecreatedb\" \"usetrace\" \"usesuper\" \"usecatupd\" \"passwd\" \n\"valuntil\" )} :inh true :inFromCl true :checkForRead true :checkForWrite \nfalse :checkAsUser 0}) :jointree { FROMEXPR :fromlist ({ JOINEXPR \n:jointype 1 :isNatural false :larg { RANGETBLREF 1 } :rarg { RANGETBLREF \n2 } :using <> :quals { EXPR :typeOid 16 :opType op :oper { OPER :opno \n96 :opid 0 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 3 :vartype \n23 :vartypmod -1 
:varlevelsup 0 :varnoold 1 :varoattno 3} { VAR :varno \n2 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 \n:varoattno 2})} :alias <> :colnames ( \"relname\" \"reltype\" \n\"relowner\" \"relam\" \"relfilenode\" \"relpages\" \"reltuples\" \n\"reltoastrelid\" \"reltoastidxid\" \"relhasindex\" \"relisshared\" \n\"relkind\" \"relnatts\" \"relchecks\" \"reltriggers\" \"relukeys\" \n\"relfkeys\" \"relrefs\" \"relhasoids\" \"relhaspkey\" \"relhasrules\" \n\"relhassubclass\" \"relacl\" \"usename\" \"usesysid\" \"usecreatedb\" \n\"usetrace\" \"usesuper\" \"usecatupd\" \"passwd\" \"valuntil\" ) :colvars \n({ VAR :varno 1 :varattno 1 :vartype 19 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 1} { VAR :varno 1 :varattno 2 :vartype 26 \n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 2} { VAR :varno 1 \n:varattno 3 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 3} { VAR :varno 1 :varattno 4 :vartype 26 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 4} { VAR :varno 1 :varattno 5 \n:vartype 26 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 5} { \nVAR :varno 1 :varattno 6 :vartype 23 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 6} { VAR :varno 1 :varattno 7 :vartype 700 \n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 7} { VAR :varno 1 \n:varattno 8 :vartype 26 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 8} { VAR :varno 1 :varattno 9 :vartype 26 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 9} { VAR :varno 1 :varattno 10 \n:vartype 16 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 10} { \nVAR :varno 1 :varattno 11 :vartype 16 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 11} { VAR :varno 1 :varattno 12 :vartype 18 \n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 12} { VAR :varno 1 \n:varattno 13 :vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 13} { VAR :varno 1 :varattno 14 :vartype 21 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 14} { VAR :varno 
1 :varattno 15 \n:vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 15} { \nVAR :varno 1 :varattno 16 :vartype 21 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 16} { VAR :varno 1 :varattno 17 :vartype 21 \n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 17} { VAR :varno 1 \n:varattno 18 :vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 18} { VAR :varno 1 :varattno 19 :vartype 16 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 19} { VAR :varno 1 :varattno 20 \n:vartype 16 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 20} { \nVAR :varno 1 :varattno 21 :vartype 16 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 21} { VAR :varno 1 :varattno 22 :vartype 16 \n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 22} { VAR :varno 1 \n:varattno 23 :vartype 1034 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 23} { VAR :varno 2 :varattno 1 :vartype 19 :vartypmod -1 \n:varlevelsup 0 :varnoold 2 :varoattno 1} { VAR :varno 2 :varattno 2 \n:vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 2} { \nVAR :varno 2 :varattno 3 :vartype 16 :vartypmod -1 :varlevelsup 0 \n:varnoold 2 :varoattno 3} { VAR :varno 2 :varattno 4 :vartype 16 \n:vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 4} { VAR :varno 2 \n:varattno 5 :vartype 16 :vartypmod -1 :varlevelsup 0 :varnoold 2 \n:varoattno 5} { VAR :varno 2 :varattno 6 :vartype 16 :vartypmod -1 \n:varlevelsup 0 :varnoold 2 :varoattno 6} { VAR :varno 2 :varattno 7 \n:vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 7} { \nVAR :varno 2 :varattno 8 :vartype 702 :vartypmod -1 :varlevelsup 0 \n:varnoold 2 :varoattno 8})}) :quals { EXPR :typeOid 16 :opType and \n:oper <> :args ({ EXPR :typeOid 16 :opType or :oper <> :args ({ EXPR \n:typeOid 16 :opType op :oper { OPER :opno 92 :opid 0 :opresulttype 16 } \n:args ({ VAR :varno 1 :varattno 12 :vartype 18 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 12} { CONST :consttype 18 \n:constlen 1 :constbyval true 
:constisnull false :constvalue 1 [ 0 0 0 \n114 ] })} { EXPR :typeOid 16 :opType op :oper { OPER :opno 92 :opid 0 \n:opresulttype 16 } :args ({ VAR :varno 1 :varattno 12 :vartype 18 \n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 12} { CONST \n:consttype 18 :constlen 1 :constbyval true :constisnull false \n:constvalue 1 [ 0 0 0 0 ] })})} { EXPR :typeOid 16 :opType op :oper { \nOPER :opno 640 :opid 0 :opresulttype 16 } :args ({ VAR :varno 1 \n:varattno 1 :vartype 19 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 1} { CONST :consttype 25 :constlen -1 :constbyval false \n:constisnull false :constvalue 8 [ 0 0 0 8 94 112 103 95 ] })})}} \n:rowMarks () :targetList ({ TARGETENTRY :resdom { RESDOM :resno 1 \n:restype 19 :restypmod -1 :resname Name :reskey 0 :reskeyop 0 \n:ressortgroupref 1 :resjunk false } :expr { VAR :varno 1 :varattno 1 \n:vartype 19 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { \nTARGETENTRY :resdom { RESDOM :resno 2 :restype 25 :restypmod -1 :resname \nType :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { \nCASE :casetype 25 :arg <> :args ({ WHEN { EXPR :typeOid 16 :opType op \n:oper { OPER :opno 92 :opid 0 :opresulttype 16 } :args ({ VAR :varno 1 \n:varattno 12 :vartype 18 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 12} { CONST :consttype 18 :constlen 1 :constbyval true \n:constisnull false :constvalue 1 [ 0 0 0 114 ] })} :then { CONST \n:consttype 25 :constlen -1 :constbyval false :constisnull false \n:constvalue 9 [ 0 0 0 9 116 97 98 108 101 ] }} { WHEN { EXPR :typeOid \n16 :opType op :oper { OPER :opno 92 :opid 0 :opresulttype 16 } :args ({ \nVAR :varno 1 :varattno 12 :vartype 18 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 12} { CONST :consttype 18 :constlen 1 :constbyval \ntrue :constisnull false :constvalue 1 [ 0 0 0 118 ] })} :then { CONST \n:consttype 25 :constlen -1 :constbyval false :constisnull false \n:constvalue 8 [ 0 0 0 8 118 105 101 119 ] }} { WHEN { EXPR :typeOid 16 
\n:opType op :oper { OPER :opno 92 :opid 0 :opresulttype 16 } :args ({ VAR \n:varno 1 :varattno 12 :vartype 18 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 12} { CONST :consttype 18 :constlen 1 :constbyval \ntrue :constisnull false :constvalue 1 [ 0 0 0 105 ] })} :then { CONST \n:consttype 25 :constlen -1 :constbyval false :constisnull false \n:constvalue 9 [ 0 0 0 9 105 110 100 101 120 ] }} { WHEN { EXPR :typeOid \n16 :opType op :oper { OPER :opno 92 :opid 0 :opresulttype 16 } :args ({ \nVAR :varno 1 :varattno 12 :vartype 18 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 12} { CONST :consttype 18 :constlen 1 :constbyval \ntrue :constisnull false :constvalue 1 [ 0 0 0 83 ] })} :then { CONST \n:consttype 25 :constlen -1 :constbyval false :constisnull false \n:constvalue 12 [ 0 0 0 12 115 101 113 117 101 110 99 101 ] }} { WHEN { \nEXPR :typeOid 16 :opType op :oper { OPER :opno 92 :opid 0 :opresulttype \n16 } :args ({ VAR :varno 1 :varattno 12 :vartype 18 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 12} { CONST :consttype 18 \n:constlen 1 :constbyval true :constisnull false :constvalue 1 [ 0 0 0 \n115 ] })} :then { CONST :consttype 25 :constlen -1 :constbyval false \n:constisnull false :constvalue 11 [ 0 0 0 11 115 112 101 99 105 97 108 \n] }}) :defresult { CONST :consttype 25 :constlen -1 :constbyval false \n:constisnull true :constvalue <>}}} { TARGETENTRY :resdom { RESDOM \n:resno 3 :restype 19 :restypmod -1 :resname Owner :reskey 0 :reskeyop 0 \n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 2 :varattno 1 \n:vartype 19 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 1}}) \n:groupClause <> :havingQual <> :distinctClause <> :sortClause ({ \nSORTCLAUSE :tleSortGroupRef 1 :sortop 660 }) :limitOffset <> :limitCount \n<> :setOperations <> :resultRelations ()}\nERROR: AllocSetFree: cannot find block containing chunk 48a6d8\nDEBUG: AbortCurrentTransaction\nDEBUG: BackendStartup: forked pid=9789 socket=9\npostmaster child[9789]: 
starting with (postgres -d3 -v131072 -p \nbelbonedb_v2 )\nDEBUG: InitPostgres\nDEBUG: StartTransactionCommand\nDEBUG: query: set DateStyle to 'ISO'\nDEBUG: parse tree: { QUERY :command 5 :utility ? :resultRelation 0 \n:into <> :isPortal false :isBinary false :isTemp false :hasAggs false \n:hasSubLinks false :rtable <> :jointree <> :rowMarks () :targetList <> \n:groupClause <> :havingQual <> :distinctClause <> :sortClause <> \n:limitOffset <> :limitCount <> :setOperations <> :resultRelations ()}\nDEBUG: ProcessUtility: set DateStyle to 'ISO'\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: set geqo to 'OFF'\nDEBUG: parse tree: { QUERY :command 5 :utility ? :resultRelation 0 \n:into <> :isPortal false :isBinary false :isTemp false :hasAggs false \n:hasSubLinks false :rtable <> :jointree <> :rowMarks () :targetList <> \n:groupClause <> :havingQual <> :distinctClause <> :sortClause <> \n:limitOffset <> :limitCount <> :setOperations <> :resultRelations ()}\nDEBUG: ProcessUtility: set geqo to 'OFF'\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: set ksqo to 'ON'\nDEBUG: parse tree: { QUERY :command 5 :utility ? 
:resultRelation 0 \n:into <> :isPortal false :isBinary false :isTemp false :hasAggs false \n:hasSubLinks false :rtable <> :jointree <> :rowMarks () :targetList <> \n:groupClause <> :havingQual <> :distinctClause <> :sortClause <> \n:limitOffset <> :limitCount <> :setOperations <> :resultRelations ()}\nDEBUG: ProcessUtility: set ksqo to 'ON'\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: select oid from pg_type where typname='lo'\nDEBUG: parse tree: { QUERY :command 1 :utility <> :resultRelation 0 \n:into <> :isPortal false :isBinary false :isTemp false :hasAggs false \n:hasSubLinks false :rtable ({ RTE :relname pg_type :relid 1247 \n:subquery <> :alias <> :eref { ATTR :relname pg_type :attrs ( \n\"typname\" \"typowner\" \"typlen\" \"typprtlen\" \"typbyval\" \n\"typtype\" \"typisdefined\" \"typdelim\" \"typrelid\" \"typelem\" \n\"typinput\" \"typoutput\" \"typreceive\" \"typsend\" \"typalign\" \n\"typstorage\" \"typdefault\" )} :inh true :inFromCl true :checkForRead \ntrue :checkForWrite false :checkAsUser 0}) :jointree { FROMEXPR \n:fromlist ({ RANGETBLREF 1 }) :quals { EXPR :typeOid 16 :opType op \n:oper { OPER :opno 93 :opid 0 :opresulttype 16 } :args ({ VAR :varno 1 \n:varattno 1 :vartype 19 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 1} { CONST :consttype 19 :constlen 32 :constbyval false \n:constisnull false :constvalue 32 [ 108 111 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \n0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ] })}} :rowMarks () :targetList ({ \nTARGETENTRY :resdom { RESDOM :resno 1 :restype 26 :restypmod -1 :resname \noid :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { \nVAR :varno 1 :varattno -2 :vartype 26 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno -2}}) :groupClause <> :havingQual <> \n:distinctClause <> :sortClause <> :limitOffset <> :limitCount <> \n:setOperations <> :resultRelations ()}\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: select 
version()\nDEBUG: parse tree: { QUERY :command 1 :utility <> :resultRelation 0 \n:into <> :isPortal false :isBinary false :isTemp false :hasAggs false \n:hasSubLinks false :rtable <> :jointree { FROMEXPR :fromlist <> :quals \n<>} :rowMarks () :targetList ({ TARGETENTRY :resdom { RESDOM :resno 1 \n:restype 25 :restypmod -1 :resname version :reskey 0 :reskeyop 0 \n:ressortgroupref 0 :resjunk false } :expr { EXPR :typeOid 25 :opType \nfunc :oper { FUNC :funcid 89 :functype 25 } :args <>}}) :groupClause <> \n:havingQual <> :distinctClause <> :sortClause <> :limitOffset <> \n:limitCount <> :setOperations <> :resultRelations ()}\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: select pg_client_encoding()\nDEBUG: parse tree: { QUERY :command 1 :utility <> :resultRelation 0 \n:into <> :isPortal false :isBinary false :isTemp false :hasAggs false \n:hasSubLinks false :rtable <> :jointree { FROMEXPR :fromlist <> :quals \n<>} :rowMarks () :targetList ({ TARGETENTRY :resdom { RESDOM :resno 1 \n:restype 19 :restypmod -1 :resname pg_client_encoding :reskey 0 \n:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { EXPR :typeOid \n19 :opType func :oper { FUNC :funcid 810 :functype 19 } :args <>}}) \n:groupClause <> :havingQual <> :distinctClause <> :sortClause <> \n:limitOffset <> :limitCount <> :setOperations <> :resultRelations ()}\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: SELECT Config, nValue FROM MSysConf\nERROR: Relation \"msysconf\" does not exist\nDEBUG: AbortCurrentTransaction\nDEBUG: StartTransactionCommand\n\n\n\n\n",
"msg_date": "Tue, 17 Sep 2002 10:11:41 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Still big problems with pg_dump!"
},
{
"msg_contents": "On Tue, 17 Sep 2002, Wim wrote:\n\n> \n> \n> Hello guys,\n> \n> I have still problems with dumping my database....\n\nWim,\n\nThis kind of error is not generated as a result of on-disk data\ncorruption. It is probably a hardware error: memory, cache, or CPU. Can\nyou replace any of these components on the machine and attempt to\nre-create?\n\nGavin\n\n",
"msg_date": "Tue, 17 Sep 2002 23:50:49 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Still big problems with pg_dump!"
},
{
"msg_contents": "Hi Gavin,\n\nThnx for your response...\n\nMaybe you know how to check memory on an Ultrasparc-II???\n\nCheers!\n\nWim.\n\nGavin Sherry wrote:\n\n>On Tue, 17 Sep 2002, Wim wrote:\n>\n> \n>\n>>Hello guys,\n>>\n>>I have still problems with dumping my database....\n>> \n>>\n>\n>Wim,\n>\n>This kind of error is not generated as a result of on-disk data\n>corruption. It is probably a hardware error: memory, cache, or CPU. Can\n>you replace any of these components on the machine and attempt to\n>re-create?\n>\n>Gavin\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n>\n> \n>\n\n\n",
"msg_date": "Tue, 17 Sep 2002 16:07:06 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Re: Still big problems with pg_dump!"
},
{
"msg_contents": "-hackers removed.\n\nOn Tue, Sep 17, 2002 at 10:11:41AM +0200, Wim wrote:\n\n> ERROR: AllocSetFree: cannot find block containing chunk 4c5ad0\n\nThis is definitely some sort of disk problem. Either you've written\nbad data to the disk in some way, or else the disk is corrupted or\ndamaged.\n\nIf it is a hardware problem, the obvious suspects are memory (I'd\ndiscount this idea unless everything else doesn't check out), a disk\nfailure, or a controller failure.\n\nIt could be OS related as well. Several of the 2.4 Linux kernel\nseries, for instance, had roblems with massive filesystem corruption.\n\n> Some people suggest a drive failure, but I checked that and found no \n> problems...\n\nHow did you check?\n\n> I must say that one of the table contains more than 3.000.000 rows, \n> another more than 1.400.000...\n\nWhen is your most recent backup? If you can't pg_dump, you will be\nneeding that backup.\n\n> I must say that I had this problem a few months before, I got some help \n> then, but that couldn't solve my problem,\n> I recreated the database from scratch and copied the data, to fix thing \n> quickly. Thing went well for about two months :-(\n\nSo you re-installed the data set on a machine that had somehow\nfailed, you don't know why, and hoped that the problem would\nsolve itself? Uh, that wasn't a good idea. In the future, if you\nhave a problem which people suggest might be, for instance, a bad\ndisk, it'd be a _very good_ idea to figure out precisely what the\nproblem is before relying on the identical hardware again.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 17 Sep 2002 10:13:05 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "\n\nAndrew Sullivan wrote:\n\n>-hackers removed.\n>\n>On Tue, Sep 17, 2002 at 10:11:41AM +0200, Wim wrote:\n>\n> \n>\n>>ERROR: AllocSetFree: cannot find block containing chunk 4c5ad0\n>> \n>>\n>\n>This is definitely some sort of disk problem. Either you've written\n>bad data to the disk in some way, or else the disk is corrupted or\n>damaged.\n>\n>If it is a hardware problem, the obvious suspects are memory (I'd\n>discount this idea unless everything else doesn't check out), a disk\n>failure, or a controller failure.\n>\n>It could be OS related as well. Several of the 2.4 Linux kernel\n>series, for instance, had roblems with massive filesystem corruption.\n> \n>\nPostgres is running on solaris 8...\nIt is the same database as previous time that has the problem, but not \nthe same table.\nI think it's an error is the system tables.\n\n> \n>\n>>Some people suggest a drive failure, but I checked that and found no \n>>problems...\n>> \n>>\n>\n>How did you check?\n> \n>\nwith fsck.\n\n> \n>\n>>I must say that one of the table contains more than 3.000.000 rows, \n>>another more than 1.400.000...\n>> \n>>\n>\n>When is your most recent backup? If you can't pg_dump, you will be\n>needing that backup.\n> \n>\nHave backup... I can still SQL COPY to a text file, so that's no problem \nso far.\n\n> \n>\n>>I must say that I had this problem a few months before, I got some help \n>>then, but that couldn't solve my problem,\n>>I recreated the database from scratch and copied the data, to fix thing \n>>quickly. Thing went well for about two months :-(\n>> \n>>\n>\n>So you re-installed the data set on a machine that had somehow\n>failed, you don't know why, and hoped that the problem would\n>solve itself? Uh, that wasn't a good idea. 
In the future, if you\n>have a problem which people suggest might be, for instance, a bad\n>disk, it'd be a _very good_ idea to figure out precisely what the\n>problem is before relying on the identical hardware again.\n>\n>A\n>\n> \n>\nI know, I don't have much spare hardware, and the database had to work \nquickly, it was the\nonly solution then.\nChecked the disk, reinstalled the OS and still waiting for a CPU and \nmemory upgrade.\n\n\n",
"msg_date": "Tue, 17 Sep 2002 16:25:45 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Re: Still big problems with pg_dump!"
},
{
"msg_contents": "On Tue, Sep 17, 2002 at 04:25:45PM +0200, Wim wrote:\n> >\n> >>ERROR: AllocSetFree: cannot find block containing chunk 4c5ad0\n> >> \n> >>\n> >\n> >This is definitely some sort of disk problem. Either you've written\n> >bad data to the disk in some way, or else the disk is corrupted or\n> >damaged.\n> >\n> Postgres is running on solaris 8...\n> It is the same database as previous time that has the problem, but not \n> the same table.\n\nSomeone else suggested that this would not be the error when you have\nwritten bad data to the disk (I thought you could have this if the\ncontroller was flakey and wrote bad data in the past. Maybe I'm\nwrong. Probably).\n\n> >How did you check?\n> > \n> >\n> with fsck.\n\nThat won't help you if the controller is coming and going; you might\nfind that it works one time, and not another. Indeed, a disk on its\nway out can even pass fsck sometimes, although it's pretty unusual.\n\n> Have backup... I can still SQL COPY to a text file, so that's no problem \n> so far.\n\nWell, that's good. I'd suggest backing up _really often_ until you\nknow what the problem is, especially since this is production.\n\n> I know, I don't have much spare hardware, and the database had to work \n> quickly, it was the\n> only solution then.\n> Checked the disk, reinstalled the OS and still waiting for a CPU and \n> memory upgrade.\n\nDo you have another place to store the database in the meantime -- an\nIntel box with a cheap disk, or anything? At least you'd have\nanother copy of the database somewhere that way.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 17 Sep 2002 10:33:47 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "On 17 Sep 2002 at 10:33, Andrew Sullivan wrote:\n\n> On Tue, Sep 17, 2002 at 04:25:45PM +0200, Wim wrote:\n> > I know, I don't have much spare hardware, and the database had to work \n> > quickly, it was the\n> > only solution then.\n> > Checked the disk, reinstalled the OS and still waiting for a CPU and \n> > memory upgrade.\n> \n> Do you have another place to store the database in the meantime -- an\n> Intel box with a cheap disk, or anything? At least you'd have\n> another copy of the database somewhere that way.\n\nYeah. NFS or SMB mounted database would work.. albeit slowly.. Hopefully it's \nnot your root disk..\n\nBye\n Shridhar\n\n--\nDavis's Dictum:\tProblems that go away by themselves, come back by themselves.\n\n",
"msg_date": "Tue, 17 Sep 2002 20:09:54 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": false,
"msg_subject": "Re: Still big problems with pg_dump!"
},
{
"msg_contents": "\n\nAndrew Sullivan wrote:\n\n<snip>...\n\n>>>How did you check?\n>>> \n>>>\n>>> \n>>>\n>>with fsck.\n>> \n>>\n>\n>That won't help you if the controller is coming and going; you might\n>find that it works one time, and not another. Indeed, a disk on its\n>way out can even pass fsck sometimes, although it's pretty unusual.\n>\nThe DB is located on a RAID5 disk array...\n\n> \n>\n>>Have backup... I can still SQL COPY to a text file, so that's no problem \n>>so far.\n>> \n>>\n>\n>Well, that's good. I'd suggest backing up _really often_ until you\n>know what the problem is, especially since this is production.\n>\n> \n>\n>>I know, I don't have much spare hardware, and the database had to work \n>>quickly, it was the\n>>only solution then.\n>>Checked the disk, reinstalled the OS and still waiting for a CPU and \n>>memory upgrade.\n>> \n>>\n>\n>Do you have another place to store the database in the meantime -- an\n>Intel box with a cheap disk, or anything? At least you'd have\n>another copy of the database somewhere that way.\n>\n>A\n>\n> \n>\nDon't have a disk that can store my database...\n\n\nStill searching....\n\n\nCheers!\n\nWim\n\n",
"msg_date": "Tue, 17 Sep 2002 16:46:00 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "On Tue, Sep 17, 2002 at 04:46:00PM +0200, Wim wrote:\n> >\n> >That won't help you if the controller is coming and going; you might\n> >find that it works one time, and not another. Indeed, a disk on its\n> >way out can even pass fsck sometimes, although it's pretty unusual.\n> >\n> The DB is located on a RAID5 disk array...\n\nHmm. _That's_ interesting. I'd bet on a flakey controller, then. \nIs it hardware RAID? (I assume so.)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 17 Sep 2002 10:51:19 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> On Tue, Sep 17, 2002 at 04:25:45PM +0200, Wim wrote:\n>> ERROR: AllocSetFree: cannot find block containing chunk 4c5ad0\n>> \n>> Postgres is running on solaris 8...\n\n> Someone else suggested that this would not be the error when you have\n> written bad data to the disk (I thought you could have this if the\n> controller was flakey and wrote bad data in the past. Maybe I'm\n> wrong. Probably).\n\nActually, what it looks like to me is a memory clobber; I don't think\nbad data on disk would be likely to lead to this particular type of\nfailure. But writing one byte too many into a string, and thereby\nzeroing the high-order byte of an adjacent pointer, could lead to\nexactly this message when we later try to pfree() the pointer.\n\nI am wondering if Wim is running into that same Solaris snprintf() bug\nthat we discovered awhile back --- it was not clear if the bug still\nexists in Solaris 8, but the symptoms sure match. See\nhttp://archives.postgresql.org/pgsql-bugs/2002-07/msg00059.php\n\nIt would be useful to see a stack traceback from the point of the error,\nif possible.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 10:55:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump! "
},
{
"msg_contents": "\n\nAndrew Sullivan wrote:\n\n>On Tue, Sep 17, 2002 at 04:46:00PM +0200, Wim wrote:\n> \n>\n>>>That won't help you if the controller is coming and going; you might\n>>>find that it works one time, and not another. Indeed, a disk on its\n>>>way out can even pass fsck sometimes, although it's pretty unusual.\n>>>\n>>> \n>>>\n>>The DB is located on a RAID5 disk array...\n>> \n>>\n>\n>Hmm. _That's_ interesting. I'd bet on a flakey controller, then. \n>Is it hardware RAID? (I assume so.)\n>\n>A\n>\n> \n>\nYep, hardware RAID, infact it's a SUN T3 disk array with 9*36GB SCSI \ndisks...\n\nCheers!\n\nWim\n\n",
"msg_date": "Tue, 17 Sep 2002 16:56:26 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "On Tue, Sep 17, 2002 at 10:55:18AM -0400, Tom Lane wrote:\n> \n> I am wondering if Wim is running into that same Solaris snprintf() bug\n> that we discovered awhile back --- it was not clear if the bug still\n> exists in Solaris 8, but the symptoms sure match. See\n> http://archives.postgresql.org/pgsql-bugs/2002-07/msg00059.php\n\nHmm, good point. That was only a problem when compiled with the\n64-bit libraries, IIRC. Wim, what does 'file postmaster' say?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 17 Sep 2002 10:58:01 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Andrew Sullivan <andrew@libertyrms.info> writes:\n> \n>\n>>On Tue, Sep 17, 2002 at 04:25:45PM +0200, Wim wrote:\n>> \n>>\n>>>ERROR: AllocSetFree: cannot find block containing chunk 4c5ad0\n>>>\n>>>Postgres is running on solaris 8...\n>>> \n>>>\n>\n> \n>\n>>Someone else suggested that this would not be the error when you have\n>>written bad data to the disk (I thought you could have this if the\n>>controller was flakey and wrote bad data in the past. Maybe I'm\n>>wrong. Probably).\n>> \n>>\n>\n>Actually, what it looks like to me is a memory clobber; I don't think\n>bad data on disk would be likely to lead to this particular type of\n>failure. But writing one byte too many into a string, and thereby\n>zeroing the high-order byte of an adjacent pointer, could lead to\n>exactly this message when we later try to pfree() the pointer.\n>\n>I am wondering if Wim is running into that same Solaris snprintf() bug\n>that we discovered awhile back --- it was not clear if the bug still\n>exists in Solaris 8, but the symptoms sure match. See\n>http://archives.postgresql.org/pgsql-bugs/2002-07/msg00059.php\n>\n>It would be useful to see a stack traceback from the point of the error,\n>if possible.\n>\n>\t\t\tregards, tom lane\n>\nI Would like to send a stack traceback, but I need some halp on this \n(never done this before).\n\nsome add. info:\n\nSELECT relname FROM pg_class WHERE relname like 'pg_%' AND relkind = 'r';\ngives:\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\nSELECT relname FROM pg_class;\nworks well...\n\nSELECT relname, relkind from pg_class WHERE relkind='r';\nworks also...\n\nSELECT relname, relkind from pg_class WHERE relname like 'pg_%';\nproduces the same error as above...\n\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n> \n>\nCheers!\n\nWim\n\n",
"msg_date": "Tue, 17 Sep 2002 17:04:23 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "Wim <wdh@belbone.be> writes:\n> Tom Lane wrote:\n>> It would be useful to see a stack traceback from the point of the error,\n>> if possible.\n\n> I Would like to send a stack traceback, but I need some halp on this \n> (never done this before).\n\n> some add. info:\n\n> SELECT relname FROM pg_class WHERE relname like 'pg_%' AND relkind = 'r';\n> gives:\n> server closed the connection unexpectedly\n\nThis should be producing a core file in your database directory\n($PGDATA/base/yourdboid/). With gdb you'd do\n\tgdb /path/to/postgres-executable /path/to/corefile\n\tgdb> bt\n\tgdb> quit\nI don't remember the equivalent incantations with Solaris' debugger.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 11:08:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump! "
},
{
"msg_contents": "On Tue, Sep 17, 2002 at 05:04:23PM +0200, Wim wrote:\n> \n> SELECT relname FROM pg_class WHERE relname like 'pg_%' AND relkind = 'r';\n> gives:\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> \n> SELECT relname, relkind from pg_class WHERE relname like 'pg_%';\n> produces the same error as above...\n\nThat does rather suggest a memory clobber, as Tom suggested. Wim's\n'file postmaster' shows it's a 32-bit binary, though, and I verified\nin my notes that the snprintf bug was only in the 64-bit library. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 17 Sep 2002 11:11:47 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "On Tue, Sep 17, 2002 at 11:08:46AM -0400, Tom Lane wrote:\n\n> This should be producing a core file in your database directory\n> ($PGDATA/base/yourdboid/). With gdb you'd do\n> \tgdb /path/to/postgres-executable /path/to/corefile\n> \tgdb> bt\n> \tgdb> quit\n> I don't remember the equivalent incantations with Solaris' debugger.\n\nI think it's \n\nadb /path/to/postgres-executable /path/to/corefile\n$c\n\n[or]\n\n$C\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 17 Sep 2002 11:19:44 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "gdb gives me this:\n\nbash-2.05$ adb /usr/local/pgsql/bin/postgres \n/data/postgres/base/17903709/core\ncore file = /data/postgres/base/17903709/core -- program \n``/usr/local/pgsql/bin/postgres'' on platform SUNW,Ultra-60\nSIGBUS: Bus Error\n$c\nAllocSetAlloc+0x18c(476120, 3, 226018, 0, 0, 13)\nMemoryContextAlloc+0x68(476120, 3, 70670000, 7efefeff, 81010100, ff00)\nMemoryContextStrdup+0x28(476120, 4c8568, ffffffff, fffffff8, 0, ffbfe541)\nmake_greater_string+0x1c(4c8568, 13, 297, 4c8730, 0, ffbfe5d1)\nprefix_selectivity+0xcc(4c0780, 4c7ab8, 4c8568, ffbfe670, ffbfe68b, \nffbfe6a0)\npatternsel+0x278(ffbfe7a0, 0, 476120, 0, 0, ffbfe728)\nlikesel+0x10(ffbfe7a0, ffbfe7a0, 1d7080, fffffff8, 0, ffbfe809)\nOidFunctionCall4+0x124(71b, 4c0780, 4b7, 4c7b90, 1, ffbfe7e1)\nrestriction_selectivity+0x64(4c0780, 4b7, 4c7b90, 1, 0, ffbfe8a0)\nclauselist_selectivity+0x164(4c0780, 4c8478, 1, 0, 0, 4c8295)\nrestrictlist_selectivity+0x2c(4c0780, 4c7c18, 1, 0, 0, ff0000)\nset_baserel_size_estimates+0x2c(4c0780, 4c7d70, ffffffff, fffffff8, 0, \n4c83f9)\nset_plain_rel_pathlist+0x18(4c0780, 4c7d70, 4c0808, 53, 4c0348, 20)\nset_base_rel_pathlists+0xf8(4c0780, 4c8460, 1, 0, 0, 4c7c00)\nmake_one_rel+0xc(4c0780, 0, ffbfecb8, ffbfecb0, 0, 0)\nsubplanner+0x148(4c0780, 4c7cb8, 0, 0, 0, 0)\nquery_planner+0x98(4c0780, 4c7a48, 0, 0, 0, 0)\ngrouping_planner+0x7cc(4c0780, bff00000, 0, ff13a000, 0, 0)\nsubquery_planner+0x260(4c0780, bff00000, 0, 7efefeff, 81010100, ff0000)\nplanner+0x54(4c0780, 4c1e00, 4c15e0, fffffff8, 0, ffbfffd5)\npg_plan_query+0x54(4c0780, 29b99c, 0, 53, 4c0348, 20)\npg_exec_query_string+0x388(4c0348, 2, 476010, 4c0330, 800000, 0)\nPostgresMain+0x1398(5, ffbff2d0, 40c1d9, 473, 0, ffbff1c8)\nDoBackend+0x7d8(40c0a8, 1, 22730, 1552dc, 0, 40c2d9)\nBackendStartup+0xb0(40c0a8, 5, ffbff800, ffbff5d8, ffbff658, 0)\nServerLoop+0x370(297024, 49a0, 0, 3f1f88, 297004, 2d560000)\nPostmasterMain+0xbe4(5, 3f2980, 65720000, 0, 65720000, 65720000)\nmain+0x294(5, ffbffd8c, 
ffbffda4, 3e64c0, 0, 0)\n_start+0x5c(0, 0, 0, 0, 0, 0)\n$C\nffbfe340 AllocSetAlloc+0x18c(476120, 3, 226018, 0, 0, 13)\nffbfe3e0 MemoryContextAlloc+0x68(476120, 3, 70670000, 7efefeff, \n81010100, ff00)\nffbfe450 MemoryContextStrdup+0x28(476120, 4c8568, ffffffff, fffffff8, 0, \nffbfe541)\nffbfe4c8 make_greater_string+0x1c(4c8568, 13, 297, 4c8730, 0, ffbfe5d1)\nffbfe548 prefix_selectivity+0xcc(4c0780, 4c7ab8, 4c8568, ffbfe670, \nffbfe68b, ffbfe6a0)\nffbfe5d8 patternsel+0x278(ffbfe7a0, 0, 476120, 0, 0, ffbfe728)\nffbfe6b0 likesel+0x10(ffbfe7a0, ffbfe7a0, 1d7080, fffffff8, 0, ffbfe809)\nffbfe728 OidFunctionCall4+0x124(71b, 4c0780, 4b7, 4c7b90, 1, ffbfe7e1)\nffbfe828 restriction_selectivity+0x64(4c0780, 4b7, 4c7b90, 1, 0, ffbfe8a0)\nffbfe8b0 clauselist_selectivity+0x164(4c0780, 4c8478, 1, 0, 0, 4c8295)\nffbfe958 restrictlist_selectivity+0x2c(4c0780, 4c7c18, 1, 0, 0, ff0000)\nffbfe9d8 set_baserel_size_estimates+0x2c(4c0780, 4c7d70, ffffffff, \nfffffff8, 0, 4c83f9)\nffbfea48 set_plain_rel_pathlist+0x18(4c0780, 4c7d70, 4c0808, 53, 4c0348, 20)\nffbfeab8 set_base_rel_pathlists+0xf8(4c0780, 4c8460, 1, 0, 0, 4c7c00)\nffbfeb40 make_one_rel+0xc(4c0780, 0, ffbfecb8, ffbfecb0, 0, 0)\nffbfebb8 subplanner+0x148(4c0780, 4c7cb8, 0, 0, 0, 0)\nffbfec70 query_planner+0x98(4c0780, 4c7a48, 0, 0, 0, 0)\nffbfecf8 grouping_planner+0x7cc(4c0780, bff00000, 0, ff13a000, 0, 0)\nffbfedc8 subquery_planner+0x260(4c0780, bff00000, 0, 7efefeff, 81010100, \nff0000)\nffbfee58 planner+0x54(4c0780, 4c1e00, 4c15e0, fffffff8, 0, ffbfffd5)\nffbfeed8 pg_plan_query+0x54(4c0780, 29b99c, 0, 53, 4c0348, 20)\nffbfef50 pg_exec_query_string+0x388(4c0348, 2, 476010, 4c0330, 800000, 0)\nffbff038 PostgresMain+0x1398(5, ffbff2d0, 40c1d9, 473, 0, ffbff1c8)\nffbff0f8 DoBackend+0x7d8(40c0a8, 1, 22730, 1552dc, 0, 40c2d9)\nffbff4e8 BackendStartup+0xb0(40c0a8, 5, ffbff800, ffbff5d8, ffbff658, 0)\nffbff568 ServerLoop+0x370(297024, 49a0, 0, 3f1f88, 297004, 2d560000)\nffbff810 PostmasterMain+0xbe4(5, 3f2980, 65720000, 0, 
65720000, 65720000)\nffbffca0 main+0x294(5, ffbffd8c, ffbffda4, 3e64c0, 0, 0)\nffbffd28 _start+0x5c(0, 0, 0, 0, 0, 0)\n\n\n\nAndrew Sullivan wrote:\n\n>On Tue, Sep 17, 2002 at 11:08:46AM -0400, Tom Lane wrote:\n>\n>\n>>This should be producing a core file in your database directory\n>>($PGDATA/base/yourdboid/). With gdb you'd do\n>>\tgdb /path/to/postgres-executable /path/to/corefile\n>>\tgdb> bt\n>>\tgdb> quit\n>>I don't remember the equivalent incantations with Solaris' debugger.\n>>\n>\n>I think it's \n>\n>adb /path/to/postgres-executable /path/to/corefile\n>$c\n>\n>[or]\n>\n>$C\n>\n>A\n>\n>\n\n\n",
"msg_date": "Wed, 18 Sep 2002 10:36:53 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "Wim <wdh@belbone.be> writes:\n> gdb gives me this:\n> bash-2.05$ adb /usr/local/pgsql/bin/postgres \n> /data/postgres/base/17903709/core\n> core file = /data/postgres/base/17903709/core -- program \n> ``/usr/local/pgsql/bin/postgres'' on platform SUNW,Ultra-60\n> SIGBUS: Bus Error\n> $c\n> AllocSetAlloc+0x18c(476120, 3, 226018, 0, 0, 13)\n> MemoryContextAlloc+0x68(476120, 3, 70670000, 7efefeff, 81010100, ff00)\n> MemoryContextStrdup+0x28(476120, 4c8568, ffffffff, fffffff8, 0, ffbfe541)\n> make_greater_string+0x1c(4c8568, 13, 297, 4c8730, 0, ffbfe5d1)\n> prefix_selectivity+0xcc(4c0780, 4c7ab8, 4c8568, ffbfe670, ffbfe68b, \n> ffbfe6a0)\n> patternsel+0x278(ffbfe7a0, 0, 476120, 0, 0, ffbfe728)\n> likesel+0x10(ffbfe7a0, ffbfe7a0, 1d7080, fffffff8, 0, ffbfe809)\n\nHm. Are you running in a multibyte character encoding? I had a note\nthat make_greater_string may have problems in the MULTIBYTE case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 10:47:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump! "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Wim <wdh@belbone.be> writes:\n> \n>\n>>gdb gives me this:\n>>bash-2.05$ adb /usr/local/pgsql/bin/postgres \n>>/data/postgres/base/17903709/core\n>>core file = /data/postgres/base/17903709/core -- program \n>>``/usr/local/pgsql/bin/postgres'' on platform SUNW,Ultra-60\n>>SIGBUS: Bus Error\n>>$c\n>>AllocSetAlloc+0x18c(476120, 3, 226018, 0, 0, 13)\n>>MemoryContextAlloc+0x68(476120, 3, 70670000, 7efefeff, 81010100, ff00)\n>>MemoryContextStrdup+0x28(476120, 4c8568, ffffffff, fffffff8, 0, ffbfe541)\n>>make_greater_string+0x1c(4c8568, 13, 297, 4c8730, 0, ffbfe5d1)\n>>prefix_selectivity+0xcc(4c0780, 4c7ab8, 4c8568, ffbfe670, ffbfe68b, \n>>ffbfe6a0)\n>>patternsel+0x278(ffbfe7a0, 0, 476120, 0, 0, ffbfe728)\n>>likesel+0x10(ffbfe7a0, ffbfe7a0, 1d7080, fffffff8, 0, ffbfe809)\n>> \n>>\n>\n>Hm. Are you running in a multibyte character encoding? I had a note\n>that make_greater_string may have problems in the MULTIBYTE case.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n> \n>\nYes, I compiled postgres with multibyte and ODBC support . Is there a \nworkaround possibel to fix my problem?\n\nCheers!\n\nWim\n\n",
"msg_date": "Wed, 18 Sep 2002 17:00:33 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
},
{
"msg_contents": "Wim <wdh@belbone.be> writes:\n>> Hm. Are you running in a multibyte character encoding? I had a note\n>> that make_greater_string may have problems in the MULTIBYTE case.\n\n> Yes, I compiled postgres with multibyte and ODBC support .\n\nBut are you actually *using* the multibyte code? What does \"psql -l\"\nshow as the encoding for your database?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 11:06:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump! "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Wim <wdh@belbone.be> writes:\n>\n>>>Hm. Are you running in a multibyte character encoding? I had a note\n>>>that make_greater_string may have problems in the MULTIBYTE case.\n>>>\n>\n>>Yes, I compiled postgres with multibyte and ODBC support .\n>>\n>\n>But are you actually *using* the multibyte code? What does \"psql -l\"\n>show as the encoding for your database?\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n>\npsql -l shows:\n\n List of databases\n Name | Owner | Encoding\n---------------+----------+-----------\n addressIP | postgres | SQL_ASCII\n belbonedb_v2 | postgres | SQL_ASCII\n belbonedb_v21 | postgres | SQL_ASCII\n peering | postgres | SQL_ASCII\n peering_v2 | postgres | SQL_ASCII\n postgres | postgres | SQL_ASCII\n smsbilling | postgres | SQL_ASCII\n template0 | postgres | SQL_ASCII\n template1 | postgres | SQL_ASCII\n(9 rows)\n\nMaybe I should recompile Postgres without multibyte support...\n\n\nCheers!\n\nWim\n\n\n\n",
"msg_date": "Thu, 19 Sep 2002 07:57:31 +0200",
"msg_from": "Wim <wdh@belbone.be>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Still big problems with pg_dump!"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au] \n> Sent: 17 September 2002 09:05\n> To: Dave Page; Bruce Momjian\n> Cc: Robert Treat; Justin Clift; Peter Eisentraut; Tom Lane; \n> Curt Sampson; PostgreSQL Hackers Mailing List\n> Subject: RE: [HACKERS] PGXLOG variable worthwhile?\n> \n> \n> > I use PostgreSQL on Linux for production and XP for \n> development, and \n> > am likely to continue that way. I've been beta testing the native \n> > Win32 port of PostgreSQL as Justin has and the latest version is \n> > fantastic - it runs as a service, osdb shows impressive results \n> > compared to Cygwin PostgreSQL on the same system and it's a \n> breeze to \n> > install, despite there being no installer yet.\n> \n> >From where do we get this fabled Win32 port?\n> \n\nThe call for testers (below) was originally posted to the Cygwin list.\n\nRegards, Dave.\n\n===\n\nMy company is actively working on a Native Windows Port of Postgres\nbased on 7.2.1. This is the same group that Jan Wieck and Katie Ward\nwork with. We are now at the stage that we need community involvement\nto help work out the bugs.\n\nWe plan on contributing the code to the Postgres base, but\nwe want to make sure that most of the bugs have been worked \nout before doing so.\n\nWe are looking for people who have an application that currently runs on\nPostgres 7.2 and who also have a Windows environment. If you would like\nto get involved, please send me email at mailto:mikef@multera.com\n\nThanks...\n\n...MikeF\n-- \n------------------------------------------------------------------\nMike Furgal - mailto:mikef@multera.com - http://www.multera.com\n------------------------------------------------------------------\n",
"msg_date": "Tue, 17 Sep 2002 09:35:32 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
}
] |
[
{
"msg_contents": "\n> I think there is some confusion here. The runtime checks Andreas was\n> talking about was allowing a double of 64.0 to cast to an int4 while\n> disallowing 64.1 from being cast to an int4 because it is not a hole\n> number. \n\nYes, and Tom's proposal for numbers is sufficient for constants, since the 64.0\nwill initially be an int2 and thus do the correct thing together with an int4,\nand the 64.1 constant will be a numeric, and thus also do the correct thing with\nall other types.\n\nIt is not sufficient for the optimizer for joins though, since it cannot use the \nint4 index when confronted with \"where tab1.int4col = tab2.numericcol\".\nHere only a runtime (non aborting) check would help.\nMaybe this could be overcome if the index access (or something inbetween) would allow\na \"numeric\" constant for an int4 index (If the \"numeric\" value does not cleanly convert\nto int4, return no rows).\n\nAndreas\n",
"msg_date": "Tue, 17 Sep 2002 10:47:05 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> It is not sufficient for the optimizer for joins though, since it\n> cannot use the int4 index when confronted with \"where tab1.int4col =\n> tab2.numericcol\".\n\nFor cross-datatype joins, the proposal as I sketched it would result in\nthe parser producing, eg,\n\twhere tab1.int4col::numeric = tab2.numericcol\nthat is, we'd have a single-datatype operator and a runtime cast in the\nexpression.\n\nThe optimizer is today capable of producing a nested loop with inner\nindexscan join from this --- so long as the inner indexscan is on the\nuncasted column (numericcol in this case). It won't consider an int4\nindex on int4col for this. This seems okay to me, actually. It's\nbetter than what you get now with a cross-datatype comparison operator\n(neither side can be indexscanned since the operator matches neither\nindex opclass).\n\nThe major failing that needs to be rectified is that merge and hash\njoins won't even be considered, because that code only works with\nquals that are unadorned \"Var = Var\". I don't believe there is any\nfundamental reason for this restriction. As long as the top operator\nis merge/hashjoinable, any expression should work on either side.\nIt's just a matter of cleaning up a few unwarranted shortcuts in the\nplanner.\n\nBut that work does need to be done before we can rip out all the\ncross-datatype operators ... so this is definitely not happening\nfor 7.3 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 09:38:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Shridhar Daithankar \n> [mailto:shridhar_daithankar@persistent.co.in] \n> Sent: 17 September 2002 09:30\n> To: Pgsql-hackers@postgresql.org\n> Cc: Pgsql-general@postgresql.org\n> Subject: Re: [HACKERS] PGXLOG variable worthwhile?\n> \n> \n> On 17 Sep 2002 at 16:11, Christopher Kings-Lynne wrote:\n> \n> But I disagree. History says that nobody can compete with \n> microsoft on \n> microsoft platform. Postgres will not be competing with \n> either SQL Server or \n> access. It would remain as toy database..\n\nLike Oracle?\n\n> Let's leave it. The main point to focus postgres on unix is \n> not only because \n> unix is proven/known as robust and scalable, but unix is much \n> more standard to \n> support across multiple OS. The amount with which windows \n> differs from unices \n> on API level, any serious efforts to make postgresql good \n> enough on windows \n> whould be a mammoth task.\n\nMaybe, but it's pretty much there now. The beta Win32 native port has\nbeen performing excellently in the tests I've been able to throw at it,\ncertainly better than the Cygwin port.\n\n> I haven't tried either port of postgres on windows but I \n> would not bet on any \n> of them.\n\nThe thing I wouldn't bet on is not the quality of the code produced by\nthe developers here, but Windows. Yes, it runs great here at the moment,\nand has done for a while now but there's no guarantee that a new release\nwon't have a nasty bug. But that applies to the SQL user as well though.\nOr for that matter the user of *any* other OS...\n\n> There are limits as what postgresql can do on windows and \n> probably postgresql \n> development team can't do much about many of them..\n\nThe only real issue afaik with the current beta is that you can only run\none instance on a single server. That is the case with SQL Server as\nwell of course.\n\n> No offense to anybody.. just some opinions..\n> \n\nLikewise.\n\nRegards, Dave.\n",
"msg_date": "Tue, 17 Sep 2002 09:48:19 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On 17 Sep 2002 at 9:48, Dave Page wrote:\n> > -----Original Message-----\n> > From: Shridhar Daithankar \n> > [mailto:shridhar_daithankar@persistent.co.in] \n> > Sent: 17 September 2002 09:30\n> > To: Pgsql-hackers@postgresql.org\n> > Cc: Pgsql-general@postgresql.org\n> > Subject: Re: [HACKERS] PGXLOG variable worthwhile?\n> > \n> > \n> > On 17 Sep 2002 at 16:11, Christopher Kings-Lynne wrote:\n> > \n> > But I disagree. History says that nobody can compete with \n> > microsoft on \n> > microsoft platform. Postgres will not be competing with \n> > either SQL Server or \n> > access. It would remain as toy database..\n> \n> Like Oracle?\n\nOracle is a different story. It's has money but still from what I see around, \noracle on windows is nowhere in league of oracle on unix..\n\n> Maybe, but it's pretty much there now. The beta Win32 native port has\n> been performing excellently in the tests I've been able to throw at it,\n> certainly better than the Cygwin port.\n\nThat's a real good news..\n\n> The thing I wouldn't bet on is not the quality of the code produced by\n> the developers here, but Windows. Yes, it runs great here at the moment,\n\nExactly my thoughts. Problems with windows are more troublesome because nobody \ncan do a thing about it..\n\n> and has done for a while now but there's no guarantee that a new release\n> won't have a nasty bug. But that applies to the SQL user as well though.\n> Or for that matter the user of *any* other OS...\n\nOK. Very fine. The issue remains two folds..hope I stay with facts\n\n1)Postgres native port on windows will not be same as standard PG CVS head \nbecause it has come later. This phase lag may cause some troubles, technically \nand/or on expectations from users side.\n\n2)Most importantly, if postgres native ports needs some windows specific \ntweaking, standard PG wouldn't be able to accommodate it being a unix tree now. \nI think Tom and others does not want this situation, which is very correct.\n\n2) is more important here. Either we have branches in postgres with most code \ncommon if possible or some makefile hackery if large part of code is not \ncommon..\n\nI would say 7.4 may be a good candidate for merge(Or is it already done. \nHaven't checked out CVS lately..) The native windows stuff will be out of it's \nbeta by then as well..\n\nHTH\n\nBye\n Shridhar\n\n--\nIntel engineering seem to have misheard Intel marketing strategy. The phrasewas \n\"Divide and conquer\" not \"Divide and cock up\"(By iialan@www.linux.org.uk, Alan \nCox)\n\n",
"msg_date": "Tue, 17 Sep 2002 14:40:54 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au] \n> Sent: 17 September 2002 09:49\n> To: shridhar_daithankar@persistent.co.in; Pgsql-hackers@postgresql.org\n> Cc: Pgsql-general@postgresql.org\n> Subject: Re: [HACKERS] PGXLOG variable worthwhile?\n> \n>\n> Users HAVE provided their feedback - they want Postgres on \n> Windows. What's the point of open source if we can't \n> accomodate them? There's no problems with economics, \n> marketing, schedules, deadlines, nothing. The reason that \n> people like Open Source is because they don't have to deal \n> with some monolithic company refusing to port to their \n> platform just because it's \"too hard\".\n\nWhich in this case is what puzzles me. We are only talking about a\nsimple GUC variable after all - I don't know for sure, but I'm guessing\nit's not a huge effort to add one?\n\nRegards, Dave.\n",
"msg_date": "Tue, 17 Sep 2002 09:52:51 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Dave Page wrote:\n> Which in this case is what puzzles me. We are only talking about a\n> simple GUC variable after all - I don't know for sure, but I'm guessing\n> it's not a huge effort to add one?\n\nCan we get agreement on that? A GUC for pg_xlog location? Much cleaner\nthan -X, doesn't have the problems of possible accidental use, and does\nallow pg_xlog moving without symlinks, which some people don't like?\n\nIf I can get a few 'yes' votes I will add it to TODO and do it for 7.4.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 17:07:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Tue, 17 Sep 2002, Bruce Momjian wrote:\n\n> Dave Page wrote:\n> > Which in this case is what puzzles me. We are only talking about a\n> > simple GUC variable after all - I don't know for sure, but I'm guessing\n> > it's not a huge effort to add one?\n> \n> Can we get agreement on that? A GUC for pg_xlog location? Much cleaner\n> than -X, doesn't have the problems of possible accidental use, and does\n> allow pg_xlog moving without symlinks, which some people don't like?\n> \n> If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.\n\nGUC instead of -X or PGXLOG : yes.\n\nHowever, how is that going to work if tablespaces are introduced in 7.4. Surely\nthe same mechanism for tablespaces would be used for pg_xlog. As the tablespace\nmechanism hasn't been determined yet, as far as I know, wouldn't it be best to\nsee what happens there before creating the TODO item for the log?\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Tue, 17 Sep 2002 23:03:58 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "\"Nigel J. Andrews\" wrote:\n<snip>\n> However, how is that going to work if tablespaces are introduced in 7.4. Surely\n> the same mechanism for tablespaces would be used for pg_xlog. As the tablespace\n> mechanism hasn't been determined yet, as far as I know, wouldn't it be best to\n> see what happens there before creating the TODO item for the log?\n\nIt's a Yes from me of course.\n\nWould a TODO list entry of something like \"Add a GUC xlog_path variable\"\nbe broad enough that\npeople keep it in mind when tablespaces are created, but it doesn't get\nforgotten about\nby not being on the list?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> --\n> Nigel J. Andrews\n> Director\n> \n> ---\n> Logictree Systems Limited\n> Computer Consultants\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 18 Sep 2002 08:14:17 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "Nigel J. Andrews wrote:\n> On Tue, 17 Sep 2002, Bruce Momjian wrote:\n> \n> > Dave Page wrote:\n> > > Which in this case is what puzzles me. We are only talking about a\n> > > simple GUC variable after all - I don't know for sure, but I'm guessing\n> > > it's not a huge effort to add one?\n> > \n> > Can we get agreement on that? A GUC for pg_xlog location? Much cleaner\n> > than -X, doesn't have the problems of possible accidental use, and does\n> > allow pg_xlog moving without symlinks, which some people don't like?\n> > \n> > If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.\n> \n> GUC instead of -X or PGXLOG : yes.\n> \n> However, how is that going to work if tablespaces are introduced in 7.4. Surely\n> the same mechanism for tablespaces would be used for pg_xlog. As the tablespace\n> mechanism hasn't been determined yet, as far as I know, wouldn't it be best to\n> see what happens there before creating the TODO item for the log?\n\nGood point. How about:\n\n\tAllow pg_xlog to be moved without symlinks\n\nThat is vague enough. Added to TODO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 18:22:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "I forget, is it possible to make a GUC that cannot be changed during\nruntime?\n\nIf so, then I vote yes, otherwise, there is a problem if someone tries.\n\n\nOn Tue, 2002-09-17 at 17:07, Bruce Momjian wrote:\n> Dave Page wrote:\n> > Which in this case is what puzzles me. We are only talking about a\n> > simple GUC variable after all - I don't know for sure, but I'm guessing\n> > it's not a huge effort to add one?\n> \n> Can we get agreement on that? A GUC for pg_xlog location? Much cleaner\n> than -X, doesn't have the problems of possible accidental use, and does\n> allow pg_xlog moving without symlinks, which some people don't like?\n\n-- \n Rod Taylor\n\n",
"msg_date": "17 Sep 2002 18:34:08 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Rod Taylor wrote:\n> I forget, is it possible to make a GUC that cannot be changed during\n> runtime?\n\nYes, you can set it to it only can be changed by the super-user and only\ntakes effect on restart.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 18:36:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Dave Page wrote:\n> > Which in this case is what puzzles me. We are only talking about a\n> > simple GUC variable after all - I don't know for sure, but I'm guessing\n> > it's not a huge effort to add one?\n> \n> Can we get agreement on that? A GUC for pg_xlog location? Much cleaner\n> than -X, doesn't have the problems of possible accidental use, and does\n> allow pg_xlog moving without symlinks, which some people don't like?\n> \n> If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.\n\n'yes' - make it one more GUC and done\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Wed, 18 Sep 2002 00:43:07 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "\"Nigel J. Andrews\" wrote:\n> However, how is that going to work if tablespaces are introduced in 7.4. Surely\n> the same mechanism for tablespaces would be used for pg_xlog. As the tablespace\n> mechanism hasn't been determined yet, as far as I know, wouldn't it be best to\n> see what happens there before creating the TODO item for the log?\n\nNo, tablespaces would have to be something DB specific, while the Xlog\nis instance wide (instance == one postmaster == installation == whatever\nyou name that level).\n\nMy vision is that we start off with two tablespaces per database,\n\"default\" and \"default_idx\", which are subdirectories inside the\ndatabase directory. All (non-index-)objects created without explicitly\nsaying what tablespace they belong to automatically belong to default.\nIndexes ... bla.\n\nThe tablespace catalog will have a column telling the physical location\nof that directory. Moving it around will not be *that* easy, I guess,\nbecause the UPDATE of that entry has to go hand in hand with the move of\nall files in that damned directory. But that's another thing to sort out\nlater, IMHO.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Wed, 18 Sep 2002 00:52:58 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "Jan Wieck wrote:\n> \"Nigel J. Andrews\" wrote:\n> > However, how is that going to work if tablespaces are introduced in 7.4. Surely\n> > the same mechanism for tablespaces would be used for pg_xlog. As the tablespace\n> > mechanism hasn't been determined yet, as far as I know, wouldn't it be best to\n> > see what happens there before creating the TODO item for the log?\n> \n> No, tablespaces would have to be something DB specific, while the Xlog\n> is instance wide (instance == one postmaster == installation == whatever\n> you name that level).\n> \n> My vision is that we start off with two tablespaces per database,\n> \"default\" and \"default_idx\", which are subdirectories inside the\n> database directory. All (non-index-)objects created without explicitly\n> saying what tablespace they belong to automatically belong to default.\n> Indexes ... bla.\n> \n> The tablespace catalog will have a column telling the physical location\n> of that directory. Moving it around will not be *that* easy, I guess,\n> because the UPDATE of that entry has to go hand in hand with the move of\n> all files in that damned directory. But that's another thing to sort out\n> later, IMHO.\n\nYes, the nifty trick was to use a lstat() from pg_dump to learn if it is a\nsymlink and if so, where it points to.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 00:57:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Tue, 17 Sep 2002, Bruce Momjian wrote:\n\n> Dave Page wrote:\n> > Which in this case is what puzzles me. We are only talking about a\n> > simple GUC variable after all - I don't know for sure, but I'm guessing\n> > it's not a huge effort to add one?\n>\n> Can we get agreement on that? A GUC for pg_xlog location? Much cleaner\n> than -X, doesn't have the problems of possible accidental use, and does\n> allow pg_xlog moving without symlinks, which some people don't like?\n>\n> If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.\n\nPersonally, I like the ability to define such at a command line level ...\n*especially* as it pertains to pointing to various directories ... I am\nagainst pulling the -X functionality out ... if you don't like it, don't\nuse it ... add the GUC variable option to the mix, but don't take away\nfunctionality ...\n\nHell, take a look at what you are saying above: because someone might\nforget to set -X, let's get rid of it in favor of a setting in a file that\nsomeone might forget to edit?\n\nEither format has the possibility of an error ... if you are so\nincompetent as to make that sort of mistake on a production server, it\nwon't matter if its a GUC variable, environment variable or commnd line\nargument, you will still make that mistake ...\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 22:28:52 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Tue, 17 Sep 2002, Bruce Momjian wrote:\n> \n> > Dave Page wrote:\n> > > Which in this case is what puzzles me. We are only talking about a\n> > > simple GUC variable after all - I don't know for sure, but I'm guessing\n> > > it's not a huge effort to add one?\n> >\n> > Can we get agreement on that? A GUC for pg_xlog location? Much cleaner\n> > than -X, doesn't have the problems of possible accidental use, and does\n> > allow pg_xlog moving without symlinks, which some people don't like?\n> >\n> > If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.\n> \n> Personally, I like the ability to define such at a command line level ...\n> *especially* as it pertains to pointing to various directories ... I am\n> against pulling the -X functionality out ... if you don't like it, don't\n> use it ... add the GUC variable option to the mix, but don't take away\n> functionality ...\n> \n> Hell, take a look at what you are saying above: because someone might\n> forget to set -X, let's get rid of it in favor of a setting in a file that\n> someone might forget to edit?\n> \n> Either format has the possibility of an error ... if you are so\n> incompetent as to make that sort of mistake on a production server, it\n> won't matter if its a GUC variable, environment variable or commnd line\n> argument, you will still make that mistake ...\n\nSorry, I don't see the logic here. Using postgresql.conf, you set it\nonce and it remains set until you change it again. With -X, you have to\nuse it every time. I think that's where the votes came from.\n\nYou argued that -X and GUC make sense, but why add -X when can get it\ndone at once in postgresql.conf. Also, consider changing the location\ndoes require moving the WAL files, so you already have this extra step. \nAdding to postgresql.conf is easy. I don't think you can just point it\nat a random empty directory on startup. Our goal was to reduce params\nto postmaster/postgres in favor of GUC, not add to them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 21:36:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Bruce Momjian wrote:\n\n> Sorry, I don't see the logic here. Using postgresql.conf, you set it\n> once and it remains set until you change it again. With -X, you have to\n> use it every time. I think that's where the votes came from.\n\nAh, so you are saying that you type out your full command line each and\nevery time you start up the server? I know, in my case, I have a shell\nscript setup that I edit my changes in so that I don't have to remember\n...\n\n> You argued that -X and GUC make sense, but why add -X when can get it\n> done at once in postgresql.conf. Also, consider changing the location\n> does require moving the WAL files, so you already have this extra step.\n> Adding to postgresql.conf is easy. I don't think you can just point it\n> at a random empty directory on startup. Our goal was to reduce params\n> to postmaster/postgres in favor of GUC, not add to them.\n\nI don't disagree that editing postgresql.conf is easy, but its not\nsomething that ppl would naturally thing of ... if I want to move a\ndirectory with most servers I run, I will generally do a man to find out\nwhat command options are required to do this change, and, if none are\nprovided, just create a god-forsaken symlink ...\n\nThe man page for postmaster should have something in it like:\n\n-X <directory> Specifies an alternate location for WAL files. Superseded\n by setting xlog_path in postmaster.conf\n\nHell, if you are going to remove -X because its 'easier to do it in\npostmaster.conf', you should be looking at removing *all* command line\nargs that are better represented in the postmaster.conf file ...\n\nThe only time that *I* use the postmaster.conf file is when I'm playing\nwith the various scan'ng options ... why?\n\nmars# ps aux | grep -- B\npgsql 133 0.0 0.0 77064 1512 con- S Mon10PM 3:21.15 /usr/local/bin/postmaster -B 8192 -N 512 -o -S 4096 -i -p 5432 -D/v1/pgsql (postgres)\npgsql 144 0.0 0.0 1097300 1372 ?? Is Mon10PM 0:06.04 /usr/local/pgsql/bin/postmaster -B 131072 -N 2048 -i -p 5433 -D/usr/local/pgsql/5433 -S (postgres)\n\nits nice to be able to do a simple ps to find out which process is which,\nand pointing where ... other then -D, I don't believe there is one option\nin there that I couldn't have set in the postmaster.conf file, but, then,\nto find out the various settings, I'd have to do ps to figure out where\nthe database files are stored, and then go look at the postmaster.conf\nfile to figure out what each are set to ...\n\nI have one server that has 10 instances running right now:\n\njupiter# ps ax | grep -- -B\n 373 ?? Ss 0:55.31 /usr/local/pgsql721/bin/postmaster -B 10240 -N 512 -i -p 5432 -D/v1/pgsql/5432 -S (postgres)\n 383 ?? Ss 0:11.78 /usr/local/pgsql/bin/postmaster -B 64 -N 16 -i -p 5434 -D/v1/pgsql/5434 -S (postgres)\n 394 ?? Ss 0:17.82 /usr/local/pgsql/bin/postmaster -B 1024 -N 256 -i -p 5437 -D/v1/pgsql/5437 -S (postgres)\n 405 ?? Ss 0:16.46 /usr/local/pgsql/bin/postmaster -B 256 -N 128 -i -p 5440 -D/v1/pgsql/5440 -S (postgres)\n 416 ?? Ss 0:10.93 /usr/local/pgsql/bin/postmaster -B 256 -N 128 -i -p 5449 -D/v1/pgsql/5449 -S (postgres)\n 427 ?? Ss 0:16.30 /usr/local/pgsql/bin/postmaster -B 2048 -N 256 -i -p 5443 -D/v1/pgsql/5443 -S (postgres)\n 438 ?? Ss 0:10.60 /usr/local/pgsql721/bin/postmaster -B 1024 -N 512 -i -p 5446 -D/v1/pgsql/5446 -S (postgres)\n88515 ?? Ss 0:10.05 /usr/local/pgsql/bin/postmaster -B 64 -N 16 -i -p 5433 -D/v1/pgsql/5433 -S (postgres)\n13029 pi S+ 0:00.00 grep -- -B\n 445 con- S 0:10.59 /usr/local/pgsql/mb/bin/postmaster -B 256 -N 128 -i -p 5448 -D/v1/pgsql/openacs4 (postgres)\n 460 con- S 0:10.40 /usr/local/pgsql/bin/postmaster -B 64 -N 16 -i -p 5436 -D/v1/pgsql/electrichands (postgres)\n\nAll the information for each are right there in front of me ... I don't\nhave to go through 10 postmaster.conf files to figure out anything ...\n\nthe GUC value should override the command line option, agreed ... but the\nability to use the command line should not be removed just because some\nppl aren't competent enough to adjust their startup scripts if they change\ntheir system ...\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 23:24:39 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> \n> > Sorry, I don't see the logic here. Using postgresql.conf, you set it\n> > once and it remains set until you change it again. With -X, you have to\n> > use it every time. I think that's where the votes came from.\n> \n> Ah, so you are saying that you type out your full command line each and\n> every time you start up the server? I know, in my case, I have a shell\n> script setup that I edit my changes in so that I don't have to remember\n> ...\n\nYep, but your central place for changes should be postgresql.conf, not\nthe command line. If we tried go get every GUC param on the command\nline it would be unusable.\n\n\n> > You argued that -X and GUC make sense, but why add -X when can get it\n> > done at once in postgresql.conf. Also, consider changing the location\n> > does require moving the WAL files, so you already have this extra step.\n> > Adding to postgresql.conf is easy. I don't think you can just point it\n> > at a random empty directory on startup. Our goal was to reduce params\n> > to postmaster/postgres in favor of GUC, not add to them.\n> \n> I don't disagree that editing postgresql.conf is easy, but its not\n> something that ppl would naturally thing of ... if I want to move a\n> directory with most servers I run, I will generally do a man to find out\n> what command options are required to do this change, and, if none are\n> provided, just create a god-forsaken symlink ...\n> \n> The man page for postmaster should have something in it like:\n> \n> -X <directory> Specifies an alternate location for WAL files. Superseded\n> by setting xlog_path in postmaster.conf\n> \n> Hell, if you are going to remove -X because its 'easier to do it in\n> postmaster.conf', you should be looking at removing *all* command line\n> args that are better represented in the postmaster.conf file ...\n\nWell, those other options are things you may want to change frequently. 
\nThe xlog directory isn't going to be moving around, we hope. We have\nthe flags there only so they can be easily adjusted for testing, I\nthink, and in fact there has been discussion about removing more of\nthem.\n\n> its nice to be able to do a simple ps to find out which process is which,\n> and pointing where ... other then -D, I don't believe there is one option\n> in there that I couldn't have set in the postmaster.conf file, but, then,\n> to find out the various settings, I'd have to do ps to figure out where\n> the database files are stored, and then go look at the postmaster.conf\n> file to figure out what each are set to ...\n\nYea, but you aren't going to be needing to know the xlog directory that\nway, will you?\n\nFact is, xlog is seldom moved, and symlinks do it fine now. The GUC was\na compromise for people who didn't like symlinks. If we are getting\npushback from GUC we may as well just drop the GUC idea and stick with\nsymlinks. I think that's how the vote went last time and it seems to be\nheading in that direction again.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 22:32:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n>> Sorry, I don't see the logic here. Using postgresql.conf, you set it\n>> once and it remains set until you change it again. With -X, you have to\n>> use it every time. I think that's where the votes came from.\n\n> Ah, so you are saying that you type out your full command line each and\n> every time you start up the server?\n\nLet's put it this way: would you be in favor of adding a\n\t\t--please-don't-wipe-my-database-directory\nswitch to the postmaster? And if you forget to specify that every time\nyou start the postmaster, we do an instant \"rm -rf $PGDATA\"?\n\nDoesn't seem like a good idea, does it?\n\nWell, specifying the XLOG location on the command line or as an\nenvironment variable is just about as deadly as the above loaded-gun-\npointed-at-foot scenario. You start the postmaster with the wrong\ncontext, even once, it's sayonara to your data integrity.\n\nThe point of insisting that the XLOG location be recorded *inside*\nthe data directory is to prevent simple admin errors from being\ncatastrophic. Do you remember when we regularly saw trouble reports\nfrom people who'd corrupted their database indexes by starting the\npostmaster with different LOCALE environments at different times? We\nfixed that by forcing the locale collation order to be specified inside\nthe database directory (in pg_control, but the details are not important\nhere), rather than allowing it to be taken from postmaster environment.\n\nIf we allow XLOG location to be determined by a postmaster switch or\nenvironment variable, then we *will* be opening the door for people\nto shoot themselves in the foot just like they used to do with locale.\n\nI learned something from those problems, and I do not intend to make\nthe same mistake again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 23:25:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile? "
},
{
"msg_contents": "On Wed, 18 Sep 2002, Bruce Momjian wrote:\n\n> Yea, but you aren't going to be needing to know the xlog directory that\n> way, will you?\n\nWhy not? Who are you to tell me how my scripts work, or how they get\ntheir information? I have a script that runs to tell me how much disk\nspace each instance is using up, that parses the ps output for the -D\nargument ... having -X there would allow me to parse for that as well and,\nif it was in the ps output, add that appropriately into the calculations\n...\n\nMy point is, the functionality is there, and should be documented properly\n... encourage ppl to use the GUC setting in postmaster.conf, but just\nbecause you can't grasp that some of us *like* to use command line args,\ndon't remove such functionality ...\n\n\n",
"msg_date": "Thu, 19 Sep 2002 01:48:33 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] PGXLOG variable worthwhile?"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> \n> > Yea, but you aren't going to be needing to know the xlog directory that\n> > way, will you?\n> \n> Why not? Who are you to tell me how my scripts work, or how they get\n> their information? I have a script that runs to tell me how much disk\n> space each instance is using up, that parses the ps output for the -D\n> argument ... having -X there would allow me to parse for that as well and,\n> if it was in the ps output, add that appropriately into the calculations\n> ...\n> \n> My point is, the functionality is there, and should be documented properly\n> ... encourage ppl to use the GUC setting in postmaster.conf, but just\n> because you can't grasp that some of us *like* to use command line args,\n> don't remove such functionality ...\n\nYou ask for a vote and see if you can get votes to add -X. We had that\nvote once already. We do make decisions on what people should use. If\nnot, we would be as hard to manage as Oracle.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 00:50:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> >> Sorry, I don't see the logic here. Using postgresql.conf, you set it\n> >> once and it remains set until you change it again. With -X, you have to\n> >> use it every time. I think that's where the votes came from.\n>\n> > Ah, so you are saying that you type out your full command line each and\n> > every time you start up the server?\n>\n> Let's put it this way: would you be in favor of adding a\n> \t\t--please-don't-wipe-my-database-directory\n> switch to the postmaster? And if you forget to specify that every time\n> you start the postmaster, we do an instant \"rm -rf $PGDATA\"?\n>\n> Doesn't seem like a good idea, does it?\n>\n> Well, specifying the XLOG location on the command line or as an\n> environment variable is just about as deadly as the above loaded-gun-\n> pointed-at-foot scenario. You start the postmaster with the wrong\n> context, even once, it's sayonara to your data integrity.\n>\n> The point of insisting that the XLOG location be recorded *inside*\n> the data directory is to prevent simple admin errors from being\n> catastrophic. Do you remember when we regularly saw trouble reports\n> from people who'd corrupted their database indexes by starting the\n> postmaster with different LOCALE environments at different times? We\n> fixed that by forcing the locale collation order to be specified inside\n> the database directory (in pg_control, but the details are not important\n> here), rather than allowing it to be taken from postmaster environment.\n>\n> If we allow XLOG location to be determined by a postmaster switch or\n> environment variable, then we *will* be opening the door for people\n> to shoot themselves in the foot just like they used to do with locale.\n>\n> I learned something from those problems, and I do not intend to make\n> the same mistake again.\n\nExcept that you are ... 
you are assuming that someone is going to edit\ntheir postmaster.conf file correctly ... if you want to avoid making the\nsame mistake again, there should be some sort of 'tag' that associates the\nfiles in the XLOG directory with the data directories themselves,\nregardless of *how* the XLOG directory is referenced ... something that\nlinks them at a level that an administrator *can't* make a mistake about\n... all forcing the use of the postmaster.conf file is doing is reducing\noptions, it isn't making sure that the XLOG directory pointed to is\nappropriate for the data directory itself ...\n\n\n",
"msg_date": "Thu, 19 Sep 2002 01:55:24 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile? "
},
{
"msg_contents": "I think Marc made a pretty good case about the use of command line\narguments but I think I have to vote with Tom. Many of the command line\narguments you seem to be using do sorta make sense to have for easy\nreference or to help validate your runtime environment for each\ninstance. The other side of that is, I completely agree with Tom in the\nit's a very dangerous option. It would be begging for people to shoot\nthemselves with it. Besides, just as you can easily parse the command\nline, you can also parse the config file to out that information. Plus,\nit really should be a very seldom used option. When it is used, it's\ndoubtful that you'll need the same level of dynamic control that you get\nby using command line options.\n\nAs a rule of thumb, if an option is rarely used or is very dangerous if\nimproperly used, I do think it should be in a configuration file to\ndiscourage adhoc use.\n\nLet's face it, specify XLOG location is hardly something people need to\nbe doing on the fly.\n\nMy vote is config file it and no command line option!\n\nGreg\n\n\nOn Wed, 2002-09-18 at 23:50, Bruce Momjian wrote:\n> Marc G. Fournier wrote:\n> > On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> > \n> > > Yea, but you aren't going to be needing to know the xlog directory that\n> > > way, will you?\n> > \n> > Why not? Who are you to tell me how my scripts work, or how they get\n> > their information? I have a script that runs to tell me how much disk\n> > space each instance is using up, that parses the ps output for the -D\n> > argument ... having -X there would allow me to parse for that as well and,\n> > if it was in the ps output, add that appropriately into the calculations\n> > ...\n> > \n> > My point is, the functionality is there, and should be documented properly\n> > ... 
encourage ppl to use the GUC setting in postmaster.conf, but just\n> > because you can't grasp that some of us *like* to use command line args,\n> > don't remove such functionality ...\n> \n> You ask for a vote and see if you can get votes to add -X. We had that\n> vote once already. We do make decisions on what people should use. If\n> not, we would be as hard to manage as Oracle.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster",
"msg_date": "19 Sep 2002 08:47:14 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] PGXLOG variable worthwhile?"
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> My point is, the functionality is there, and should be documented properly\n> ... encourage ppl to use the GUC setting in postmaster.conf, but just\n> because you can't grasp that some of us *like* to use command line args,\n> don't remove such functionality ...\n\nTop secret information: If it's made a GUC variable, it's automatically a\ncommand-line option.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 19 Sep 2002 23:37:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] PGXLOG variable worthwhile?"
},
{
"msg_contents": "On 19 Sep 2002, Greg Copeland wrote:\n\n> I think Marc made a pretty good case about the use of command line\n> arguments but I think I have to vote with Tom. Many of the command line\n> arguments you seem to be using do sorta make sense to have for easy\n> reference or to help validate your runtime environment for each\n> instance. The other side of that is, I completely agree with Tom in the\n> it's a very dangerous option. It would be begging for people to shoot\n> themselves with it. Besides, just as you can easily parse the command\n> line, you can also parse the config file to out that information. Plus,\n> it really should be a very seldom used option. When it is used, it's\n> doubtful that you'll need the same level of dynamic control that you get\n> by using command line options.\n> \n> As a rule of thumb, if an option is rarely used or is very dangerous if\n> improperly used, I do think it should be in a configuration file to\n> discourage adhoc use.\n> \n> Let's face it, specify XLOG location is hardly something people need to\n> be doing on the fly.\n> \n> My vote is config file it and no command line option!\n\nI'd go one step further, and say that it should not be something a user \nshould do by hand, but there should be a script to do it, and it would \nwork this way:\n\nIf there is a DIRECTORY called pg_xlog in $PGDATA, then use that. If \nthere is a FILE called pg_xlog in $PGDATA, then that file will have the \nlocation of the directory stored in it. That file will be created when \nthe move_pgxlog script is run, and that script will be have all the logic \ninside it to determine how to move the pg_xlog directory safely, i.e. \nmaking sure there's room on the destination, setting permissions, etc...\n\nthat way, if you're dumb as a rock or smart as a rocket scientist, you do \nit the same way, and the script makes sure you don't scram your database \nin a not too bright moment. 
No postgresql.conf var, no command line \nswitch, a file or directory, and a script.\n\nSeem workable? Or am I on crack?\n\n",
"msg_date": "Tue, 24 Sep 2002 11:33:18 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] PGXLOG variable worthwhile?"
}
] |
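The thread above repeatedly leans on parsing `ps` output for the postmaster's `-D` argument (Marc's disk-usage script does exactly that). His actual script is not shown in the thread; the following is only a minimal, hypothetical sketch of that parsing approach:

```python
import re

# Sample ps output lines, taken from the thread; in a real script these
# would come from running `ps ax` via subprocess.
ps_lines = [
    "373 ?? Ss 0:55.31 /usr/local/pgsql721/bin/postmaster -B 10240 -N 512 -i -p 5432 -D/v1/pgsql/5432 -S (postgres)",
    "383 ?? Ss 0:11.78 /usr/local/pgsql/bin/postmaster -B 64 -N 16 -i -p 5434 -D/v1/pgsql/5434 -S (postgres)",
    "13029 pi S+ 0:00.00 grep -- -B",
]

def postmaster_dirs(lines):
    """Extract the -D data-directory argument from each postmaster line.

    Lines that are not postmaster processes (e.g. the grep itself) are
    skipped, as are lines without a -D argument.
    """
    dirs = []
    for line in lines:
        if "postmaster" not in line:
            continue
        m = re.search(r"-D\s*(\S+)", line)  # handles both -Dpath and -D path
        if m:
            dirs.append(m.group(1))
    return dirs

print(postmaster_dirs(ps_lines))  # ['/v1/pgsql/5432', '/v1/pgsql/5434']
```

A per-instance disk-usage report would then just sum the sizes under each returned directory, which is the kind of script Marc describes; the same pattern would pick up a `-X` argument if one appeared in the `ps` output.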
[
{
"msg_contents": "\n> > For numbers there is probably only the solution to invent an\n> > \"anynumber\" generic type.\n> \n> Actually, I had been toying with the notion of doing the following:\n> \n> 1. A numeric literal is initially typed as the smallest type that will\n> hold it in the series int2, int4, int8, numeric (notice NOT float8).\n\nYes, that sounds like a good plan for all scenarios that can follow !\n\n> 2. Allow implicit up-coercion int2->int4->int8->numeric->float4->float8,\n> but down-coercions aren't implicit except for assignment.\n\nHow about int2->int4->int8->numeric->float4->float8->numeric ?\nThat would also allow an upward path from float8.\n\n> 3. Eliminate most or all of the cross-numeric-type operators \n> (eg, there is no reason to support int2+int4 as a separate operator).\n\nYes.\n\n> With this approach, an expression like \"int4var = 42\" would be initially\n> typed as int4 and int2, but then the constant would be coerced to int4\n> because int4=int4 is the closest-match operator. (int2=int2 would not\n> be considered because down-coercion isn't implicitly invokable.) \n\nIt would fix the constants issue, yes. How about where int2col=int4col \nand it's indexability of int2col though ?\n\n> Also\n> we get more nearly SQL-standard behavior in expressions that combine\n> numeric with float4/float8: the preferred type will be float, which\n> accords with the spec's notions of exact numeric vs. \n> approximate numeric.\n\nI do not understand the standard here.\nEspecially the following would seem awkward if that would switch to approximate:\nset numericcol = numericcol * float4col; \n\nAndreas\n",
"msg_date": "Tue, 17 Sep 2002 11:00:07 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> 2. Allow implicit up-coercion int2->int4->int8->numeric->float4->float8,\n>> but down-coercions aren't implicit except for assignment.\n\n> How about int2->int4->int8->numeric->float4->float8->numeric ?\n> That would also allow an upward path from float8.\n\nUh, what? That seems logically impossible to me ... or at least it\nwould reintroduce exactly the problem we need to get away from: casts\nbetween float4, float8, numeric would be considered equally good in\neither direction, creating ambiguity about which operator to use.\nHow are you envisioning it would work exactly?\n\nPerhaps I should clarify what I had in mind: because the parser only\nconsiders one level of type coercion when choosing a function or\nassigning to a result column, it's actually necessary to have all thirty\ncast combinations between the six numeric types available in pg_cast.\nMy notation \"int2->int4->int8->numeric->float4->float8\" is intended to\nimply that of the thirty, these would be marked as implicitly coercible:\n\nint2->int4\nint2->int8\nint2->numeric\nint2->float4\nint2->float8\nint4->int8\nint4->numeric\nint4->float4\nint4->float8\nint8->numeric\nint8->float4\nint8->float8\nnumeric->float4\nnumeric->float8\nfloat4->float8\n\nwhile the fifteen reverse coercions would be assignment-only.\n\nIf we allow any circularity then we will have pairs of types with both\ncast pathways marked as implicit, which will leave the parser unable to\nchoose which operator to use. This is exactly why \"numeric = float8\"\nhas failed in past versions: there are two alternatives that are equally\neasy to reach.\n\n\n> It would fix the constants issue, yes. How about where int2col=int4col \n> and it's indexability of int2col though ?\n\nSee my other response. 
The current scheme of using a cross-datatype\noperator isn't helpful for indexing such cases anyway...\n\n>> Also\n>> we get more nearly SQL-standard behavior in expressions that combine\n>> numeric with float4/float8: the preferred type will be float, which\n>> accords with the spec's notions of exact numeric vs. \n>> approximate numeric.\n\n> I do not understand the standard here.\n> Especially the following would seem awkward if that would switch to\n> approximate:\n>\tset numericcol = numericcol * float4col; \n\nWell, the spec's notion is that combining an \"exact\" number and an\n\"approximate\" number must yield an \"approximate\" result. This logic\nis hard to argue with, even though in our implementation it would\nseem to make more sense for numeric to be the top of the hierarchy\non range and precision grounds.\n\nNote that if you write, say,\n\tset numericcol = numericcol * 3.14159;\nmy proposal would do the \"right thing\" since the constant would be typed\nas numeric to start with and would stay that way. To do what you want\nwith a float variable, it'd be necessary to write\n\tset numericcol = numericcol * float4col::numeric;\nwhich is sort of ugly; but no uglier than\n\tset float4col = float4col * numericcol::float4;\nwhich is what you'd have to write if the system preferred numeric and\nyou wanted the other behavior.\n\nI too have been thinking for a long time that I didn't like following\nthe spec's lead on this point; but I am now starting to think that it's\nnot all that bad. This approach to handling constants is *much* cleaner\nthan what we've done in the past, or even any of the unimplemented\nproposals that I can recall. 
The behavior you'd get with combinations\nof float and numeric variables is, well, debatable; from an\nimplementor's point of view preferring a numeric result makes sense,\nbut it's much less clear that users would automatically think the same.\nGiven the spec's position, I am starting to think that preferring float\nis the right thing to do.\n\nBTW, I am thinking that we don't need the notion of \"preferred type\" at\nall in the numeric category if we use this approach. I have not worked\nthrough the details for the other type categories, but perhaps if we\nadopt similar systems of one-way implicit promotions in each category,\nwe could retire \"preferred types\" altogether --- which would let us get\nrid of hardwired type categories, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 10:05:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Note that if you write, say,\n> \tset numericcol = numericcol * 3.14159;\n> my proposal would do the \"right thing\" since the constant would be typed\n> as numeric to start with and would stay that way. To do what you want\n> with a float variable, it'd be necessary to write\n> \tset numericcol = numericcol * float4col::numeric;\n> which is sort of ugly; but no uglier than\n> \tset float4col = float4col * numericcol::float4;\n> which is what you'd have to write if the system preferred numeric and\n> you wanted the other behavior.\n\nI need a clarification. In the non-assignment case, does:\n\n\tWHERE numericcol = numericcol * 3.14159\n\nevaluate \"numericcol * 3.14159\" as a numeric?\n\nAnd does:\n\n\tWHERE 5.55 = numericcol * 3.14159\n\nevaluate \"numericcol * 3.14159\" as a numeric too?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 15:14:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I need a clarification. In the non-assignment case, does:\n> \tWHERE numericcol = numericcol * 3.14159\n> evaluate \"numericcol * 3.14159\" as a numeric?\n\nYup (given my proposed changes that is).\n\n> And does:\n> \tWHERE 5.55 = numericcol * 3.14159\n> evaluate \"numericcol * 3.14159\" as a numeric too?\n\nYup. The context does not matter: when we have foo * bar, we are going\nto decide which kind of * operator is meant without regard to\nsurrounding context. It's very much a bottom-up process, and has to be.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 15:26:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
}
] |
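The literal-typing and implicit up-coercion rules Tom proposes in the thread above can be sketched as a toy model. This is only an illustration of the proposal as described (rule 1 for literals, one-way implicit promotion along int2 → int4 → int8 → numeric → float4 → float8), not PostgreSQL's actual parser code:

```python
# Toy model of the proposed casting rules (illustration only): a numeric
# literal gets the smallest type that holds it, and an operator's operand
# types are unified by implicit up-coercion along the ladder; implicit
# down-coercion is never allowed.
LADDER = ["int2", "int4", "int8", "numeric", "float4", "float8"]

def literal_type(text):
    """Type a literal per rule 1: smallest of int2/int4/int8/numeric."""
    try:
        v = int(text)
    except ValueError:
        return "numeric"          # exact decimal literal, NOT float8
    if -2**15 <= v < 2**15:
        return "int2"
    if -2**31 <= v < 2**31:
        return "int4"
    if -2**63 <= v < 2**63:
        return "int8"
    return "numeric"

def unify(t1, t2):
    """Resolve an operator's common type: coerce the lower type upward."""
    return max(t1, t2, key=LADDER.index)

print(unify(literal_type("42"), "int4"))   # int4var = 42      -> int4
print(unify("numeric", "float4"))          # numeric * float4  -> float4
```

Note how the second example reproduces the spec's exact-vs-approximate behavior discussed in the thread: combining numeric with a float yields a float, because float4 sits above numeric on the implicit-coercion ladder.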
[
{
"msg_contents": "Hi all.\n\nI have read the last version of PostgreSQL (7.3 beta) and found that the second version of CREATE TYPE is very interesting.\n\nSo we can create a type that look like a RECORD.\nFor example:\nCREATE TYPE adress AS (number int, street text, country VARCHAR);\n\nBut can i use this type in a table definition like this:\nCREATE TABLE person (his_name VARCHAR, his_adress adress);\n\nSomeone can answer to my question.\n\nThanks for your help.\n\nJérôme Chochon.\n\n\n\n\n\n\n\n\n\nHi all.\n \nI have read the last version of PostgreSQL (7.3 \nbeta) and found that the second version of CREATE TYPE is very \ninteresting.\n \nSo we can create a type that look like a \nRECORD.\nFor example:\nCREATE TYPE adress AS (number int, street text, \ncountry VARCHAR);\n \nBut can i use this type in a table definition \nlike this:\nCREATE TABLE person (his_name VARCHAR, \nhis_adress adress);\n \nSomeone can answer to my question.\n \nThanks for your help.\n \nJérôme Chochon.",
"msg_date": "Tue, 17 Sep 2002 11:17:14 +0200",
"msg_from": "\"Jerome Chochon\" <jerome.chochon@ensma.fr>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.3: help on new CREATE TYPE"
},
{
"msg_contents": "Hi Jerome,\n\nThe RECORD type is used for writing stored procedures and functions that\nreturn sets.\n\neg. CREATE FUNCTION foo() RETURNS setof adress\nAS '...';\n\nSort of thing...\n\nChris\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Jerome Chochon\nSent: Tuesday, 17 September 2002 5:17 PM\nTo: pgsql-hackers@postgresql.org; pgsql-announce@postgresql.org;\npgsql-general@postgresql.org\nSubject: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE\n\n\nHi all.\n\nI have read the last version of PostgreSQL (7.3 beta) and found that the\nsecond version of CREATE TYPE is very interesting.\n\nSo we can create a type that look like a RECORD.\nFor example:\nCREATE TYPE adress AS (number int, street text, country VARCHAR);\n\nBut can i use this type in a table definition like this:\nCREATE TABLE person (his_name VARCHAR, his_adress adress);\n\nSomeone can answer to my question.\n\nThanks for your help.\n\nJ�r�me Chochon.\n\n",
"msg_date": "Tue, 17 Sep 2002 17:25:24 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE"
},
{
"msg_contents": "Hi all,\n\nI have a problem with inserting one milling records into a table using a\nfunction. This is for testing. The backend crashes on that every time,\nalthough the error messages seem to be different. Can I post a full\ndescription here or should that go to pgsql-general?\n\nThanks.\n\nBest Regards,\nMichael Paesold\n\n",
"msg_date": "Tue, 17 Sep 2002 12:02:33 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Backend crash"
},
{
"msg_contents": "Sorry if my english is not very good. ;-).\n\nWhen I say that the second form of CREATE TYPE allow you to make RECORD type\nlike RECORD, i don't want to speak about the record in PlPgsql but RECORD\nfrom programming language like ADA or C (typedef struct).\n\nSo the real question is:\nCan I use this new type like other user-type ?\nCREATE TABLE person (his_name VARCHAR, his_adress adress);\n...where adress is CREATE TYPE adress AS (number int, street text, country\nVARCHAR);\n\nThanks for your reply ?\n\n\n\n\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Jerome Chochon\" <jerome.chochon@ensma.fr>;\n<pgsql-hackers@postgresql.org>; <pgsql-announce@postgresql.org>;\n<pgsql-general@postgresql.org>\nSent: Tuesday, September 17, 2002 11:25 AM\nSubject: RE: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE\n\n\n> Hi Jerome,\n>\n> The RECORD type is used for writing stored procedures and functions that\n> return sets.\n>\n> eg. CREATE FUNCTION foo() RETURNS setof adress\n> AS '...';\n>\n> Sort of thing...\n>\n> Chris\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Jerome Chochon\n> Sent: Tuesday, 17 September 2002 5:17 PM\n> To: pgsql-hackers@postgresql.org; pgsql-announce@postgresql.org;\n> pgsql-general@postgresql.org\n> Subject: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE\n>\n>\n> Hi all.\n>\n> I have read the last version of PostgreSQL (7.3 beta) and found that the\n> second version of CREATE TYPE is very interesting.\n>\n> So we can create a type that look like a RECORD.\n> For example:\n> CREATE TYPE adress AS (number int, street text, country VARCHAR);\n>\n> But can i use this type in a table definition like this:\n> CREATE TABLE person (his_name VARCHAR, his_adress adress);\n>\n> Someone can answer to my question.\n>\n> Thanks for your help.\n>\n> J�r�me Chochon.\n>\n\n",
"msg_date": "Tue, 17 Sep 2002 12:47:39 +0200",
"msg_from": "\"Jerome Chochon\" <jerome.chochon@ensma.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE"
},
{
"msg_contents": "\"Jerome Chochon\" <jerome.chochon@ensma.fr> writes:\n> Can I use this new type like other user-type ?\n> CREATE TABLE person (his_name VARCHAR, his_adress adress);\n> ...where adress is CREATE TYPE adress AS (number int, street text, country\n> VARCHAR);\n\nNot at the moment, though that might be an interesting direction to\npursue in future releases. At present, the only thing such a type is\nuseful for is to define the argument or result type of a function that\ntakes or returns records.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 10:22:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE "
},
{
"msg_contents": "> When I say that the second form of CREATE TYPE allow you to make\n> RECORD type\n> like RECORD, i don't want to speak about the record in PlPgsql but RECORD\n> from programming language like ADA or C (typedef struct).\n>\n> So the real question is:\n> Can I use this new type like other user-type ?\n> CREATE TABLE person (his_name VARCHAR, his_adress adress);\n> ...where adress is CREATE TYPE adress AS (number int, street text, country\n> VARCHAR);\n\nNo.\n\nBy the way - the pgsql-announce list is not for asking quetsions in!\n\nChris\n\n",
"msg_date": "Wed, 18 Sep 2002 09:17:47 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE"
},
{
"msg_contents": "\nIllustra did a very nice job with \"composite types\" which\ncorrespond to these record types. The composite types\nwere able to be used as a column type as jerome describes.\nThe subcolumns were accessed with dots. This gave us\n\tschema.table.column.attribute\nwhere of course attribute could itself be a composite type....\nWell, ok, it had some drawbacks, too.\n\nIf we ever are serious about implementing this I would help\nwith discussing and/or writing the specs. I can put together a nice spec.\nWhen I get a break on my book project, I might just write it up anyway.\n\nelein\nelein@norcov.com\n\nPS: Everyone please forgive me for reading list mail late and out of order...\nI am in awe of anyone keeping up.\n\nOn Tuesday 17 September 2002 07:22, Tom Lane wrote:\n> \"Jerome Chochon\" <jerome.chochon@ensma.fr> writes:\n> > Can I use this new type like other user-type ?\n> > CREATE TABLE person (his_name VARCHAR, his_adress adress);\n> > ...where adress is CREATE TYPE adress AS (number int, street text,\n> > country VARCHAR);\n>\n> Not at the moment, though that might be an interesting direction to\n> pursue in future releases. At present, the only thing such a type is\n> useful for is to define the argument or result type of a function that\n> takes or returns records.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n",
"msg_date": "Mon, 23 Sep 2002 19:10:48 -0700",
"msg_from": "elein <elein@norcov.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 7.3: help on new CREATE TYPE"
}
] |
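An annotation on the thread above: the composite-type-as-column usage Jerome asks about was rejected by PostgreSQL 7.3, exactly as Tom and Chris say, but later releases (8.0 onward) accept it, essentially in the shape elein describes from Illustra. A minimal sketch, assuming a modern server, using the names from the messages above:

```sql
-- Composite type used as a column type (rejected at the time of this
-- thread; accepted in PostgreSQL 8.0 and later).
CREATE TYPE adress AS (number int, street text, country varchar);

CREATE TABLE person (
    his_name   varchar,
    his_adress adress
);

INSERT INTO person VALUES ('Jerome', ROW(1, 'Rue de la Paix', 'France'));

-- Subcolumn access uses the dotted notation elein describes; the
-- parentheses keep the parser from reading his_adress as a table name.
SELECT his_name, (his_adress).street FROM person;
```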
[
{
"msg_contents": "Hi,\n\nI just talked to Sebastian again and we face another problem. The\nsoftware he's porting to PostgreSQL calls SQLProcedureColumns to get the\ninfo about the input columns and the result. But the problem is that the\nfunction in question returns an unnamed cursor. Before we start porting\nthe procedure/function we of course have to figure out how to tell the\napp that the procedure will return a cursor, but we couldn't find\nanything in the odbc specs.\n\nAs I do not have access to the MS SQL procedure as it is now I cannot\ntry anything myself, but I'm willing to act as a channel for Sebastian\nto talk to you. The matter of the fact is that I never saw a function\nreturning a cursor on PostgreSQL so far.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 17 Sep 2002 16:06:26 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "One more problem with odbc driver"
}
] |
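For the archive: a cursor-returning function was expressible in the PostgreSQL of this era through PL/pgSQL's refcursor type. A hedged sketch only — the function and table names are invented, and this shows the server side, not how the ODBC driver would report it through SQLProcedureColumns:

```sql
-- Sketch of a cursor-returning PL/pgSQL function (names are invented).
CREATE FUNCTION get_persons() RETURNS refcursor AS '
DECLARE
    c refcursor;
BEGIN
    OPEN c FOR SELECT * FROM person;
    RETURN c;   -- hands the cursor''s name back to the caller
END;
' LANGUAGE 'plpgsql';

-- The caller stays inside a transaction and FETCHes by the returned name:
BEGIN;
SELECT get_persons();
-- FETCH ALL FROM "<name returned by the SELECT above>";
COMMIT;
```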
[
{
"msg_contents": "Having not seen anyone asking about the progress on the 7.3beta RPMset, I \nthought I would give a statement as to where things stand.\n\nI am waiting the result of the pg_dump from 7.2.x to 7.3 restore discussion. \nThe structure of the entire packaging depends upon knowing how the upgrade \nwill be performed, since the rest of the packaging is just getting a good \nbuild, and excising the gborged clients, which will then have to have their \nown RPMs built. But I'll get the core built first, then I'll work on the \nclients.\n\nI have a basic build running, but it's not releasable. I haven't had time to \ngo through it with the properly fine-toothed comb that I want to as yet. I \nwould expect to be able to release an RPMset for beta 2 if that is a week or \ntwo off.\n\nI'll try to keep everyone who cares updated periodically.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 17 Sep 2002 11:51:39 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "RPMS for 7.3 beta."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> I have a basic build running, but it's not releasable. I haven't had time to\n> go through it with the properly fine-toothed comb that I want to as yet. I \n> would expect to be able to release an RPMset for beta 2 if that is a week or \n> two off.\n\nSounds good. I think the earliest we could be ready for beta2 is the\nend of this week; sometime next week may be more realistic.\n\nGiven that we'll be forcing an initdb for beta2 anyway, those who use\nRPMs may be just as happy to have missed beta1.\n\n> I am waiting the result of the pg_dump from 7.2.x to 7.3 restore discussion.\n\nRight. We clearly have to support loading of 7.2 dumps; the only issue\nin my mind is exactly how we kluge that up ;-). I just talked to Bruce\nabout this a little bit, and we came to the conclusion that there are\ntwo plausible-looking paths:\n\n1. Relax CREATE LANGUAGE to accept either LANGUAGE_HANDLER or OPAQUE as\nthe datatype of the function (ie, make it work more like CREATE TRIGGER\ndoes).\n\n2. Hack CREATE LANGUAGE so that if it's pointed at an OPAQUE-returning\nfunction, it actually updates the recorded return type of the function\nin pg_proc to say LANGUAGE_HANDLER.\n\nIf we go with #1 we're more or less admitting that we have to support\nOPAQUE forever, I think. If we go with #2, then dumps out of 7.3 or\nlater would be OPAQUE-free, and we could eventually remove OPAQUE a few\nrelease cycles down the road. So even though #2 looks mighty ugly,\nI am leaning in that direction.\n\nWhichever way we jump, I think the same behavior should be adopted for\nall three contexts where OPAQUE is relevant: language handlers,\ntriggers, and user-defined-datatype I/O functions. 
Either we accept\nOPAQUE forever, or we proactively fix the function declarations when\nan old dump is loaded.\n\nAnother interesting thought is that if we do the OPAQUE-to-HANDLER\nupdate thing, we could at the same time coerce the stored path for\nthe PL's shared library into the preferred '$libdir/foo' format,\nrather than the absolute-path form it's likely to have if we're dealing\nwith a pre-7.2 dump. This would not help anything immediately (if you\ngot past the CREATE FUNCTION then you gave a valid shlib path) but it'd\nvery possibly save people trouble down the road.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 15:59:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta. "
},
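Concretely, the two paths Tom sketches differ only in what the server does with the handler declaration a reloaded dump contains. A rough illustration (the exact quoting varies by pg_dump version):

```sql
-- What a pre-7.3 dump contains: the handler is declared OPAQUE.
CREATE FUNCTION plpgsql_call_handler() RETURNS opaque
    AS '$libdir/plpgsql' LANGUAGE 'C';
CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
    HANDLER plpgsql_call_handler;

-- The 7.3-native declaration uses the new pseudo-type instead:
CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler
    AS '$libdir/plpgsql' LANGUAGE 'C';
```

Path #1 makes CREATE LANGUAGE accept the first form as-is; path #2 accepts it but rewrites the pg_proc entry so subsequent dumps emit the second form.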
{
"msg_contents": "On Tuesday 17 September 2002 03:59 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > as yet. I would expect to be able to release an RPMset for beta 2 if\n> > that is a week or two off.\n\n> Given that we'll be forcing an initdb for beta2 anyway, those who use\n> RPMs may be just as happy to have missed beta1.\n\nHmmm. Any idea if any more initdb forcings are going to happen? :-)\n\n> > I am waiting the result of the pg_dump from 7.2.x to 7.3 restore\n> > discussion.\n\n> Right. We clearly have to support loading of 7.2 dumps; the only issue\n> in my mind is exactly how we kluge that up ;-). I just talked to Bruce\n> about this a little bit, and we came to the conclusion that there are\n> two plausible-looking paths:\n\n> Comments?\n\n From a user/packager viewpoint: the exact mechanics on the internal level, \nwhile nice to know (so that I know what to look for in bug reports), are \nrather irrelevant when it comes to 'how do I package?'. What I am looking \nat is whether the user will have to run 7.3's pg_dump in order to migrate \nolder data. If so I, and Oliver, will have to kludge up dependencies and \nlinkages in ways that I'm not happy with, but can do if need be. And \nmigration is 'need be' if ever there were 'need be'.\n\nI think that I will be able to just build a 'postgresql-olddump' package or \nsimilar that contains 7.3's pg_dump in a 7.2.2-friendly form, and let the \nvarious distributors worry about building that for older system libraries. \n:-) This is just a possibility -- it may not be nearly as hard as I fear it \nwill be -- best case is I do virtually nothing and let people upgrade the \npostgresql-libs and the main package (which includes pg_dump anyway), leaving \nthe existing postgresql-server package in place. They then dump, erase the \nold server package, and install the new server package. I have disabled rpm \nupgrades for the server subpackage as of 7.2.2, so that portion I know is \ndoable. 
I'll just have to try it. I may be overanalyzing the situation. :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 17 Sep 2002 16:17:45 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> ... What I am looking \n> at is whether the user will have to run 7.3's pg_dump in order to migrate \n> older data.\n\nAFAIK this is not *necessary*, though it may be *helpful*. Aside from\nthe OPAQUE issue, which we will fix one way or another, I am aware of\nthese issues for loading a pre-7.3 dump:\n\n* A reloaded dump will fail to GRANT EXECUTE TO PUBLIC on functions,\n likewise fail to GRANT USAGE TO PUBLIC on procedural languages.\n This may not bother some people, but for those it does bother,\n it's not that hard to issue the GRANTs manually after loading the dump.\n\n* A reloaded dump will not create dependencies between serial columns\n and sequence objects, nor between triggers and foreign key\n constraints, thus 7.3's nifty new support for DROP CONSTRAINT won't\n work, nor will dropping a table make its associated sequences go away.\n However, this can be boiled down to saying that it still works like it\n did before.\n\nThere are of course the same old same old issues regarding pg_dump's\nability to choose a good dump order, but these are not worse than before\neither, and would bite you just as badly if you tried to reload your\ndump into 7.2.*.\n\nUsing 7.3's pg_dump would help you with the GRANT issue, but AFAIR it\nwon't do anything for reconstructing serial or foreign-key dependencies.\nAnd it definitely wouldn't help on the ordering issue. So it's probably\nnot worth the trouble if you can't do it trivially, which you can't in\nan RPM-upgrade context. (We do advise it for people who are building\nfrom source, since it's not difficult for them.)\n\nIn short, I'm not sure why you and Oliver are so unhappy. We may not\nhave made the world better than before for upgrade scenarios, but I\ndon't think we've made it worse either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 16:40:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta. "
},
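The first item above is the one that is easy to repair by hand. A sketch of the post-restore GRANTs Tom refers to, with hypothetical object names — 'plpgsql' and 'myfunc(integer)' stand in for whatever languages and functions the dump actually created, and each function needs its own GRANT with its exact argument list:

```sql
-- Restore pre-7.3 default permissions by hand after loading the dump.
GRANT USAGE ON LANGUAGE plpgsql TO PUBLIC;
GRANT EXECUTE ON FUNCTION myfunc(integer) TO PUBLIC;
```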
{
"msg_contents": "> Using 7.3's pg_dump would help you with the GRANT issue, but AFAIR it\n> won't do anything for reconstructing serial or foreign-key dependencies.\n\nThe below perl script can help with both of those.\n\nhttp://www.rbt.ca/postgresql/upgrade/upgrade.tar.gz\n\nExplanation URL:\nhttp://www.rbt.ca/postgresql/upgrade.shtml\n\n\nDoesn't deal with DEFERRED triggers.\n\n-- \n Rod Taylor\n\n",
"msg_date": "17 Sep 2002 16:56:12 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > ... What I am looking \n> > at is whether the user will have to run 7.3's pg_dump in order to migrate \n> > older data.\n> \n> AFAIK this is not *necessary*, though it may be *helpful*. Aside from\n> the OPAQUE issue, which we will fix one way or another, I am aware of\n> these issues for loading a pre-7.3 dump:\n> \n> * A reloaded dump will fail to GRANT EXECUTE TO PUBLIC on functions,\n> likewise fail to GRANT USAGE TO PUBLIC on procedural languages.\n> This may not bother some people, but for those it does bother,\n> it's not that hard to issue the GRANTs manually after loading the dump.\n> \n> * A reloaded dump will not create dependencies between serial columns\n> and sequence objects, nor between triggers and foreign key\n> constraints, thus 7.3's nifty new support for DROP CONSTRAINT won't\n> work, nor will dropping a table make its associated sequences go away.\n> However, this can be boiled down to saying that it still works like it\n> did before.\n\nThese seem like poor reasons for using 7.3 pg_dump on 7.2 databases. \nItem #1 can be easily fixed via an SQL command issued after the load, if\ndesired, and #2 is really not something specific to the RPM issue. \n\nWe may be better writing a script that uses the names of the\ntriggers/sequences to create dependency information automatically. Has\nanyone looked at that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 17:58:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "> Sounds good. I think the earliest we could be ready for beta2 is the\n> end of this week; sometime next week may be more realistic.\n>\n> Given that we'll be forcing an initdb for beta2 anyway, those who use\n> RPMs may be just as happy to have missed beta1.\n\nIf an initdb is planned - did that split->split_part or whatever change make\nit in?\n\nChris\n\n",
"msg_date": "Wed, 18 Sep 2002 09:25:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta. "
},
{
"msg_contents": "> * A reloaded dump will not create dependencies between serial columns\n> and sequence objects, nor between triggers and foreign key\n> constraints, thus 7.3's nifty new support for DROP CONSTRAINT won't\n> work, nor will dropping a table make its associated sequences go away.\n> However, this can be boiled down to saying that it still works like it\n> did before.\n\nRemember that Rod Taylor's written a script to fix at least the foreign key\nissue above. I think it'd be neat if that script were perfected and did\nserials as well and then we could recommend its use...\n\nChris\n\n",
"msg_date": "Wed, 18 Sep 2002 09:27:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta. "
},
{
"msg_contents": "On Tuesday 17 September 2002 04:40 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > ... What I am looking\n> > at is whether the user will have to run 7.3's pg_dump in order to migrate\n> > older data.\n\n> AFAIK this is not *necessary*, though it may be *helpful*. Aside from\n> the OPAQUE issue, which we will fix one way or another, I am aware of\n> these issues for loading a pre-7.3 dump:\n\nHelpful is good. If it proves not too hard I'm going to try that route. And \nthe more I think about the less difficult I think it will be. I've about \ngiven up on the upgrade ever really being easy.\n\n> In short, I'm not sure why you and Oliver are so unhappy. We may not\n> have made the world better than before for upgrade scenarios, but I\n> don't think we've made it worse either.\n\nIt's a long-term pain, Tom. With brief paroxysms worthy of appendicitis.\n\nI've been caught by it -- I lost data due to bad RPM packaging coupled with \nthe dump/restore cycle. That's what motivated me to start doing this in the \nfirst place, three years ago.\n\nI just want people to not get bit in a bad way and decide they don't want to \nuse PostgreSQL after all. And with the new features of 7.3, lots of users \nwho might have begun with 7.2 are going to want to upgrade -- but if it's too \npainful.... Sorry, it's just a sore spot for me, this whole upgrade issue. \nI know Oliver has the same problem, with slightly different presentation.\n\nI'm not meaning to be a pain; just trying to prevent some for someone else.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 17 Sep 2002 22:12:34 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "> I just want people to not get bit in a bad way and decide they\n> don't want to\n> use PostgreSQL after all. And with the new features of 7.3, lots\n> of users\n> who might have begun with 7.2 are going to want to upgrade -- but\n> if it's too\n> painful.... Sorry, it's just a sore spot for me, this whole\n> upgrade issue.\n> I know Oliver has the same problem, with slightly different presentation.\n\nIS there any solution to Postgres's upgrade problems? I mean, ever? With\nthe complex catalog design, etc - how is it every possible for us to do a\nplug-n-play major version upgrade (assuming datafile format doesn't change\nanymore)\n\nHow does pg_upgrade work?\n\nChris\n\n",
"msg_date": "Wed, 18 Sep 2002 10:27:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > I just want people to not get bit in a bad way and decide they\n> > don't want to\n> > use PostgreSQL after all. And with the new features of 7.3, lots\n> > of users\n> > who might have begun with 7.2 are going to want to upgrade -- but\n> > if it's too\n> > painful.... Sorry, it's just a sore spot for me, this whole\n> > upgrade issue.\n> > I know Oliver has the same problem, with slightly different presentation.\n> \n> IS there any solution to Postgres's upgrade problems? I mean, ever? With\n> the complex catalog design, etc - how is it every possible for us to do a\n> plug-n-play major version upgrade (assuming datafile format doesn't change\n> anymore)\n> \n> How does pg_upgrade work?\n\npg_upgrade sort of worked for 7.2 but I got to it too late and I didn't\nproperly expand the pg_clog files. In 7.3, the file format has changed.\nIf we don't change the format for 7.4, I can do it, but I have to add\nschema stuff to it. Shouldn't be too hard.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 22:53:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "> > How does pg_upgrade work?\n> \n> pg_upgrade sort of worked for 7.2 but I got to it too late and I didn't\n> properly expand the pg_clog files. In 7.3, the file format has changed.\n> If we don't change the format for 7.4, I can do it, but I have to add\n> schema stuff to it. Shouldn't be too hard.\n\nI mean - how does it actually _work_?\n\nChris\n\n",
"msg_date": "Wed, 18 Sep 2002 11:04:02 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "On Tuesday 17 September 2002 10:27 pm, Christopher Kings-Lynne wrote:\n> Lamar Owen wrote:\n> > Sorry, it's just a sore spot for me, this whole\n> > upgrade issue.\n\n> IS there any solution to Postgres's upgrade problems? I mean, ever? With\n> the complex catalog design, etc - how is it every possible for us to do a\n> plug-n-play major version upgrade (assuming datafile format doesn't change\n> anymore)\n\nWhile I should know better, I'm going to reply.....:-)\n\nThe system catalog has poor system-user separation. Better separation might \nhelp the issue. Putting all the stuff that belongs to system into the \n'system' catalog and then putting the user's customizations into a 'user' \ncatalog, with a set of SQL scripts to upgrade the user portion if columns or \nother metadata changed in the user portion. This statement is vastly \nsimplified. Then you can blow out the system portion and reinit it without \ndisturbing the user data and metadata. The problem I believe would be \nenforcing a strict enough demarcation to make that possible. Then there's \nthe nontrivial issue of where the point of demarcation lies. But I should \nlet someone better versed in the system catalog structure answer that.\n\n<heresy>\nI'd give up a few extensibility features for solid upgrading. If I didn't \nhave so much invested in PostgreSQL I might take a hard look at MySQL 4, \nsince data migration has heretofore been one of their few real strengths. \nBut I've got three years of RPM maintenance and five years of infrastructure \nbuilt on PostgreSQL, so migrating to something else isn't a real palatable \noption at this point.\n</heresy>\n\n> How does pg_upgrade work?\n\nIf I am not mistaken pg_upgrade attempts to do just exactly what I described \nabove, moving data tables and associated metadata out of the way, initdb, and \nmove the data back, rebuiding the system catalog linkages into the user \nmetadata as it goes. 
And it works in a state where there is mixed metadata. \nAt least that's what I remember without looking at the source code to it -- \nthe code is in contrib/pg_upgrade and is a shell script. For laughs I have \nthe source code in another window now, and it is rather involved, issuing a \nnumber of queries to gather the information to relink the user metadata back \nin.\n\nIt then vacuums so that losing the transaction log file (!!) isn't fatal to \nthe upgrade.\n\nIt then stops postmaster and moves things out of the way, then an initdb is \nperformed. The schema is restored; the transaction statuses are restored, \nand data is moved back in, into the proper places. Moving back into the \nproper places is nontrivial, and the existing code makes no attempt to \nrollback partial upgrades. That failing could be fixed, however.\n\nThen:\n# Now that we have moved the WAL/transaction log files, vacuum again to\n# mark install rows with fixed transaction ids to prevent problems on xid\n# wraparound.\n\nLike I said, it's involved. I'm not sure it works for a 7.2.2-> 7.3 upgrade. \n\nIf the on-disk binary format has changed, tough cookie. It won't help us, \nsince it doesn't make any effort to convert data -- it's just moving it \naround and recreating the metadata linkages necessary.\n\nNow if a binary data converter could be paired with what pg_upgrade is \ncurrently doing, it might fly. But scattered in the code is the discouraging \ncomment:\n# Check for version compatibility.\n# This code will need to be updated/reviewed for each new PostgreSQL release.\n\nKeeping abreast of the changing formats and the other 'gotchas' is just about \ngoing to be a full-time job, since changes are made to the system catalogs, \nsyntax, semantics, and data format with little regard as to how it will \nimpact data migration. 
IOW, migration/upgrading shouldn't be an afterthought \nif it's going to work right.\n\nI wish (in a somewhat wistful, yet futile manner) that each change was \naccompanied by data migration strategies for that change, but I'm not holding \nmy breath, since the core developers have more important things to do. (Not \nbeing sarcastic -- just observing a fact).\n\nOh well. Chris, you got me wound up again... :-( I wish I had the time and \nfunding to go after it, but I have a full-time job already as a broadcast \nengineer, and while we use PostgreSQL in a mission critical role here, I \ncan't justify diverting other monies for this purpose. Money is tight enough \nalready.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 17 Sep 2002 23:09:57 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "\nThis is a better description that I could make. If you look at the\nscript it is very well commented so you should be able to see it works. \nAlso, read the manual page first. \n\nIn summary, doing any kind of data changes is quite involved (smaller\ntuple header for 7.3) and because it has to be redone for every release,\nit is quite a pain. Also, considering commercial databases don't do\nmuch better, I fell pretty OK about it. However, we do make releases\nmore frequently than commercial folks, so the pain is more consistent.\n\nMySQL hasn't changed their base table format in perhaps 10 years, so\nyea, that is a real win for them. Of course, they don't shoot out\nfeatures as fast as we do so that helps. You could pretend you are\nusing MySQL and just not upgrade for 5 years. ;-)\n\n---------------------------------------------------------------------------\n\nLamar Owen wrote:\n> On Tuesday 17 September 2002 10:27 pm, Christopher Kings-Lynne wrote:\n> > Lamar Owen wrote:\n> > > Sorry, it's just a sore spot for me, this whole\n> > > upgrade issue.\n> \n> > IS there any solution to Postgres's upgrade problems? I mean, ever? With\n> > the complex catalog design, etc - how is it every possible for us to do a\n> > plug-n-play major version upgrade (assuming datafile format doesn't change\n> > anymore)\n> \n> While I should know better, I'm going to reply.....:-)\n> \n> The system catalog has poor system-user separation. Better separation might \n> help the issue. Putting all the stuff that belongs to system into the \n> 'system' catalog and then putting the user's customizations into a 'user' \n> catalog, with a set of SQL scripts to upgrade the user portion if columns or \n> other metadata changed in the user portion. This statement is vastly \n> simplified. Then you can blow out the system portion and reinit it without \n> disturbing the user data and metadata. 
The problem I believe would be \n> enforcing a strict enough demarcation to make that possible. Then there's \n> the nontrivial issue of where the point of demarcation lies. But I should \n> let someone better versed in the system catalog structure answer that.\n> \n> <heresy>\n> I'd give up a few extensibility features for solid upgrading. If I didn't \n> have so much invested in PostgreSQL I might take a hard look at MySQL 4, \n> since data migration has heretofore been one of their few real strengths. \n> But I've got three years of RPM maintenance and five years of infrastructure \n> built on PostgreSQL, so migrating to something else isn't a real palatable \n> option at this point.\n> </heresy>\n> \n> > How does pg_upgrade work?\n> \n> If I am not mistaken pg_upgrade attempts to do just exactly what I described \n> above, moving data tables and associated metadata out of the way, initdb, and \n> move the data back, rebuiding the system catalog linkages into the user \n> metadata as it goes. And it works in a state where there is mixed metadata. \n> At least that's what I remember without looking at the source code to it -- \n> the code is in contrib/pg_upgrade and is a shell script. For laughs I have \n> the source code in another window now, and it is rather involved, issuing a \n> number of queries to gather the information to relink the user metadata back \n> in.\n> \n> It then vacuums so that losing the transaction log file (!!) isn't fatal to \n> the upgrade.\n> \n> It then stops postmaster and moves things out of the way, then an initdb is \n> performed. The schema is restored; the transaction statuses are restored, \n> and data is moved back in, into the proper places. Moving back into the \n> proper places is nontrivial, and the existing code makes no attempt to \n> rollback partial upgrades. 
That failing could be fixed, however.\n> \n> Then:\n> # Now that we have moved the WAL/transaction log files, vacuum again to\n> # mark install rows with fixed transaction ids to prevent problems on xid\n> # wraparound.\n> \n> Like I said, it's involved. I'm not sure it works for a 7.2.2-> 7.3 upgrade. \n> \n> If the on-disk binary format has changed, tough cookie. It won't help us, \n> since it doesn't make any effort to convert data -- it's just moving it \n> around and recreating the metadata linkages necessary.\n> \n> Now if a binary data converter could be paired with what pg_upgrade is \n> currently doing, it might fly. But scattered in the code is the discouraging \n> comment:\n> # Check for version compatibility.\n> # This code will need to be updated/reviewed for each new PostgreSQL release.\n> \n> Keeping abreast of the changing formats and the other 'gotchas' is just about \n> going to be a full-time job, since changes are made to the system catalogs, \n> syntax, semantics, and data format with little regard as to how it will \n> impact data migration. IOW, migration/upgrading shouldn't be an afterthought \n> if it's going to work right.\n> \n> I wish (in a somewhat wistful, yet futile manner) that each change was \n> accompanied by data migration strategies for that change, but I'm not holding \n> my breath, since the core developers have more important things to do. (Not \n> being sarcastic -- just observing a fact).\n> \n> Oh well. Chris, you got me wound up again... :-( I wish I had the time and \n> funding to go after it, but I have a full-time job already as a broadcast \n> engineer, and while we use PostgreSQL in a mission critical role here, I \n> can't justify diverting other monies for this purpose. 
Money is tight enough \n> already.\n> -- \n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 23:22:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "> <heresy>\n> I'd give up a few extensibility features for solid upgrading. If\n> I didn't\n> have so much invested in PostgreSQL I might take a hard look at MySQL 4,\n> since data migration has heretofore been one of their few real\n> strengths.\n> But I've got three years of RPM maintenance and five years of\n> infrastructure\n> built on PostgreSQL, so migrating to something else isn't a real\n> palatable\n> option at this point.\n> </heresy>\n\nI do notice that I think MySQL requires you to run a script for some\nupgrades...\n\nChris\n\n",
"msg_date": "Wed, 18 Sep 2002 11:37:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "On Tue, 2002-09-17 at 21:40, Tom Lane wrote:\n> In short, I'm not sure why you and Oliver are so unhappy. We may not\n> have made the world better than before for upgrade scenarios, but I\n> don't think we've made it worse either.\n\nI'm unhappy because I know that I will get bug reports that I will have\nto deal with. They will take time and effort and would not be necessary\nif we had a seamless upgrade path. The more PostgreSQL gets used, the\nmore it will be used by 'clueless' users; they just install binary\npackages and expect them to work. That may currently be an unrealistic\nexpectation, but I would like it to become a goal of the project. It\nhas always been my goal as Debian maintainer, but I don't think I can\nachieve it for this release.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Give, and it shall be given unto you; good measure, \n pressed down, and shaken together, and running over, \n shall men pour into your lap. For by your standard of \n measure it will be measured to in return.\"\n Luke 6:38 \n\n",
"msg_date": "18 Sep 2002 04:44:42 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n>> How does pg_upgrade work?\n> [ pretty good description ]\n\nYou missed a key point, which is that pg_upgrade does not even try to\ncope with version-to-version system catalog changes. It assumes it can\nuse pg_dump to dump and reload the database schema. So there is no\nhope, ever, that it will be more reliable than pg_dump. All pg_upgrade\ntries to do is short-circuit the moving of the bulk data.\n\nThe bald fact of the matter is that we are still a good ways away from\nthe point where we might be willing to freeze the system catalogs. PG\nis evolving and improving by a substantial amount with every release,\nand the implication of that is that there *will* be some upgrade pain.\nIf you don't like that ... well ... you're welcome to keep using PG 6.1\n... but I haven't got a better answer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 23:51:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta. "
},
{
"msg_contents": "On Wed, 2002-09-18 at 04:22, Bruce Momjian wrote:\n> \n> In summary, doing any kind of data changes is quite involved (smaller\n> tuple header for 7.3) and because it has to be redone for every release,\n> it is quite a pain. \n\nIs it feasible to make a utility to rewrite each table, shortening the\nheaders and making any other necessary changes? (Taking for granted\nthat the database has been vacuumed and the postmaster shut down.)\n\nThis could build up over successive releases, with an input section\nappropriate to each older version and an output section for the current\nversion. Then an upgrade from any older version to the current one\ncould be done by pg_upgrade.\n\nIs this even worth considering? \n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Give, and it shall be given unto you; good measure, \n pressed down, and shaken together, and running over, \n shall men pour into your lap. For by your standard of \n measure it will be measured to in return.\"\n Luke 6:38 \n\n",
"msg_date": "18 Sep 2002 04:54:06 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Oliver Elphick wrote:\n> On Tue, 2002-09-17 at 21:40, Tom Lane wrote:\n> > In short, I'm not sure why you and Oliver are so unhappy. We may not\n> > have made the world better than before for upgrade scenarios, but I\n> > don't think we've made it worse either.\n> \n> I'm unhappy because I know that I will get bug reports that I will have\n> to deal with. They will take time and effort and would not be necessary\n> if we had a seamless upgrade path.\n\nThis last line gave me a chuckle. It is like \"software wouldn't be\nnecessary if computers could read people's minds\". :-)\n\nThe issue with modifying the data files is that if we have to modify the\nlarge binary data file we may as well just dump/reload the data. If we\ndon't change the on-disk format for 7.4 I will try again to make\npg_upgrade work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 00:02:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "On Tuesday 17 September 2002 11:22 pm, Bruce Momjian wrote:\n> This is a better description tha[n] I could make. If you look at the\n> script it is very well commented so you should be able to see it works.\n> Also, read the manual page first.\n\nI don't know how, but this time looking at the script, I just grokked it. \nMaybe that's because it finally clicked in my mind what was happening; \nregardless, thanks for the compliment; feel free to use that, edited as \nnecessary, in any documentation you might desire.\n\nBut you are certainly correct about the comments...some of which are more than \na little tongue in cheek...\n# Strip off the trailing directory name and store our data there\n# in the hope we are in the same filesystem so 'mv 'works.\n\n:-)\n\n> However, we do make releases\n> more frequently than commercial folks, so the pain is more consistent.\n\nWell, for me and Oliver it comes in waves -- every major release has its \nparoxysm. Then things cool off a little until next cycle. These one year \ncycles have, in that way, been a good thing..... :-P\n\nYou know, if the featureset of the new releases wasn't so _seductive_ it \nwouldn't be nearly as big of a problem... \n\n> You could pretend you are\n> using MySQL and just not upgrade for 5 years. ;-)\n\nDon't say that too loudly, or my production 6.5.3 database that backends the \nlarger portion of my intranet will hear you....I'm just now moving the whole \nshooting match over to 7.2.2 as part of our delayed website redesign to use \nOpenACS. That dataset started with 6.1.2 over five years ago, and it was the \n6.2.1->6.3.2 fiasco Red Hat created (by giving no warning that 5.1 had 6.3.2 \n(5.0 had 6.2.1)) that got my dander up the first time. I lost a few thousand \nrecords in that mess, which are now moot but then was a bad problem. 
Since \nthere wasn't an official Red Hat RPM for 6.1.2, that installation was from \nsource and didn't get obliterated when I moved from Red Hat 4.2 to 5.0. I \nwas able to run both 6.1.2 and 6.2.1 concurrently, and the migration went \nsmoothly -- but there were less than ten thousand records at that point.\n\nSo I _do_ have a three-year old database sitting there. Rock solid except for \none or two times of weird vacuum/pg_dump interactions, solved by making them \nsequential.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Sep 2002 00:10:09 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "On Tuesday 17 September 2002 11:51 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> >> How does pg_upgrade work?\n> > [ pretty good description ]\n\n> You missed a key point, which is that pg_upgrade does not even try to\n> cope with version-to-version system catalog changes. It assumes it can\n> use pg_dump to dump and reload the database schema. So there is no\n> hope, ever, that it will be more reliable than pg_dump. All pg_upgrade\n> tries to do is short-circuit the moving of the bulk data.\n\nYes, this is a key point and one that shouldn't be overlooked. If the \nmetadata belonging to the user's data didn't have to be pg_dumped, but was \ndecoupled somewhat from the system metadata about types, operators, classes, \nand the like, the schema (great, another overloaded term) wouldn't need \ndumping but would travel with its data.\n\n> The bald fact of the matter is that we are still a good ways away from\n> the point where we might be willing to freeze the system catalogs. \n\nNot talking about a freeze. Talking about separation of system/feature \nmetadata from user metadata that wouldn't change in the upgrade anyway -- \ntable names, fields, user types, views, triggers, etc, that belong to this \ndatabase and not to the installation as a whole. If columns need changed or \nadded to the user data's metadata, have the upgrade script run the \nappropriate ALTER commands and UPDATES necessary. The hard parts, I know, \nare the details behind the broad 'appropriate'.\n\n> PG\n> is evolving and improving by a substantial amount with every release,\n> and the implication of that is that there *will* be some upgrade pain.\n\nWhy is it a given conclusion? It should not be axiomatic that 'there *will* \nbe upgrade pain if we improve our features.' That's fatalistic.\n\nWe have innovative solutions in PostgreSQL that solve some pretty hairy \nproblems. WAL. MVCC. 
The subselect code (made my day when I heard about \nthat one -- but then had to wait seven months before Red Hat saw fit to \nprovide an RPM that I wasn't expecting.....the other reason I began RPM \nbuilding, even though it was two cycles later before I got up the nerve to \ntackle it...). The PL's. Foreign keys. TOAST (now that's a prime example of \na 'sideways' solution to a head-on problem).\n\nThis is just a different challenge: how to keep the loosely dynamic system \ncatalog structure while at the same time allowing the possibility of smooth \ndata migration so people can more easily take advantage of the improved \nsystem catalog structure. And yes I know that such a change is not for 7.3. \nToo late for that, and maybe too late for 7.4 too.\n\nBut unlike Bruce I winced at Oliver's last line -- it hit a little too close \nto home and to many multitudes of bug reports and nastygrams directed my way \nfor something I have tried to kludge around in the past. Yes, nastygrams, in \nthe grand old alt.flame tradition. When you maintain RPM's, you find \nyourself the point man for the entire project in some people's eyes. The bug \nreport about my RPM's trashing a fellow's RPM database was an extreme example \nof that. I get two-three dozen e-mails a week that I redirect to the web \nsite and/or the mailing lists. I'm sure Oliver is nodding his head in \nunderstanding on this one.\n\nI don't think seamless upgrading is a pipe dream. And I think that dismissing \nit out of hand as 'impossible' is a self-fulfilling prophecy.\n\nBut I do think it won't work well if it's just tacked-on.\n\nBut, like Tom, I really don't have more of an answer than that. I do \nunderstand pg_upgrade much better now, though.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Sep 2002 00:36:36 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Tuesday 17 September 2002 11:51 pm, Tom Lane wrote:\n>> The bald fact of the matter is that we are still a good ways away from\n>> the point where we might be willing to freeze the system catalogs. \n\n> Not talking about a freeze. Talking about separation of system/feature \n> metadata from user metadata that wouldn't change in the upgrade anyway -- \n\nBut the system catalogs *store* that metadata. Whether the user's\nmetadata is changing or not in a high-level sense doesn't prove much\nabout what's happening to its low-level representation.\n\n>> PG\n>> is evolving and improving by a substantial amount with every release,\n>> and the implication of that is that there *will* be some upgrade pain.\n\n> Why is it a given conclusion? It should not be axiomatic that 'there *will* \n> be upgrade pain if we improve our features.' That's fatalistic.\n\nI prefer \"realistic\" :-). It is probably true that with an adequate\namount of effort directed towards upgrade issues we could make upgrading\nless painful than it's usually been for PG. (I didn't say \"zero pain\",\nmind you, only \"less pain\".) But where is that effort going to come\nfrom? None of the key developers care to spend their time that way;\nall of us have other issues that we find more interesting/compelling/fun.\nUnless someone of key-developer caliber comes along who *likes* spending\ntime on upgrade issues, it's not going to get better. Sorry to be the\nbearer of bad news, but that's reality as I see it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 00:55:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta. "
},
{
"msg_contents": "On Wednesday 18 September 2002 12:55 am, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Not talking about a freeze. Talking about separation of system/feature\n> > metadata from user metadata that wouldn't change in the upgrade anyway --\n\n> But the system catalogs *store* that metadata.\n\nThey _currently_ store the user's metadata. But that's my point -- does the \nuser metadata that isn't typically substantially different after going \nthrough a dump/reload _have_ to coexist with the system data which is \nintrinsic to the basic backend operation?\n\nYes, I know I'm talking about refactoring/renormalizing the system catalogs. \nAnd I know that's neither interesting nor 'fun'. And a major undertaking.\n\n> from? None of the key developers care to spend their time that way;\n> all of us have other issues that we find more interesting/compelling/fun.\n> Unless someone of key-developer caliber comes along who *likes* spending\n> time on upgrade issues, it's not going to get better. Sorry to be the\n> bearer of bad news, but that's reality as I see it.\n\nQuoting myself from my reply a couple of hours ago to Chris:\n-> While I should know better, I'm going to reply.....:-)\n[snip]\n-> I wish (in a somewhat wistful, yet futile manner) that each change was \n-> accompanied by data migration strategies for that change, but I'm not \n-> holding my breath, since the core developers have more important things\n-> to do. (Not being sarcastic -- just observing a fact).\n\nYou're not telling me something I don't already know in your paragraph, Tom. \nData migration of real users isn't interesting, compelling, or fun. That's \nbeen made abundantly clear the last ten times the subject of upgrading has \ncome up. What's a real-world user to do? Find it interesting, compelling, \nand fun to work around our shortcoming? 
(here comes one of those paroxysms \nthat will keep me awake tonight....)\n\nI for one am not doing _this_ because I find it to be 'fun'. Quite the \nopposite -- you try to help people who end up cussing you out for something \nyou can't control. (And I see all those messages addressed to Tom coming \nthrough the lists, so I'm sure Tom is no stranger to this portion of the \nproject, either) \n\nI'm doing _this_ to try to help people not go through what I went through, as \nwell as to try to help the project in general, for both selfish and selfless \nreasons. If I were able to spend enough time on the issue I am quite \nconfident I could find a solution, in a year or so. But I find it \ncompelling, if nothing else, to put food on my kids' plates, which precludes \nme working much on this particular issue. But I do what I can, if nothing \nelse.\n\nBut it is _necessary_ to migrate data for one reason or another. Lack of \ndistributed backports for security patches, that are official releases, is \none quite compelling reason to go through an upgrade.\n\nChris, this is why I was somewhat reticent to reply before. I've been down \nthis dead-end road before. To distill Tom's comments:\nIt is technically feasible to make a better (not perfect) upgrade path, but \nnobody that can do it wants to.\n\nWhat good is an interesting, compelling, fun, featureful, new version if \nnobody upgrades to it due to migration difficulties? This release could be \nthe harbinger of further difficulties, I fear.\n\nSo, that's why I'm unhappy, to answer a question asked quite a while back in \nthe thread. \n\nBack on topic: I'll work towards using the 7.3 pg_dump unless the 7.2 dump can \nbe easily restored. Given the desirability for opaque to go away soon, if \nthe 7.3 pg_dump Does The Right Thing and creates an opaque-free dump, that in \nitself is enough reason to go that route, as it helps the user create an \nunambiguous data dump. 
If it helps the user it is typically a Good Thing, \nand I am willing to put the effort into that. And it may prove to not be \nthat bad -- I'll know in a few days, hopefully.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Sep 2002 01:46:54 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "On Wed, 2002-09-18 at 05:02, Bruce Momjian wrote:\n> Oliver Elphick wrote:\n> > I'm unhappy because I know that I will get bug reports that I will have\n> > to deal with. They will take time and effort and would not be necessary\n> > if we had a seamless upgrade path.\n> \n> This last line gave me a chuckle. It is like \"software wouldn't be\n> necessary if computers could read people's minds\". :-)\n\nNot really! We know what the formats are before and after. \n\nWe want PostgreSQL to be the best database. Why on earth can we not\nhave the same ambition for the upgrade process?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Give, and it shall be given unto you; good measure, \n pressed down, and shaken together, and running over, \n shall men pour into your lap. For by your standard of \n measure it will be measured to in return.\"\n Luke 6:38 \n\n",
"msg_date": "18 Sep 2002 08:04:00 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "> Remember that Rod Taylor's written a script to fix at least the foreign key\n> issue above. I think it'd be neat if that script were perfected and did\n> serials as well and then we could recommend its use...\n\nIt does do serials (adds pg_depend entry -- which is just enough), as\nwell as changes unique indexes into unique constraints.\n\nAs I had a few items I didn't want to upgrade, it asks the user if they\nwant to do each one (-Y to fix 'em all).\n\n-- \n Rod Taylor\n\n",
"msg_date": "18 Sep 2002 08:24:07 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Wednesday 18 September 2002 12:55 am, Tom Lane wrote:\n>> But the system catalogs *store* that metadata.\n\n> They _currently_ store the user's metadata. But that's my point -- does the \n> user metadata that isn't typically substantially different after going \n> through a dump/reload _have_ to coexist with the system data which is \n> intrinsic to the basic backend operation?\n\nI think we're talking at cross-purposes. When I said we can't freeze\nthe system catalogs yet, I meant that we cannot freeze the format/schema\nin which metadata is stored. That affects both system and user entries.\nYou seem to be envisioning moving user metadata into a separate set of\ntables from the predefined entries --- but that will help not one whit\nas far as easing upgrades goes.\n\n> Given the desirability for opaque to go away soon, if \n> the 7.3 pg_dump Does The Right Thing and creates an opaque-free dump,\n\nThe present proposal for that has the 7.3 backend patching things up\nduring reload; it won't matter whether you use 7.2 or 7.3 pg_dump to\ndump from a 7.2 database.\n\n> And it may prove to not be \n> that bad -- I'll know in a few days, hopefully.\n\nIf you find that it's not too painful then I do agree with doing it.\nThere will doubtless be future cycles where it's more valuable to be\nable to use the up-to-date pg_dump than it is in this one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 09:51:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta. "
},
{
"msg_contents": "\nI am working on a README and will add this to /contrib. Thanks.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> > Using 7.3's pg_dump would help you with the GRANT issue, but AFAIR it\n> > won't do anything for reconstructing serial or foreign-key dependencies.\n> \n> The below perl script can help with both of those.\n> \n> http://www.rbt.ca/postgresql/upgrade/upgrade.tar.gz\n> \n> Explanation URL:\n> http://www.rbt.ca/postgresql/upgrade.shtml\n> \n> \n> Doesn't deal with DEFERRED triggers.\n> \n> -- \n> Rod Taylor\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 13:29:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > Sounds good. I think the earliest we could be ready for beta2 is the\n> > end of this week; sometime next week may be more realistic.\n> >\n> > Given that we'll be forcing an initdb for beta2 anyway, those who use\n> > RPMs may be just as happy to have missed beta1.\n> \n> If an initdb is planned - did that split->split_part or whatever change make\n> it in?\n\nYes, it did, and in fact if you didn't initdb after the patch was\napplied, you would see regression failures.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 15:17:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "Oliver Elphick writes:\n\n> We want PostgreSQL to be the best database. Why on earth can we not\n> have the same ambition for the upgrade process?\n\nWe do have that ambition. We just don't have enough clues and time to\nfollow up on it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 18 Sep 2002 22:09:51 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "\nOK, I have added this script to /contrib/adddepend with an appropriate\nREADME. It seems to work quite well. It does require DBD::Pg. I will\nadd a mention of the script in the release notes.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> > Using 7.3's pg_dump would help you with the GRANT issue, but AFAIR it\n> > won't do anything for reconstructing serial or foreign-key dependencies.\n> \n> The below perl script can help with both of those.\n> \n> http://www.rbt.ca/postgresql/upgrade/upgrade.tar.gz\n> \n> Explanation URL:\n> http://www.rbt.ca/postgresql/upgrade.shtml\n> \n> \n> Doesn't deal with DEFERRED triggers.\n> \n> -- \n> Rod Taylor\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 16:39:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for 7.3 beta."
},
{
"msg_contents": "On Tue, 17 Sep 2002, Tom Lane wrote:\n\n> > I am waiting the result of the pg_dump from 7.2.x to 7.3 restore discussion.\n>\n> Right. We clearly have to support loading of 7.2 dumps; the only issue\n> in my mind is exactly how we kluge that up ;-). I just talked to Bruce\n> about this a little bit, and we came to the conclusion that there are\n> two plausible-looking paths:\n>\n> 1. Relax CREATE LANGUAGE to accept either LANGUAGE_HANDLER or OPAQUE as\n> the datatype of the function (ie, make it work more like CREATE TRIGGER\n> does).\n>\n> 2. Hack CREATE LANGUAGE so that if it's pointed at an OPAQUE-returning\n> function, it actually updates the recorded return type of the function\n> in pg_proc to say LANGUAGE_HANDLER.\n\nStupid question, but why not just create an upgrade script that does any\nrequired translations external to the database?\n\n\n",
"msg_date": "Wed, 18 Sep 2002 22:21:13 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Restore from pre-v7.3 -> v7.3 (Was: Re: RPMS for 7.3 beta.)"
}
] |
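The OPAQUE kluge debated in the thread above is easiest to see against what a 7.2-era dump actually contains. A sketch (using the usual plpgsql handler name as the example; exact quoting in real dumps may differ):

```sql
-- What a 7.2-era pg_dump emits for a procedural language:
-- the call handler is declared as returning OPAQUE, because
-- the language_handler pseudotype did not exist yet.
CREATE FUNCTION plpgsql_call_handler() RETURNS opaque
    AS '$libdir/plpgsql' LANGUAGE 'C';

-- For this to reload into 7.3, CREATE LANGUAGE must either
-- (option 1) simply accept an OPAQUE-returning handler, or
-- (option 2) accept it and rewrite the pg_proc entry so the
-- function is recorded as returning language_handler.
CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
    HANDLER plpgsql_call_handler;
```

Either option keeps old dumps loadable without an external translation pass, which is Marc's alternative suggestion at the end of the thread.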
[
{
"msg_contents": "I started by saying\n> * Within a category, \"up\" (lossless) conversions are implicit, \"down\"\n> (potentially lossy) conversions should be assignment-only.\nbut as always the devil is in the details.\n\nAfter further thought, and the thread with Andreas about where we might go\nwith this in 7.4, I have developed a two-stage plan for dealing with\nnumeric casts. We can make some progress in 7.3 but there is more work\nthat will have to be postponed. Here's my current thoughts (plan first,\nthen discussion):\n\nDo for 7.3:\n\n* Set up pg_cast so that up-coercions in the series\nint2->int4->int8->numeric->float4->float8 are implicit, while\ndown-coercions (the reverse direction of each of these fifteen casts)\nare marked assignment-only.\n\n* Modify make_const so that numeric literals are typed as the smallest\ntype that will hold them in the series int4, int8, numeric (as opposed\nto the former behavior, which was int4, float8, numeric).\n\n* Make only float8, not numeric, be a preferred type for category NUMERIC.\n\nDo for 7.4:\n\n* Change make_const so that numeric literals are typed as the smallest\ntype that will hold them in the series int2, int4, int8, numeric (ie,\nadd int2 to the possible set of initial datatypes for constants).\n\n* Remove most cross-datatype operators (int2+int4, etc), expecting such\noperations to be handled by an implicit cast and a single-datatype\noperator instead. This is necessary for comparison operators, because\nwe want operations like \"int4var = 42\" to be coerced to int4-only\noperations so that they are indexable. 
It's optional for operators that\nare never associated with indexes (like +), but I'm inclined to reduce\nthe code bulk and size of pg_proc (and pg_operator) by getting rid of as\nmuch as we can.\n\n* Fix planner to cope with merge and hash joins wherein the arguments\naren't plain Var nodes (must cope with Var + type promotion, and might\nas well just take any expression).\n\n* Develop similar promotion hierarchies for the other type categories.\nSee if we can't retire the notion of \"preferred type\" entirely.\n\nDiscussion:\n\nThe main point of the 7.3 changes is to create a consistent promotion scheme\nfor the numeric hierarchy. By twiddling make_const, we can improve the\nbehavior for large integers and float-format constants: these will be\ntyped as int8 or numeric and then if necessary up-converted to numeric,\nfloat4, or float8. It happens that there are no cross-datatype operators\nat present between int8 and numeric/float4/float8 nor between numeric and\nfloat4/float8, so we will get the desired up-conversion and not selection\nof a cross-datatype operator when such a constant is used with a numeric\nor float variable. In the existing code, an integer too large for int4\n(but not too large for int8) would be initially typed as float8, thus\nforcing us to allow float8->int8 as an implicit coercion to ensure\nreasonable behavior for int8 constants. So we must introduce int8 as\nan allowed initial type for constants if we want to remove float8->int8\nas an implicit coercion. But we can get rid of float8 as an initial type,\nwhich simplifies matters.\n\nWith these changes we can expect reasonable behavior for cases like\n\"where numericvar = float-style-constant\". The behavior will not get\nbetter for cases involving int2 or int8 variables compared to int-size\nconstants, but it won't get worse either. 
These changes will also bring\nus into line with the SQL spec concerning mixed float/numeric operations\n(the result should be approximate, ie float).\n\nWith the additional changes for 7.4 we can expect to finally fix the\nbehavior for int2 and int8 variables as well: cases like \"where int2var =\n42\" will be indexable without having to explicitly cast the constant.\n\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Sep 2002 14:57:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Numeric casting rules, take two"
},
{
"msg_contents": "\nTom, do you want any TODO items from this?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I started by saying\n> > * Within a category, \"up\" (lossless) conversions are implicit, \"down\"\n> > (potentially lossy) conversions should be assignment-only.\n> but as always the devil is in the details.\n> \n> After further thought, and the thread with Andreas about where we might go\n> with this in 7.4, I have developed a two-stage plan for dealing with\n> numeric casts. We can make some progress in 7.3 but there is more work\n> that will have to be postponed. Here's my current thoughts (plan first,\n> then discussion):\n> \n> Do for 7.3:\n> \n> * Set up pg_cast so that up-coercions in the series\n> int2->int4->int8->numeric->float4->float8 are implicit, while\n> down-coercions (the reverse direction of each of these fifteen casts)\n> are marked assignment-only.\n> \n> * Modify make_const so that numeric literals are typed as the smallest\n> type that will hold them in the series int4, int8, numeric (as opposed\n> to the former behavior, which was int4, float8, numeric).\n> \n> * Make only float8, not numeric, be a preferred type for category NUMERIC.\n> \n> Do for 7.4:\n> \n> * Change make_const so that numeric literals are typed as the smallest\n> type that will hold them in the series int2, int4, int8, numeric (ie,\n> add int2 to the possible set of initial datatypes for constants).\n> \n> * Remove most cross-datatype operators (int2+int4, etc), expecting such\n> operations to be handled by an implicit cast and a single-datatype\n> operator instead. This is necessary for comparison operators, because\n> we want operations like \"int4var = 42\" to be coerced to int4-only\n> operations so that they are indexable. 
It's optional for operators that\n> are never associated with indexes (like +), but I'm inclined to reduce\n> the code bulk and size of pg_proc (and pg_operator) by getting rid of as\n> much as we can.\n> \n> * Fix planner to cope with merge and hash joins wherein the arguments\n> aren't plain Var nodes (must cope with Var + type promotion, and might\n> as well just take any expression).\n> \n> * Develop similar promotion hierarchies for the other type categories.\n> See if we can't retire the notion of \"preferred type\" entirely.\n> \n> Discussion:\n> \n> The main point of the 7.3 changes is to create a consistent promotion scheme\n> for the numeric hierarchy. By twiddling make_const, we can improve the\n> behavior for large integers and float-format constants: these will be\n> typed as int8 or numeric and then if necessary up-converted to numeric,\n> float4, or float8. It happens that there are no cross-datatype operators\n> at present between int8 and numeric/float4/float8 nor between numeric and\n> float4/float8, so we will get the desired up-conversion and not selection\n> of a cross-datatype operator when such a constant is used with a numeric\n> or float variable. In the existing code, an integer too large for int4\n> (but not too large for int8) would be initially typed as float8, thus\n> forcing us to allow float8->int8 as an implicit coercion to ensure\n> reasonable behavior for int8 constants. So we must introduce int8 as\n> an allowed initial type for constants if we want to remove float8->int8\n> as an implicit coercion. But we can get rid of float8 as an initial type,\n> which simplifies matters.\n> \n> With these changes we can expect reasonable behavior for cases like\n> \"where numericvar = float-style-constant\". The behavior will not get\n> better for cases involving int2 or int8 variables compared to int-size\n> constants, but it won't get worse either. 
These changes will also bring\n> us into line with the SQL spec concerning mixed float/numeric operations\n> (the result should be approximate, ie float).\n> \n> With the additional changes for 7.4 we can expect to finally fix the\n> behavior for int2 and int8 variables as well: cases like \"where int2var =\n> 42\" will be indexable without having to explicitly cast the constant.\n> \n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 01:01:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Numeric casting rules, take two"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, do you want any TODO items from this?\n\nI think we have plenty already on this general subject, no? But you\ncould stick this whole thread into TODO.detail/typeconv if you like.\n(It's interesting to compare these ideas to where we were 2 years\nago...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 01:27:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Numeric casting rules, take two "
}
] |
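The smallest-fitting-type rule for literals proposed in the thread above can be illustrated from SQL. (`pg_typeof` postdates this discussion, so this shows the eventual behaviour on a modern server, not anything available in 7.3.)

```sql
-- A literal is initially typed as the smallest member of the
-- int4 / int8 / numeric series that can hold it:
SELECT pg_typeof(42);                    -- integer  (fits in int4)
SELECT pg_typeof(3000000000);            -- bigint   (too big for int4)
SELECT pg_typeof(99999999999999999999);  -- numeric  (too big for int8)
SELECT pg_typeof(3.14);                  -- numeric  (float-style literal)

-- The payoff described in the plan: with only up-coercions implicit,
-- a comparison like the following can use an index on int8col
-- without an explicit cast on the constant:
-- SELECT * FROM t WHERE int8col = 42;
```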
[
{
"msg_contents": "Marc needs old PostgreSQL source code tarballs for our ftp site. We\nhave >= 6.1, and I have postgres95 1.01 and postgres 4.2. Does anyone\nhave 6.0.X and 1.0X?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 16:18:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Old pgsql versions"
}
] |
[
{
"msg_contents": "Hey, me and a few other folks were having a discussion off list, and the \nsubject of inserts and missing columns came up. you may remember the point \nin the \"I'm done\" post by Bruce. It said:\n\n> o -Disallow missing columns in INSERT ... VALUES, per ANSI\n> > What is this, and why is it marked done?\n\nWe used to allow INSERT INTO tab VALUES (...) to skip the trailing\ncolumns and automatically fill in null's. That is fixed, per ANSI.\n\n\nAnyway, I just tested it on 7.3b1 and I can still do an insert with the \ncolumns missing and it fills in defaults or nulls, with defaults being the \npreference.\n\nSo, are we gonna make postgresql throw an error when someone tries to \nsubmit an insert with too few columns to match up to the implicit column \nlist, or not?\n\nThis just seems like a change designed to piss off users to me, but I can \nsee where it does encourage better query crafting.\n\n",
"msg_date": "Tue, 17 Sep 2002 14:44:54 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": true,
"msg_subject": "a quick question"
},
{
"msg_contents": "On Tue, 2002-09-17 at 16:44, scott.marlowe wrote:\n> Hey, me and a few other folks were having a discussion off list, and the \n> subject of inserts and missing columns came up. you may remember the point \n> in the \"I'm done\" post by Bruce. It said:\n> \n> > o -Disallow missing columns in INSERT ... VALUES, per ANSI\n> > > What is this, and why is it marked done?\n> \n> We used to allow INSERT INTO tab VALUES (...) to skip the trailing\n> columns and automatically fill in null's. That is fixed, per ANSI.\n> \n> So, are we gonna make postgresql throw an error when someone tries to \n> submit an insert with too few columns to match up to the implicit column \n> list, or not?\n\nThere was a vote to keep previous behaviour when the column list wasn't\nsupplied, so it's not to ANSI spec, it's to our improved version ;)\n\nINSERT INTO (...) VALUES (...) will not allow you to skip value entries,\nbut the keyword DEFAULT is available now, so it shouldn't be much of an\nissue.\n\n \n-- \n Rod Taylor\n\n",
"msg_date": "17 Sep 2002 16:55:49 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: a quick question"
}
] |
[
{
"msg_contents": "Hi,all expert of the postgresql.\r\nI want to learn the kernel of postgresql. When I debug it with gdb, I come to a problem that I can't solve. In the BackendStartup() it forks a new child process. I can't trace into the new child process with attach. It say that Operation is not permitted.\r\nI really need your help.\r\nThank you very much.\r\nJinqiang Han\r\n\n\n\n\n\nHi,all expert of the postgresql.\nI want to learn the kernel of postgresql. When I debug it with gdb, I \r\ncome to a problem that I can't solve. In the BackendStartup() it forks a new \r\nchild process. I can't trace into the new child process with attach. It say that \r\nOperation is not permitted.\nI really need your help.\nThank you very much.\nJinqiang Han",
"msg_date": "Wed, 18 Sep 2002 10:9:36 +0800",
"msg_from": "=?GB2312?Q?=BA=AB=BD=FC=C7=BF?= <jqhan@db.pku.edu.cn>",
"msg_from_op": true,
"msg_subject": "inquiry"
}
] |
[
{
"msg_contents": "\nThere has been a lot of activity on open items in the past week. Here\nis the updated list.\n\nBasically, upgrading and casting have blown up into a variety of items.\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSchema handling - ready? interfaces? client apps?\nDrop column handling - ready for all clients, apps?\nFix BeOS, QNX4 ports\nFix AIX large file compile failure of 2002-09-11 (Andreas)\nGet bison upgrade on postgresql.org for ecpg only (Marc)\nAllow ecpg to properly handle PREPARE/EXECUTE (Michael)\nFix vacuum btree bug (Tom)\nFix client apps for autocommit = off\nFix clusterdb to be schema-aware\nChange log_min_error_statement to be off by default (Gavin)\nFix return tuple counts/oid/tag for rules\nLoading 7.2 pg_dumps\n\tfix up function return types on lang/type/trigger creation or\n\t loosen opaque restrictions\n\tfunctions no longer public executable\n\tlanguages no longer public usable\nAdd schema dump option to pg_dump\nAdd casts: (Tom)\n\tassignment-level cast specification\n\tinet -> text\n\tmacaddr -> text\n\tint4 -> varchar?\n\tint8 -> varchar?\n\tadd param for length check for char()/varchar()\nCreate script to make proper dependencies for SERIAL and foreign keys (Rod)\nFix $libdir in loaded functions?\n\t\nOn Going\n--------\nPoint-in-time recovery\nWin32 port\nSecurity audit\n\nDocumentation Changes\n---------------------\nDocument need to add permissions to loaded functions and languages\nMove documation to gborg for moved projects\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 01:10:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Open 7.3 items"
},
{
"msg_contents": "> Change log_min_error_statement to be off by default (Gavin)\n\nI will be happy to provide this simple fix once I can get some indication\nof the preferred implication. The discussion left off with Bruce prefering\nthat the GUC code for the *_min_* variables be variable specific where as\nTom saw no need to back out the generic assignment function I provided,\ndespite the fact that it behaves `illogically' (client_min_messages =\nFATAL?).\n\nGavin\n\n\n",
"msg_date": "Wed, 18 Sep 2002 15:59:48 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> There has been a lot of activity on open items in the past week. Here\n> is the updated list.\n> \n> Basically, upgrading and casting have blown up into a variety of items.\n\nWhat's the timeframe for beta2? FreeBSD's going into a ports freeze\non Friday and I'd be slick to see it ship with 7.3beta2. 'nother few\nweeks before beta2 or is it right around the corner?\n\nFor those interested in PostgreSQL + FreeBSD, I have a patch pending\napproval that will let developers toggle between a devel port and the\nstable release for all ports that depend on PostgreSQL.\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Wed, 18 Sep 2002 00:50:24 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> There has been a lot of activity on open items in the past week. Here\n> is the updated list.\n\nSIMILAR TO and the associated SUBSTRING functionality need to be fixed.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 18 Sep 2002 22:11:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > There has been a lot of activity on open items in the past week. Here\n> > is the updated list.\n> \n> SIMILAR TO and the associated SUBSTRING functionality need to be fixed.\n> \n\nAdded to open items:\n\n\tFix SIMILAR TO to be Posix compiant or remove it \n\nI still had your email in my mailbox so I wouldn't have forgotten.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 16:45:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Gavin Sherry wrote:\n> > Change log_min_error_statement to be off by default (Gavin)\n> \n> I will be happy to provide this simple fix once I can get some indication\n> of the preferred implication. The discussion left off with Bruce prefering\n> that the GUC code for the *_min_* variables be variable specific where as\n> Tom saw no need to back out the generic assignment function I provided,\n> despite the fact that it behaves `illogically' (client_min_messages =\n> FATAL?).\n\nThanks, Gavin. Tom convinced me that it was OK to have illogical\nvalues. Also, I think we need to support PANIC for server_min_messages\nanyway to use as a default value for 'off'. Does that make sense?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 16:46:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Sean Chittenden wrote:\n> > There has been a lot of activity on open items in the past week. Here\n> > is the updated list.\n> > \n> > Basically, upgrading and casting have blown up into a variety of items.\n> \n> What's the timeframe for beta2? FreeBSD's going into a ports freeze\n> on Friday and I'd be slick to see it ship with 7.3beta2. 'nother few\n> weeks before beta2 or is it right around the corner?\n> \n> For those interested in PostgreSQL + FreeBSD, I have a patch pending\n> approval that will let developers toggle between a devel port and the\n> stable release for all ports that depend on PostgreSQL.\n\nI have heard end of this week or next week for beta2. Also, plan was to\nsplit the CVS tree at that time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 16:47:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Bruce Momjian wrote:\n\n> On Going\n> --------\n> Point-in-time recovery\n> Win32 port\n\nthese have nothing to do with v7.3, so shouldn't even be listed here ...\n\n\n",
"msg_date": "Wed, 18 Sep 2002 22:38:24 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> \n> > On Going\n> > --------\n> > Point-in-time recovery\n> > Win32 port\n> \n> these have nothing to do with v7.3, so shouldn't even be listed here ...\n\nOK, removed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 21:40:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Sean Chittenden wrote:\n\n> > There has been a lot of activity on open items in the past week. Here\n> > is the updated list.\n> >\n> > Basically, upgrading and casting have blown up into a variety of items.\n>\n> What's the timeframe for beta2? FreeBSD's going into a ports freeze\n> on Friday and I'd be slick to see it ship with 7.3beta2. 'nother few\n> weeks before beta2 or is it right around the corner?\n\nI was actually going to post this tonight anyway ... its been 2 weeks, and\nsince nobody should be committing anything but fixes (right guys?), I'm\ngoing to do up a beta2 on Friday due to the number changes that have been\ncommitted over the past 2 weeks ...\n\nBruce, can you make sure that any changes needed prior to my packaging are\ndone before noon ADT on Friday? I have no doubt that we have some\noutstanding issues to work through, but this will give a new checkpoint\nfor those testing ...\n\n\n",
"msg_date": "Wed, 18 Sep 2002 22:43:58 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Sean Chittenden wrote:\n> \n> > > There has been a lot of activity on open items in the past week. Here\n> > > is the updated list.\n> > >\n> > > Basically, upgrading and casting have blown up into a variety of items.\n> >\n> > What's the timeframe for beta2? FreeBSD's going into a ports freeze\n> > on Friday and I'd be slick to see it ship with 7.3beta2. 'nother few\n> > weeks before beta2 or is it right around the corner?\n> \n> I was actually going to post this tonight anyway ... its been 2 weeks, and\n> since nobody should be committing anything but fixes (right guys?), I'm\n> going to do up a beta2 on Friday due to the number changes that have been\n> committed over the past 2 weeks ...\n> \n> Bruce, can you make sure that any changes needed prior to my packaging are\n> done before noon ADT on Friday? I have no doubt that we have some\n> outstanding issues to work through, but this will give a new checkpoint\n> for those testing ...\n\nWe are going to require an initdb for beta2 and I think we need to get\n_everything_ required in there before going to beta2. See the open\nitems list. I think we will need until the middle of next week for\nbeta2. In fact, I have the inheritance patch that will require an\ninitdb and that isn't even applied yet; Friday is too early.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 21:55:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Bruce Momjian wrote:\n\n> We are going to require an initdb for beta2 and I think we need to get\n> _everything_ required in there before going to beta2. See the open\n> items list. I think we will need until the middle of next week for\n> beta2. In fact, I have the inheritance patch that will require an\n> initdb and that isn't even applied yet; Friday is too early.\n\nWe are in beta, not release ... the purpose of going to beta2 is to\nprovide a new checkpoint to work bug reports off of, so having to deal\nwith an initdb should not be considered a problem by anyone, since only a\nfool would run beta in production, no? (and ya, I am such a fool at times,\nbut i do accept the fact that I am such *grin*)\n\n\n",
"msg_date": "Wed, 18 Sep 2002 22:59:48 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> \n> > We are going to require an initdb for beta2 and I think we need to get\n> > _everything_ required in there before going to beta2. See the open\n> > items list. I think we will need until the middle of next week for\n> > beta2. In fact, I have the inheritance patch that will require an\n> > initdb and that isn't even applied yet; Friday is too early.\n> \n> We are in beta, not release ... the purpose of going to beta2 is to\n> provide a new checkpoint to work bug reports off of, so having to deal\n> with an initdb should not be considered a problem by anyone, since only a\n> fool would run beta in production, no? (and ya, I am such a fool at times,\n> but i do accept the fact that I am such *grin*)\n\nWe should get _all_ the known initdb-related issues into the code before\nwe go beta2 or beta3 is going to require another initdb.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 22:03:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> >\n> > > We are going to require an initdb for beta2 and I think we need to get\n> > > _everything_ required in there before going to beta2. See the open\n> > > items list. I think we will need until the middle of next week for\n> > > beta2. In fact, I have the inheritance patch that will require an\n> > > initdb and that isn't even applied yet; Friday is too early.\n> >\n> > We are in beta, not release ... the purpose of going to beta2 is to\n> > provide a new checkpoint to work bug reports off of, so having to deal\n> > with an initdb should not be considered a problem by anyone, since only a\n> > fool would run beta in production, no? (and ya, I am such a fool at times,\n> > but i do accept the fact that I am such *grin*)\n>\n> We should get _all_ the known initdb-related issues into the code before\n> we go beta2 or beta3 is going to require another initdb.\n\nRight, and? How many times in the past has it been the last beta in the\ncycle that forced the initdb? Are you able to guarantee that there\n*won't* be another initdb required if we wait until mid-next week?\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 23:27:31 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > > On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> > >\n> > > > We are going to require an initdb for beta2 and I think we need to get\n> > > > _everything_ required in there before going to beta2. See the open\n> > > > items list. I think we will need until the middle of next week for\n> > > > beta2. In fact, I have the inheritance patch that will require an\n> > > > initdb and that isn't even applied yet; Friday is too early.\n> > >\n> > > We are in beta, not release ... the purpose of going to beta2 is to\n> > > provide a new checkpoint to work bug reports off of, so having to deal\n> > > with an initdb should not be considered a problem by anyone, since only a\n> > > fool would run beta in production, no? (and ya, I am such a fool at times,\n> > > but i do accept the fact that I am such *grin*)\n> >\n> > We should get _all_ the known initdb-related issues into the code before\n> > we go beta2 or beta3 is going to require another initdb.\n> \n> Right, and? How many times in the past has it been the last beta in the\n> cycle that forced the initdb? Are you able to guarantee that there\n> *won't* be another initdb required if we wait until mid-next week?\n\nI agree, but if we _know_ we have more initdb issues to resolve (and\npg_dump load issues) doesn't it make sense to at least do all of them\nthat we have outstanding? If not, we are guaranteeing an initdb. I\nwould rather _try_ to avoid one for beta3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 22:34:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> > We should get _all_ the known initdb-related issues into the code\n> > before we go beta2 or beta3 is going to require another initdb.\n> \n> Right, and? How many times in the past has it been the last beta in\n> the cycle that forced the initdb? Are you able to guarantee that\n> there won't* be another initdb required if we wait until mid-next\n> week?\n\nI completely agree with Bruce here. Requiring an initdb for every beta\nrelease significantly reduces the number of people who will be willing\nto try it out -- so initdb's between betas are not disasterous, but\nshould be avoided if possible.\n\nSince waiting till next week significantly reduces the chance of an\ninitdb for beta3 and has no serious disadvantage that I can see, it\nseems the right decision to me.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n",
"msg_date": "18 Sep 2002 23:29:38 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "> I completely agree with Bruce here. Requiring an initdb for every beta\n> release significantly reduces the number of people who will be willing\n> to try it out -- so initdb's between betas are not disasterous, but\n> should be avoided if possible.\n\nBut it does mean that 7.3 to 7.3 pg_dump gets a good testing...\n\nYou could almost make it mandatory to have an initdb during beta :)\n\nChris\n\n",
"msg_date": "Thu, 19 Sep 2002 11:32:41 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> ... I'm going to do up a beta2 on Friday due to the number changes\n> that have been committed over the past 2 weeks ...\n\nI want to review and apply Alvaro's attisinherited fix before we go\nbeta2. I think I can get that done tomorrow. I can't recall any\nother initdb-forcing fixes in the pipeline; Bruce, do you?\n\nWhich is not to say we don't have a ton of known bugs to fix...\nI'd lean towards a Monday-ish beta2 myself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 23:55:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > ... I'm going to do up a beta2 on Friday due to the number changes\n> > that have been committed over the past 2 weeks ...\n> \n> I want to review and apply Alvaro's attisinherited fix before we go\n> beta2. I think I can get that done tomorrow. I can't recall any\n> other initdb-forcing fixes in the pipeline; Bruce, do you?\n\nLooking at the open item list, I see:\n\n fix up function return types on lang/type/trigger creation or\n loosen opaque restrictions\n\nSeems that should be fixed before beta2 because it does effect people\nloading data.\n\nAre we done with all of these?\n\t\n\tAdd casts: (Tom)\n\t assignment-level cast specification\n\t inet -> text\n\t macaddr -> text\n\t int4 -> varchar?\n\t int8 -> varchar?\n\t add param for length check for char()/varchar()\n\n> Which is not to say we don't have a ton of known bugs to fix...\n> I'd lean towards a Monday-ish beta2 myself.\n\nYes, I would like to get a few days of quiet before packaging beta2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 00:15:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > ... I'm going to do up a beta2 on Friday due to the number changes\n> > that have been committed over the past 2 weeks ...\n>\n> I want to review and apply Alvaro's attisinherited fix before we go\n> beta2. I think I can get that done tomorrow. I can't recall any\n> other initdb-forcing fixes in the pipeline; Bruce, do you?\n>\n> Which is not to say we don't have a ton of known bugs to fix...\n> I'd lean towards a Monday-ish beta2 myself.\n\n'k, then let's go with a Sunday night packaging, Monday announce, so that\nwe have beta2 testing starting right at the beginning of the week ...\n\n\n",
"msg_date": "Thu, 19 Sep 2002 01:34:38 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Looking at the open item list, I see:\n> fix up function return types on lang/type/trigger creation or\n> loosen opaque restrictions\n\n> Seems that should be fixed before beta2 because it does effect people\n> loading data.\n\nYeah, we should do something with that. Are people okay with the idea\nof CREATE LANGUAGE, etc, retroactively changing prorettype from OPAQUE\nto the correct thing?\n\n> Are we done with all of these?\n> \tAdd casts: (Tom)\n> \t assignment-level cast specification\n> \t inet -> text\n> \t macaddr -> text\n> \t int4 -> varchar?\n> \t int8 -> varchar?\n> \t add param for length check for char()/varchar()\n\nAll but the inet/macaddr->text change; I backed that out after finding\nthat it induced a bunch of regression-test failures. The tests assume\nthat \"inet = integer\" will provoke a failure. Guess what: if both inet\nand integer have implicit casts to text, the system takes it.\n\nOn reflection I still feel that we should be getting rid of implicit\ncasts to text rather than adding more. This is still an open bug:\nhttp://archives.postgresql.org/pgsql-bugs/2001-10/msg00108.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 09:22:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "Tom Lane writes:\n\n> Yeah, we should do something with that. Are people okay with the idea\n> of CREATE LANGUAGE, etc, retroactively changing prorettype from OPAQUE\n> to the correct thing?\n\nSeems like an appropriate time to throw a notice, though.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 19 Sep 2002 23:36:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Yeah, we should do something with that. Are people okay with the idea\n>> of CREATE LANGUAGE, etc, retroactively changing prorettype from OPAQUE\n>> to the correct thing?\n\n> Seems like an appropriate time to throw a notice, though.\n\nOf course.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 17:40:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> Yeah, we should do something with that. Are people okay with the idea\n> >> of CREATE LANGUAGE, etc, retroactively changing prorettype from OPAQUE\n> >> to the correct thing?\n> \n> > Seems like an appropriate time to throw a notice, though.\n> \n> Of course.\n\nNow that we have additional elog levels, is it a NOTICE or a WARNING. I\nam leaning to the latter.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 18:00:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n> Yeah, we should do something with that. Are people okay with the idea\n> of CREATE LANGUAGE, etc, retroactively changing prorettype from OPAQUE\n> to the correct thing?\n>> \n> Seems like an appropriate time to throw a notice, though.\n>> \n>> Of course.\n\n> Now that we have additional elog levels, is it a NOTICE or a WARNING. I\n> am leaning to the latter.\n\nNOTICE seems sufficient to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 18:26:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "\n\nCan I buy an extra day or two? I'm in DC till Saturday then there's the\ntrip home. How 'bout a wednesday beta release?\n\nOn Thu, 19 Sep 2002, Marc G. Fournier wrote:\n\n> On Wed, 18 Sep 2002, Tom Lane wrote:\n>\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > ... I'm going to do up a beta2 on Friday due to the number changes\n> > > that have been committed over the past 2 weeks ...\n> >\n> > I want to review and apply Alvaro's attisinherited fix before we go\n> > beta2. I think I can get that done tomorrow. I can't recall any\n> > other initdb-forcing fixes in the pipeline; Bruce, do you?\n> >\n> > Which is not to say we don't have a ton of known bugs to fix...\n> > I'd lean towards a Monday-ish beta2 myself.\n>\n> 'k, then let's go with a Sunday night packaging, Monday announce, so that\n> we have beta2 testing starting right at the beginning of the week ...\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 19 Sep 2002 23:00:26 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "I have to say that during beta testing I ALWAYS do an initdb and a reload \njust to make sure the pg_dumpall and pg_restore stuff works right. Plus \nto make sure problems that might only pop up with a new initdb are found \nas well. I probably \"burn it to the ground\" several times on a single \nbeta just to test different data sets and I prefer a clean database when \ndoing that so I'm sure the problems I see are from just that one dataset.\n\nSo, Requiring an initdb for every beta wouldn't bother me one bit. To me \nyou test a beta by migrating to it just like it was a production version, \nand that means a new build from the ground up, initdb and all.\n\nOn 18 Sep 2002, Neil Conway wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> > > We should get _all_ the known initdb-related issues into the code\n> > > before we go beta2 or beta3 is going to require another initdb.\n> > \n> > Right, and? How many times in the past has it been the last beta in\n> > the cycle that forced the initdb? Are you able to guarantee that\n> > there won't* be another initdb required if we wait until mid-next\n> > week?\n> \n> I completely agree with Bruce here. Requiring an initdb for every beta\n> release significantly reduces the number of people who will be willing\n> to try it out -- so initdb's between betas are not disasterous, but\n> should be avoided if possible.\n> \n> Since waiting till next week significantly reduces the chance of an\n> initdb for beta3 and has no serious disadvantage that I can see, it\n> seems the right decision to me.\n> \n> Cheers,\n> \n> Neil\n> \n> \n\n",
"msg_date": "Tue, 24 Sep 2002 11:23:04 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 on Friday Morning (Was: Re: Open 7.3 items)"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have written a test function, that will create a sequence and a table,\nthan insert one million rows into the table, analyze the table and create an\nindex on one of the columns.\n(so this will all happen inside on transaction)\n\nAfter doing that, the backend will crash.\n(but the data will be inserted)\n\nIf I comment out the table analyzing and the create index (I have not tested\nwhich on leads to the crash), everything works fine. I have sent a copy of\nthe error log, the psql session, the function and some parts of my\npostgresql.conf file.\n\nMy system is RedHat 7.2, Kernel 2.4.9-34, glibc-2.2.4, gcc 2.96, PostgreSQL\n7.2.2 built from source.\n\nIf you want, I could try other combinations of create/insert/analyze etc. to\ntest the exact steps needed to crash the backend.\n\nI know what I am doing is not really standard. This was rather a stability\ntest of postgres :). What do you think about this all?\n\nBest Regards,\nMichael Paesold\n\n\n--> logfile:\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n'bench_big_pkey' for table 'bench_big'\nDEBUG: recycled transaction log file 000000000000009F\n[...skipping: recycled transaction log file 00000000000000A0 to\n00000000000000AE]\nDEBUG: recycled transaction log file 00000000000000B0\nDEBUG: Analyzing bench_big\nDEBUG: server process (pid 13840) was terminated by signal 11\nDEBUG: terminating any other active server processes\nDEBUG: all server processes terminated; reinitializing shared memory and\nsemaphores\nDEBUG: database system was interrupted at 2002-09-17 11:45:56 CEST\nDEBUG: checkpoint record is at 0/B41170A4\nDEBUG: redo record is at 0/B400DF34; undo record is at 0/0; shutdown FALSE\nDEBUG: next transaction id: 96959; next oid: 6282462\nDEBUG: database system was not properly shut down; automatic recovery in\nprogress\nDEBUG: redo starts at 0/B400DF34\nDEBUG: ReadRecord: record with zero length at 0/B495F754\nDEBUG: redo done at 0/B495F730\nDEBUG: recycled 
transaction log file 00000000000000B2\nDEBUG: recycled transaction log file 00000000000000B1\nDEBUG: recycled transaction log file 00000000000000B3\nDEBUG: database system is ready\n\nThe first time I tried the insert, there was an additional notice from\nanother backend, just after the line \"DEBUG: terminating any other active\nserver processes\":\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend\n died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am\n going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n\n--> in psql:\nbilling=# select create_benchmark ();\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n'bench_big_pkey' for table 'bench_big'\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!# \\c\nPassword:\nYou are now connected to database billing as user billing.\nbilling=# select real_time from bench_big where int_id in (1, 1000000);\n real_time\n-------------------------------\n 2002-09-17 11:32:22.63334+02\n 2002-09-17 11:46:16.601282+02\n(2 rows)\n\n--> all rows have definatly been inserted!\n\n\n--> the trigger function:\n\nCREATE OR REPLACE FUNCTION create_benchmark () RETURNS BOOLEAN AS '\nDECLARE\n char100 VARCHAR :=\n\\'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZäöüÄÖÜß1234567890!"§$%\n&/()=?+*#<>|-_,;.:^°{}´`[]\\';\n r1 INTEGER;\n r2 INTEGER;\n r3 INTEGER;\nBEGIN\n CREATE SEQUENCE bench_seq;\n\n CREATE TABLE bench_big (\n int_id INTEGER NOT NULL default nextval(\\'bench_seq\\'),\n bigint_id BIGINT NOT NULL,\n sometext1 VARCHAR (50),\n sometext2 VARCHAR (50),\n sometext3 VARCHAR (50),\n trx_time TIME WITHOUT TIME ZONE NOT NULL default CURRENT_TIME,\n trx_timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL default\nCURRENT_TIMESTAMP,\n trx_date DATE NOT NULL default CURRENT_DATE,\n real_time TIMESTAMP NOT NULL default timeofday(),\n someboolean1 BOOLEAN NOT NULL,\n someboolean2 BOOLEAN NOT NULL,\n PRIMARY KEY (int_id)\n );\n\n FOR i IN 1..1000000 LOOP\n r1 = CAST( RANDOM() * 49 AS INTEGER );\n r2 = CAST( RANDOM() * 49 AS INTEGER );\n r3 = CAST( RANDOM() * 49 AS INTEGER );\n\n INSERT INTO bench_big\n (bigint_id, sometext1, sometext2, sometext3, someboolean1,\nsomeboolean2)\n VALUES (\n CAST(RANDOM() * 10000000000 AS BIGINT),\n SUBSTR(char100, 50, 49), -- this should be r1, r1 (but doesn't work!)\n SUBSTR(char100, 50, 49), -- this should be r2, r2 (but doesn't work!)\n SUBSTR(char100, 50, 49), -- this should be r3, r3 (but doesn't work!)\n CASE WHEN r1 > 25 THEN TRUE ELSE FALSE END,\n CASE WHEN r3 > 10 THEN TRUE ELSE FALSE END\n );\n END LOOP;\n\n -- WARNING: un-commenting these lines could crash your postgres\n -- CREATE INDEX bench_bigint_id_idx ON bench_big(bigint_id);\n -- ANALYZE bench_big;\n\n RETURN 
TRUE;\nEND;\n' LANGUAGE 'plpgsql';\n\n\n--> Perhaps relevant parts of my postgresql.conf file:\n\n# Shared Memory Size\n#\nshared_buffers = 12288 # 2*max_connections, min 16 (one usually 8Kb)\nmax_fsm_relations = 100 # min 10, fsm is free space map (number of\ntables)\nmax_fsm_pages = 20000 # min 1000, fsm is free space map (one about 8Kb)\nmax_locks_per_transaction = 64 # min 10\nwal_buffers = 8 # min 4\n\n# Non-shared Memory Sizes\n#\nsort_mem = 4096 # min 32 (in Kb)\nvacuum_mem = 16384 # min 1024\n\n# Write-ahead log (WAL)\n#\nwal_files = 8 # range 0-64, default 0\nwal_sync_method = fdatasync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or open_datasync\n#wal_debug = 0 # range 0-16\n#commit_delay = 0 # range 0-100000\n#commit_siblings = 5 # range 1-1000\n#checkpoint_segments = 3 # in logfile segments (16MB each), min 1, default\n3\n#checkpoint_timeout = 300 # in seconds, range 30-3600\n#fsync = true\n\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 10:09:31 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": true,
"msg_subject": "Backend crash (long)"
},
{
"msg_contents": "I've definitely seen errors from including vacuum and/or analyze\nstatements in functions, I think I've seen crashes too. If you check the\ndocs I'm pretty sure they mention the specifics of not being able to use\nsuch statements.\n\nRobert Treat\n\nOn Wed, 2002-09-18 at 04:09, Michael Paesold wrote:\n> Hi all,\n> \n> I have written a test function, that will create a sequence and a table,\n> than insert one million rows into the table, analyze the table and create an\n> index on one of the columns.\n> (so this will all happen inside on transaction)\n> \n> After doing that, the backend will crash.\n> (but the data will be inserted)\n> \n> If I comment out the table analyzing and the create index (I have not tested\n> which on leads to the crash), everything works fine. I have sent a copy of\n> the error log, the psql session, the function and some parts of my\n> postgresql.conf file.\n> \n> My system is RedHat 7.2, Kernel 2.4.9-34, glibc-2.2.4, gcc 2.96, PostgreSQL\n> 7.2.2 built from source.\n> \n> If you want, I could try other combinations of create/insert/analyze etc. to\n> test the exact steps needed to crash the backend.\n> \n> I know what I am doing is not really standard. This was rather a stability\n> test of postgres :). 
What do you think about this all?\n> \n> Best Regards,\n> Michael Paesold\n> \n> \n> --> logfile:\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n> 'bench_big_pkey' for table 'bench_big'\n> DEBUG: recycled transaction log file 000000000000009F\n> [...skipping: recycled transaction log file 00000000000000A0 to\n> 00000000000000AE]\n> DEBUG: recycled transaction log file 00000000000000B0\n> DEBUG: Analyzing bench_big\n> DEBUG: server process (pid 13840) was terminated by signal 11\n> DEBUG: terminating any other active server processes\n> DEBUG: all server processes terminated; reinitializing shared memory and\n> semaphores\n> DEBUG: database system was interrupted at 2002-09-17 11:45:56 CEST\n> DEBUG: checkpoint record is at 0/B41170A4\n> DEBUG: redo record is at 0/B400DF34; undo record is at 0/0; shutdown FALSE\n> DEBUG: next transaction id: 96959; next oid: 6282462\n> DEBUG: database system was not properly shut down; automatic recovery in\n> progress\n> DEBUG: redo starts at 0/B400DF34\n> DEBUG: ReadRecord: record with zero length at 0/B495F754\n> DEBUG: redo done at 0/B495F730\n> DEBUG: recycled transaction log file 00000000000000B2\n> DEBUG: recycled transaction log file 00000000000000B1\n> DEBUG: recycled transaction log file 00000000000000B3\n> DEBUG: database system is ready\n> \n> The first time I tried the insert, there was an additional notice from\n> another backend, just after the line \"DEBUG: terminating any other active\n> server processes\":\n> NOTICE: Message from PostgreSQL backend:\n> The Postmaster has informed me that some other backend\n> died abnormally and possibly corrupted shared memory.\n> I have rolled back the current transaction and am\n> going to terminate your database system connection and exit.\n> Please reconnect to the database system and repeat your query.\n> \n> --> in psql:\n> billing=# select create_benchmark ();\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n> 'bench_big_pkey' for table 
'bench_big'\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !# \\c\n> Password:\n> You are now connected to database billing as user billing.\n> billing=# select real_time from bench_big where int_id in (1, 1000000);\n> real_time\n> -------------------------------\n> 2002-09-17 11:32:22.63334+02\n> 2002-09-17 11:46:16.601282+02\n> (2 rows)\n> \n> --> all rows have definatly been inserted!\n> \n> \n> --> the trigger function:\n> \n> CREATE OR REPLACE FUNCTION create_benchmark () RETURNS BOOLEAN AS '\n> DECLARE\n> char100 VARCHAR :=\n> \\'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZäöüÄÖÜß1234567890!\"§$%\n> &/()=?+*#<>|-_,;.:^°{}´`[]\\';\n> r1 INTEGER;\n> r2 INTEGER;\n> r3 INTEGER;\n> BEGIN\n> CREATE SEQUENCE bench_seq;\n> \n> CREATE TABLE bench_big (\n> int_id INTEGER NOT NULL default nextval(\\'bench_seq\\'),\n> bigint_id BIGINT NOT NULL,\n> sometext1 VARCHAR (50),\n> sometext2 VARCHAR (50),\n> sometext3 VARCHAR (50),\n> trx_time TIME WITHOUT TIME ZONE NOT NULL default CURRENT_TIME,\n> trx_timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL default\n> CURRENT_TIMESTAMP,\n> trx_date DATE NOT NULL default CURRENT_DATE,\n> real_time TIMESTAMP NOT NULL default timeofday(),\n> someboolean1 BOOLEAN NOT NULL,\n> someboolean2 BOOLEAN NOT NULL,\n> PRIMARY KEY (int_id)\n> );\n> \n> FOR i IN 1..1000000 LOOP\n> r1 = CAST( RANDOM() * 49 AS INTEGER );\n> r2 = CAST( RANDOM() * 49 AS INTEGER );\n> r3 = CAST( RANDOM() * 49 AS INTEGER );\n> \n> INSERT INTO bench_big\n> (bigint_id, sometext1, sometext2, sometext3, someboolean1,\n> someboolean2)\n> VALUES (\n> CAST(RANDOM() * 10000000000 AS BIGINT),\n> SUBSTR(char100, 50, 49), -- this should be r1, r1 (but doesn't work!)\n> SUBSTR(char100, 50, 49), -- this should be r2, r2 (but doesn't work!)\n> SUBSTR(char100, 50, 49), -- this should be r3, r3 (but doesn't work!)\n> 
CASE WHEN r1 > 25 THEN TRUE ELSE FALSE END,\n> CASE WHEN r3 > 10 THEN TRUE ELSE FALSE END\n> );\n> END LOOP;\n> \n> -- WARNING: un-commenting these lines could crash your postgres\n> -- CREATE INDEX bench_bigint_id_idx ON bench_big(bigint_id);\n> -- ANALYZE bench_big;\n> \n> RETURN TRUE;\n> END;\n> ' LANGUAGE 'plpgsql';\n> \n> \n> --> Perhaps relevant parts of my postgresql.conf file:\n> \n> # Shared Memory Size\n> #\n> shared_buffers = 12288 # 2*max_connections, min 16 (one usually 8Kb)\n> max_fsm_relations = 100 # min 10, fsm is free space map (number of\n> tables)\n> max_fsm_pages = 20000 # min 1000, fsm is free space map (one about 8Kb)\n> max_locks_per_transaction = 64 # min 10\n> wal_buffers = 8 # min 4\n> \n> # Non-shared Memory Sizes\n> #\n> sort_mem = 4096 # min 32 (in Kb)\n> vacuum_mem = 16384 # min 1024\n> \n> # Write-ahead log (WAL)\n> #\n> wal_files = 8 # range 0-64, default 0\n> wal_sync_method = fdatasync # the default varies across platforms:\n> # # fsync, fdatasync, open_sync, or open_datasync\n> #wal_debug = 0 # range 0-16\n> #commit_delay = 0 # range 0-100000\n> #commit_siblings = 5 # range 1-1000\n> #checkpoint_segments = 3 # in logfile segments (16MB each), min 1, default\n> 3\n> #checkpoint_timeout = 300 # in seconds, range 30-3600\n> #fsync = true\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n\n",
"msg_date": "18 Sep 2002 10:08:19 -0400",
"msg_from": "Robert Treat <xzilla@users.sourceforge.net>",
"msg_from_op": false,
"msg_subject": "Re: Backend crash (long)"
},
{
"msg_contents": "\"Michael Paesold\" <mpaesold@gmx.at> writes:\n> I have written a test function, that will create a sequence and a table,\n> than insert one million rows into the table, analyze the table and create an\n> index on one of the columns.\n\nYou can't run ANALYZE inside a function. In CVS tip there's a check to\nprevent the VACUUM variant of this problem, but I'm not sure if it\nhandles the ANALYZE variant (yet).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 11:03:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Backend crash (long) "
},
{
"msg_contents": "On Wed, 2002-09-18 at 11:03, Tom Lane wrote:\n> \"Michael Paesold\" <mpaesold@gmx.at> writes:\n> > I have written a test function, that will create a sequence and a table,\n> > than insert one million rows into the table, analyze the table and create an\n> > index on one of the columns.\n> \n> You can't run ANALYZE inside a function. In CVS tip there's a check to\n> prevent the VACUUM variant of this problem, but I'm not sure if it\n> handles the ANALYZE variant (yet).\n\n\nANALYZE in 7.3 works fine in a transaction, so it shouldn't it be able\nto work in a function as well?\n\nrbt=# begin;\nBEGIN\nrbt=# analyze;\nANALYZE\nrbt=# commit;\nCOMMIT\nrbt=# create function test() returns bool as 'analyze; select true;'\nlanguage 'sql';\nCREATE FUNCTION\nrbt=# select test();\n test \n------\n t\n(1 row)\n\n\n\n-- \n Rod Taylor\n\n",
"msg_date": "18 Sep 2002 11:11:00 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Backend crash (long)"
},
{
"msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n> On Wed, 2002-09-18 at 11:03, Tom Lane wrote:\n>> \"Michael Paesold\" <mpaesold@gmx.at> writes:\n> I have written a test function, that will create a sequence and a table,\n> than insert one million rows into the table, analyze the table and create an\n> index on one of the columns.\n>> \n>> You can't run ANALYZE inside a function. In CVS tip there's a check to\n>> prevent the VACUUM variant of this problem, but I'm not sure if it\n>> handles the ANALYZE variant (yet).\n\n> ANALYZE in 7.3 works fine in a transaction, so it shouldn't it be able\n> to work in a function as well?\n\nPossibly it's okay in 7.3; I have a note to look at that, but haven't\ndone it yet. I think REINDEX has the same problem btw ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 11:15:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Backend crash (long) "
}
] |
[
{
"msg_contents": "Hello,\n\nDoes anyone have a CURRENT cvsup file for 7.3?\n\nI tried to follow the link on the developer website and it comes up 404.\n\nI've got one for 7.2-STABLE, but it is old and does not include the\nstuff that was broken out.\n\nBTW, I've install the 7.3-BETA, and so far everything is working the way it should\non FreeBSD 4.6-STABLE.\n\nThanks!\n\nGB\n\n-- \nGB Clark II | Roaming FreeBSD Admin\ngclarkii@VSServices.COM | General Geek \n CTHULU for President - Why choose the lesser of two evils?\n",
"msg_date": "Wed, 18 Sep 2002 03:41:55 -0500",
"msg_from": "GB Clark <postgres@vsservices.com>",
"msg_from_op": true,
"msg_subject": "CVsup file"
}
] |
[
{
"msg_contents": "Tiny patch fixing small documentation typo.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Wed, 18 Sep 2002 13:07:26 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "please apply patch to contrib/ltree"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nOleg Bartunov wrote:\n> Tiny patch fixing small documentation typo.\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 16:54:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: please apply patch to contrib/ltree"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nOleg Bartunov wrote:\n> Tiny patch fixing small documentation typo.\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 23:54:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: please apply patch to contrib/ltree"
}
] |
[
{
"msg_contents": "Greetings,\n\nAs far as I use the txtidx data structure in conjunction with gist indexing\nto make a word indexing of a very large UNICODE db, I've implemented a PostgreSQL\nfunction that uses libunac to unaccent TEXT fileds.\n\nThe resulting text is in UTF-8, but you can modify it in the sources with\nan appropriate value (using iconv charset names).\n\nGet libunac from: http://www.nongnu.org/unac/ (it uses iconv)\n\nExtract the archive, compile it (make). Move pg_unac.so to your postgresql\nshared libraries dir.\n\nLink it in postgresql:\n\nCREATE FUNCTION unac(TEXT) RETURNS TEXT AS 'path_to_pg_unac.so' LANGUAGE\nC;\n\nWhat about integrating unaccent libraries directly in tsearch? It is useful\nfor french search engines (for instance).\n\nBye.\n\nNhan NGO DINH\n\n\n__________________________________________________________________\nTiscali Ricaricasa\nla prima prepagata per navigare in Internet a meno di un'urbana e\nrisparmiare su tutte le tue telefonate. Acquistala on line e non avrai\nnessun costo di attivazione né di ricarica!\nhttp://ricaricasaonline.tiscali.it/",
"msg_date": "Wed, 18 Sep 2002 12:14:49 +0200",
"msg_from": "nngodinh@tiscali.it",
"msg_from_op": true,
"msg_subject": "unaccent"
},
{
"msg_contents": "On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n\n> Greetings,\n>\n> As far as I use the txtidx data structure in conjunction with gist indexing\n> to make a word indexing of a very large UNICODE db, I've implemented a PostgreSQL\n> function that uses libunac to unaccent TEXT fileds.\n>\n> The resulting text is in UTF-8, but you can modify it in the sources with\n> an appropriate value (using iconv charset names).\n>\n> Get libunac from: http://www.nongnu.org/unac/ (it uses iconv)\n>\n> Extract the archive, compile it (make). Move pg_unac.so to your postgresql\n> shared libraries dir.\n>\n> Link it in postgresql:\n>\n> CREATE FUNCTION unac(TEXT) RETURNS TEXT AS 'path_to_pg_unac.so' LANGUAGE\n> C;\n>\n> What about integrating unaccent libraries directly in tsearch? It is useful\n> for french search engines (for instance).\n\nI think better to have separate module contrib/unac and document using\nit with tsearch. Please write us a couple of lines about using\nyour function and we'll add them into tsearch documentation.\n\nbtw, use palloc instead of malloc in postgresql functions .\n\n>\n> Bye.\n>\n> Nhan NGO DINH\n>\n>\n> __________________________________________________________________\n> Tiscali Ricaricasa\n> la prima prepagata per navigare in Internet a meno di un'urbana e\n> risparmiare su tutte le tue telefonate. Acquistala on line e non avrai\n> nessun costo di attivazione né di ricarica!\n> http://ricaricasaonline.tiscali.it/\n>\n>\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 18 Sep 2002 15:08:59 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: unaccent"
},
{
"msg_contents": "On Wed, Sep 18, 2002 at 03:08:59PM +0300, Oleg Bartunov wrote:\n> On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n> >\n> > Get libunac from: http://www.nongnu.org/unac/ (it uses iconv)\n> >\n> > Extract the archive, compile it (make). Move pg_unac.so to your postgresql\n> > shared libraries dir.\n> >\n> I think better to have separate module contrib/unac and document using\n> it with tsearch. Please write us a couple of lines about using\n> your function and we'll add them into tsearch documentation.\n\n I think about --with-unaccent for PostgreSQL and to_ascii() in\n main tree. Comment?\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 18 Sep 2002 14:24:26 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: unaccent"
},
{
"msg_contents": "Not \"to_ascii\", since there are so many extended UNICODE characters that\ndoesn't have any accent and should not be converted to an ASCII character.\n\n>-- Messaggio Originale --\n>Date: Wed, 18 Sep 2002 14:24:26 +0200\n>From: Karel Zak <zakkr@zf.jcu.cz>\n>To: Oleg Bartunov <oleg@sai.msu.su>\n>Cc: nngodinh@tiscali.it, pgsql-hackers@postgresql.org\n>Subject: Re: [HACKERS] unaccent\n>\n>\n>On Wed, Sep 18, 2002 at 03:08:59PM +0300, Oleg Bartunov wrote:\n>> On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n>> >\n>> > Get libunac from: http://www.nongnu.org/unac/ (it uses iconv)\n>> >\n>> > Extract the archive, compile it (make). Move pg_unac.so to your postgresql\n>> > shared libraries dir.\n>> >\n>> I think better to have separate module contrib/unac and document using\n>> it with tsearch. Please write us a couple of lines about using\n>> your function and we'll add them into tsearch documentation.\n>\n> I think about --with-unaccent for PostgreSQL and to_ascii() in\n> main tree. Comment?\n>\n> Karel\n>\n>--\n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n>\n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n\n\n__________________________________________________________________\nTiscali Ricaricasa\nla prima prepagata per navigare in Internet a meno di un'urbana e\nrisparmiare su tutte le tue telefonate. Acquistala on line e non avrai\nnessun costo di attivazione né di ricarica!\nhttp://ricaricasaonline.tiscali.it/\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 14:42:05 +0200",
"msg_from": "nngodinh@tiscali.it",
"msg_from_op": true,
"msg_subject": "Re: unaccent"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Karel Zak wrote:\n\n> On Wed, Sep 18, 2002 at 03:08:59PM +0300, Oleg Bartunov wrote:\n> > On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n> > >\n> > > Get libunac from: http://www.nongnu.org/unac/ (it uses iconv)\n> > >\n> > > Extract the archive, compile it (make). Move pg_unac.so to your postgresql\n> > > shared libraries dir.\n> > >\n> > I think better to have separate module contrib/unac and document using\n> > it with tsearch. Please write us a couple of lines about using\n> > your function and we'll add them into tsearch documentation.\n>\n> I think about --with-unaccent for PostgreSQL and to_ascii() in\n> main tree. Comment?\n\nHmm, it'd require linking yet another library. contrib module is\na standard way to test/develope possible future feature.\n\n>\n> Karel\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 18 Sep 2002 16:46:39 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: unaccent"
},
{
"msg_contents": "nngodinh@tiscali.it writes:\n\n> Not \"to_ascii\", since there are so many extended UNICODE characters that\n> doesn't have any accent and should not be converted to an ASCII character.\n\nReally, the accent conversion should be part of the character set\nconversion routines. At least my local iconv does that.\n\nIn general, the determination of what is an accent and how to convert it\nis both dependent on locale and the intended usage. It's not clear how\nthat should be handled.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 19 Sep 2002 19:53:28 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: unaccent"
}
] |
[
{
"msg_contents": "Greetings,\n\nDoes anyone know a function that strips ANY occurence of a given character\nfrom a TEXT?\n\nThx.\n\nNhan NGO DINH\n\n\n__________________________________________________________________\nTiscali Ricaricasa\nla prima prepagata per navigare in Internet a meno di un'urbana e\nrisparmiare su tutte le tue telefonate. Acquistala on line e non avrai\nnessun costo di attivazione né di ricarica!\nhttp://ricaricasaonline.tiscali.it/\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 12:18:48 +0200",
"msg_from": "nngodinh@tiscali.it",
"msg_from_op": true,
"msg_subject": "strip a character from text"
},
{
"msg_contents": "On Wed, 2002-09-18 at 11:18, nngodinh@tiscali.it wrote:\n> Greetings,\n> \n> Does anyone know a function that strips ANY occurence of a given character\n> from a TEXT?\n\nIt sounds like a job for a PL/Perl function.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Give, and it shall be given unto you; good measure, \n pressed down, and shaken together, and running over, \n shall men pour into your lap. For by your standard of \n measure it will be measured to in return.\"\n Luke 6:38 \n\n",
"msg_date": "18 Sep 2002 13:30:49 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: strip a character from text"
},
{
"msg_contents": "I'm about to write a C function... If I can't found alternatives.\n\n>-- Messaggio Originale --\n>Subject: Re: [HACKERS] strip a character from text\n>From: Oliver Elphick <olly@lfix.co.uk>\n>To: nngodinh@tiscali.it\n>Cc: pgsql-hackers@postgresql.org\n>Date: 18 Sep 2002 13:30:49 +0100\n>\n>\n>On Wed, 2002-09-18 at 11:18, nngodinh@tiscali.it wrote:\n>> Greetings,\n>>\n>> Does anyone know a function that strips ANY occurence of a given character\n>> from a TEXT?\n>\n>It sounds like a job for a PL/Perl function.\n>\n>--\n>Oliver Elphick Oliver.Elphick@lfix.co.uk\n>Isle of Wight, UK\n>http://www.lfix.co.uk/oliver\n>GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Give, and it shall be given unto you; good measure,\n> pressed down, and shaken together, and running over,\n> shall men pour into your lap. For by your standard of\n> measure it will be measured to in return.\"\n> Luke 6:38\n>\n\n\n__________________________________________________________________\nTiscali Ricaricasa\nla prima prepagata per navigare in Internet a meno di un'urbana e\nrisparmiare su tutte le tue telefonate. Acquistala on line e non avrai\nnessun costo di attivazione né di ricarica!\nhttp://ricaricasaonline.tiscali.it/\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 14:43:51 +0200",
"msg_from": "nngodinh@tiscali.it",
"msg_from_op": true,
"msg_subject": "Re: strip a character from text"
},
{
"msg_contents": "nngodinh@tiscali.it wrote:\n> I'm about to write a C function... If I can't found alternatives.\n> \n> \n\nNote that in 7.3 (in beta now) there is a new replace() function which will do \nthis:\n\nregression=# select replace('abcdefghabcdef','c','');\n replace\n--------------\n abdefghabdef\n\n\nJoe\n\n\n",
"msg_date": "Wed, 18 Sep 2002 09:41:14 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: strip a character from text"
}
] |
[
{
"msg_contents": "Hello,\n\n I am Spanish student of Computer Science. I have read through the \npart of the postgreSQL documentacion about genetic query optimization\nin PostgreSQL. \n\n I am following a Genetic Algorithms course at my University. I have\nto do an assigment giving an example of optimization with genetic\nalgorithms, and my teacher suggested me to do it using databases, but\nactually he does not know very much about this. \n \n I have done some kind of research from the references given in the docs\nand get as much information I had available (mostly from the Berkeley web \nsite), but I dont get examples and information for a newbie in this kind \nof matters.\n\n I would like to know if you have any sample program, or simple query\noptimization which I could start playing around with, to show the\noptimization posibilities of genetic algorithms in PostgreSQL.\n\n If this is not possible, If you dont mind please give me any further\nlinks that could be interesting for me.\n\n Many thanks in advance.\n\n Miguel\n\n",
"msg_date": "Wed, 18 Sep 2002 11:28:57 MET",
"msg_from": "iafmgc@unileon.es",
"msg_from_op": true,
"msg_subject": "genetic algorithm in PostgreSQL"
},
{
"msg_contents": "iafmgc@unileon.es writes:\n> I would like to know if you have any sample program, or simple\n> query optimization which I could start playing around with, to show\n> the optimization posibilities of genetic algorithms in PostgreSQL.\n\nWell, the code for the GEQO implementation in PostgreSQL is open --\nyou can just take a look at that. I'm not aware of any simpler\nversions, unfortunately.\n\n> If this is not possible, If you dont mind please give me any\n> further links that could be interesting for me.\n\nThere's some literature on the subject that you should probably read:\n\n http://citeseer.nj.nec.com/bennett91genetic.html\n http://citeseer.nj.nec.com/stillger96genetic.html\n\nIf you're unfamiliar with database query optimization itself, I found\nthis to be a good survey of the field:\n\n http://citeseer.nj.nec.com/371707.html\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n",
"msg_date": "18 Sep 2002 12:39:35 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": false,
"msg_subject": "Re: genetic algorithm in PostgreSQL"
}
] |
[
{
"msg_contents": "The best way to use it is quite simple. If you want to index the table \"titles\"\nand \"title\" is the field containing the text to be indexed, you can create\nanother unaccented field, for instance \"utitle\".\n\nUPDATE titles SET utitle = unac(title);\n\nOf course you can set it up as a trigger function. Then you can use utitle\nwith txt2txtidx and tsearch.\n\nAnother solution is to generate the txtidx field (i.e. titleidx) directly\nusing unac:\n\nUPDATE titles SET titleidx = txt2txtidx(unac(title));\n\nBut the problem is that I've not succeeded using it with tsearch because\n(of course) it doesn't allow functions as parameters. So my first idea was\nto integrate unac in tsearch.\n\nBye.\n\n>-- Messaggio Originale --\n>Date: Wed, 18 Sep 2002 15:08:59 +0300 (GMT)\n>From: Oleg Bartunov <oleg@sai.msu.su>\n>To: nngodinh@tiscali.it\n>Cc: pgsql-hackers@postgresql.org\n>Subject: Re: [HACKERS] unaccent\n>\n>\n>On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n>\n>> Greetings,\n>>\n>> As far as I use the txtidx data structure in conjunction with gist indexing\n>> to make a word indexing of a very large UNICODE db, I've implemented\na\n>PostgreSQL\n>> function that uses libunac to unaccent TEXT fileds.\n>>\n>> The resulting text is in UTF-8, but you can modify it in the sources\nwith\n>> an appropriate value (using iconv charset names).\n>>\n>> Get libunac from: http://www.nongnu.org/unac/ (it uses iconv)\n>>\n>> Extract the archive, compile it (make). Move pg_unac.so to your postgresql\n>> shared libraries dir.\n>>\n>> Link it in postgresql:\n>>\n>> CREATE FUNCTION unac(TEXT) RETURNS TEXT AS 'path_to_pg_unac.so' LANGUAGE\n>> C;\n>>\n>> What about integrating unaccent libraries directly in tsearch? It is\nuseful\n>> for french search engines (for instance).\n>\n>I think better to have separate module contrib/unac and document using\n>it with tsearch. Please write us a couple of lines about using\n>your function and we'll add them into tsearch documentation.\n>\n>btw, use palloc instead of malloc in postgresql functions .\n>\n>>\n>> Bye.\n>>\n>> Nhan NGO DINH\n>>\n>>\n>> __________________________________________________________________\n>> Tiscali Ricaricasa\n>> la prima prepagata per navigare in Internet a meno di un'urbana e\n>> risparmiare su tutte le tue telefonate. Acquistala on line e non avrai\n>> nessun costo di attivazione n? di ricarica!\n>> http://ricaricasaonline.tiscali.it/\n>>\n>>\n>>\n>>\n>\n>\tRegards,\n>\t\tOleg\n>_____________________________________________________________\n>Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>Sternberg Astronomical Institute, Moscow University (Russia)\n>Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n>phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n\n\n\n__________________________________________________________________\nTiscali Ricaricasa\nla prima prepagata per navigare in Internet a meno di un'urbana e\nrisparmiare su tutte le tue telefonate. Acquistala on line e non avrai\nnessun costo di attivazione né di ricarica!\nhttp://ricaricasaonline.tiscali.it/\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 14:37:40 +0200",
"msg_from": "nngodinh@tiscali.it",
"msg_from_op": true,
"msg_subject": "Re: unaccent"
},
{
"msg_contents": "On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n\n> The best way to use it is quite simple. If you want to index the table \"titles\"\n> and \"title\" is the field containing the text to be indexed, you can create\n> another unaccented field, for instance \"utitle\".\n>\n> UPDATE titles SET utitle = unac(title);\n>\n> Of course you can set it up as a trigger function. Then you can use utitle\n> with txt2txtidx and tsearch.\n>\n> Another solution is to generate the txtidx field (i.e. titleidx) directly\n> using unac:\n>\n> UPDATE titles SET titleidx = txt2txtidx(unac(title));\n>\n> But the problem is that I've not succeeded using it with tsearch because\n> (of course) it doesn't allow functions as parameters. So my first idea was\n> to integrate unac in tsearch.\n\nwhat's exactly a problem ?\nUPDATE titles SET titleidx = txt2txtidx(unac(title));\nworks fine. Perhaps, you have a problem with query ?\n\n>\n> Bye.\n>\n> >-- Messaggio Originale --\n> >Date: Wed, 18 Sep 2002 15:08:59 +0300 (GMT)\n> >From: Oleg Bartunov <oleg@sai.msu.su>\n> >To: nngodinh@tiscali.it\n> >Cc: pgsql-hackers@postgresql.org\n> >Subject: Re: [HACKERS] unaccent\n> >\n> >\n> >On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n> >\n> >> Greetings,\n> >>\n> >> As far as I use the txtidx data structure in conjunction with gist indexing\n> >> to make a word indexing of a very large UNICODE db, I've implemented\n> a\n> >PostgreSQL\n> >> function that uses libunac to unaccent TEXT fileds.\n> >>\n> >> The resulting text is in UTF-8, but you can modify it in the sources\n> with\n> >> an appropriate value (using iconv charset names).\n> >>\n> >> Get libunac from: http://www.nongnu.org/unac/ (it uses iconv)\n> >>\n> >> Extract the archive, compile it (make). Move pg_unac.so to your postgresql\n> >> shared libraries dir.\n> >>\n> >> Link it in postgresql:\n> >>\n> >> CREATE FUNCTION unac(TEXT) RETURNS TEXT AS 'path_to_pg_unac.so' LANGUAGE\n> >> C;\n> >>\n> >> What about integrating unaccent libraries directly in tsearch? It is\n> useful\n> >> for french search engines (for instance).\n> >\n> >I think better to have separate module contrib/unac and document using\n> >it with tsearch. Please write us a couple of lines about using\n> >your function and we'll add them into tsearch documentation.\n> >\n> >btw, use palloc instead of malloc in postgresql functions .\n> >\n> >>\n> >> Bye.\n> >>\n> >> Nhan NGO DINH\n> >>\n> >>\n> >> __________________________________________________________________\n> >> Tiscali Ricaricasa\n> >> la prima prepagata per navigare in Internet a meno di un'urbana e\n> >> risparmiare su tutte le tue telefonate. Acquistala on line e non avrai\n> >> nessun costo di attivazione n? di ricarica!\n> >> http://ricaricasaonline.tiscali.it/\n> >>\n> >>\n> >>\n> >>\n> >\n> >\tRegards,\n> >\t\tOleg\n> >_____________________________________________________________\n> >Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> >Sternberg Astronomical Institute, Moscow University (Russia)\n> >Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> >phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 3: if posting/reading through Usenet, please send an appropriate\n> >subscribe-nomail command to majordomo@postgresql.org so that your\n> >message can get through to the mailing list cleanly\n>\n>\n>\n> __________________________________________________________________\n> Tiscali Ricaricasa\n> la prima prepagata per navigare in Internet a meno di un'urbana e\n> risparmiare su tutte le tue telefonate. Acquistala on line e non avrai\n> nessun costo di attivazione né di ricarica!\n> http://ricaricasaonline.tiscali.it/\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 18 Sep 2002 17:04:56 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: unaccent"
},
{
"msg_contents": "The txt2txtidx function works fine with unac. The problem is with the trigger:\n\ncreate trigger txtidxupdate before update or insert on titles for each row\nexecute procedure tsearch(titleidx, title);\n\nAs you know tsearch(titleidx, unac(title)) doesn't work.\n\n>-- Messaggio Originale --\n>Date: Wed, 18 Sep 2002 17:04:56 +0300 (GMT)\n>From: Oleg Bartunov <oleg@sai.msu.su>\n>To: nngodinh@tiscali.it\n>Cc: pgsql-hackers@postgresql.org\n>Subject: Re: [HACKERS] unaccent\n>\n>\n>On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n>\n>> The best way to use it is quite simple. If you want to index the table\n>\"titles\"\n>> and \"title\" is the field containing the text to be indexed, you can create\n>> another unaccented field, for instance \"utitle\".\n>>\n>> UPDATE titles SET utitle = unac(title);\n>>\n>> Of course you can set it up as a trigger function. Then you can use utitle\n>> with txt2txtidx and tsearch.\n>>\n>> Another solution is to generate the txtidx field (i.e. titleidx) directly\n>> using unac:\n>>\n>> UPDATE titles SET titleidx = txt2txtidx(unac(title));\n>>\n>> But the problem is that I've not succeeded using it with tsearch because\n>> (of course) it doesn't allow functions as parameters. So my first idea\n>was\n>> to integrate unac in tsearch.\n>\n>what's exactly a problem ?\n>UPDATE titles SET titleidx = txt2txtidx(unac(title));\n>works fine. Perhaps, you have a problem with query ?\n>\n>>\n>> Bye.\n>>\n>> >-- Messaggio Originale --\n>> >Date: Wed, 18 Sep 2002 15:08:59 +0300 (GMT)\n>> >From: Oleg Bartunov <oleg@sai.msu.su>\n>> >To: nngodinh@tiscali.it\n>> >Cc: pgsql-hackers@postgresql.org\n>> >Subject: Re: [HACKERS] unaccent\n>> >\n>> >\n>> >On Wed, 18 Sep 2002 nngodinh@tiscali.it wrote:\n>> >\n>> >> Greetings,\n>> >>\n>> >> As far as I use the txtidx data structure in conjunction with gist\nindexing\n>> >> to make a word indexing of a very large UNICODE db, I've implemented\n>> a\n>> >PostgreSQL\n>> >> function that uses libunac to unaccent TEXT fileds.\n>> >>\n>> >> The resulting text is in UTF-8, but you can modify it in the sources\n>> with\n>> >> an appropriate value (using iconv charset names).\n>> >>\n>> >> Get libunac from: http://www.nongnu.org/unac/ (it uses iconv)\n>> >>\n>> >> Extract the archive, compile it (make). Move pg_unac.so to your postgresql\n>> >> shared libraries dir.\n>> >>\n>> >> Link it in postgresql:\n>> >>\n>> >> CREATE FUNCTION unac(TEXT) RETURNS TEXT AS 'path_to_pg_unac.so' LANGUAGE\n>> >> C;\n>> >>\n>> >> What about integrating unaccent libraries directly in tsearch? It\nis\n>> useful\n>> >> for french search engines (for instance).\n>> >\n>> >I think better to have separate module contrib/unac and document using\n>> >it with tsearch. Please write us a couple of lines about using\n>> >your function and we'll add them into tsearch documentation.\n>> >\n>> >btw, use palloc instead of malloc in postgresql functions .\n>> >\n>> >>\n>> >> Bye.\n>> >>\n>> >> Nhan NGO DINH\n>> >>\n>> >>\n>> >> __________________________________________________________________\n>> >> Tiscali Ricaricasa\n>> >> la prima prepagata per navigare in Internet a meno di un'urbana e\n>> >> risparmiare su tutte le tue telefonate. Acquistala on line e non avrai\n>> >> nessun costo di attivazione n? di ricarica!\n>> >> http://ricaricasaonline.tiscali.it/\n>> >>\n>> >>\n>> >>\n>> >>\n>> >\n>> >\tRegards,\n>> >\t\tOleg\n>> >_____________________________________________________________\n>> >Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>> >Sternberg Astronomical Institute, Moscow University (Russia)\n>> >Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n>> >phone: +007(095)939-16-83, +007(095)939-23-83\n>> >\n>> >\n>> >---------------------------(end of broadcast)---------------------------\n>> >TIP 3: if posting/reading through Usenet, please send an appropriate\n>> >subscribe-nomail command to majordomo@postgresql.org so that your\n>> >message can get through to the mailing list cleanly\n>>\n>>\n>>\n>> __________________________________________________________________\n>> Tiscali Ricaricasa\n>> la prima prepagata per navigare in Internet a meno di un'urbana e\n>> risparmiare su tutte le tue telefonate. Acquistala on line e non avrai\n>> nessun costo di attivazione n? di ricarica!\n>> http://ricaricasaonline.tiscali.it/\n>>\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>>\n>\n>\tRegards,\n>\t\tOleg\n>_____________________________________________________________\n>Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>Sternberg Astronomical Institute, Moscow University (Russia)\n>Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n>phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n\n\n__________________________________________________________________\nTiscali Ricaricasa\nla prima prepagata per navigare in Internet a meno di un'urbana e\nrisparmiare su tutte le tue telefonate. Acquistala on line e non avrai\nnessun costo di attivazione né di ricarica!\nhttp://ricaricasaonline.tiscali.it/\n\n\n\n",
"msg_date": "Wed, 18 Sep 2002 16:28:04 +0200",
"msg_from": "nngodinh@tiscali.it",
"msg_from_op": true,
"msg_subject": "Re: unaccent"
}
] |
[
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n(From the SQL list:)\n\n> And we know it is a bug:\n\n> * to_char(0,'FM999.99') returns a period, to_char(1,'FM999.99') does not\n\nI took a look at this bug a week ago, and noticed that inside of the file \nsrc/backend/utils/adt/formatting.c\nwe are specifically causing the above behavior, perhaps in an effort to \nmimic Oracle's implementation of it. Unless I am missing something, it \nseems that we can simply take out the hack inside of the above file, \nor mark the bug as solved...\n\nSearch for the strings \"terrible\" and \"terible\" to find the spots \ninside of formatting.c\n\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200209180909\n\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQE9iIDvvJuQZxSWSsgRAqRLAJ9gV8oTnMFTsSmQzMdKppNlWW/TvACgvDu2\nf0TDVbi//F5jwZn7K9+9wLE=\n=TIs7\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Wed, 18 Sep 2002 13:41:38 -0000",
"msg_from": "greg@turnstep.com",
"msg_from_op": true,
"msg_subject": "The notorious to_char bug"
},
{
"msg_contents": "greg@turnstep.com writes:\n> (From the SQL list:)\n\n>> And we know it is a bug:\n>> * to_char(0,'FM999.99') returns a period, to_char(1,'FM999.99') does not\n\n> I took a look at this bug a week ago, and noticed that inside of the file \n> src/backend/utils/adt/formatting.c\n> we are specifically causing the above behavior, perhaps in an effort to \n> mimic Oracle's implementation of it.\n\nHm. Can anyone try these cases on Oracle? If the code goes out of its\nway to have this odd behavior, maybe it's because Oracle does too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 11:12:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The notorious to_char bug "
},
{
"msg_contents": "Oracle 8.1.7.2\n\nSQL> SELECT to_char(0,'FM999.99') AS tst_char FROM dual;\n\nTST_CHA\n-------\n0.\n\nSQL> SELECT to_char(1,'FM999.99') AS tst_char FROM dual;\n\nTST_CHA\n-------\n1.\n\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\nSent: Wednesday, September 18, 2002 9:12 AM\nTo: greg@turnstep.com\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] The notorious to_char bug\n\n\ngreg@turnstep.com writes:\n> (From the SQL list:)\n\n>> And we know it is a bug:\n>> * to_char(0,'FM999.99') returns a period, to_char(1,'FM999.99') does not\n\n> I took a look at this bug a week ago, and noticed that inside of the file\n> src/backend/utils/adt/formatting.c\n> we are specifically causing the above behavior, perhaps in an effort to\n> mimic Oracle's implementation of it.\n\nHm. Can anyone try these cases on Oracle? If the code goes out of its\nway to have this odd behavior, maybe it's because Oracle does too.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 18 Sep 2002 11:45:47 -0600",
"msg_from": "\"Jonah H. Harris\" <jharris@nightstarcorporation.com>",
"msg_from_op": false,
"msg_subject": "Re: The notorious to_char bug "
}
] |
[
{
"msg_contents": "\n> > Note that if you write, say,\n> > \tset numericcol = numericcol * 3.14159;\n> > my proposal would do the \"right thing\" since the constant would be typed\n> > as numeric to start with and would stay that way. To do what you want\n> > with a float variable, it'd be necessary to write\n> > \tset numericcol = numericcol * float4col::numeric;\n\nYes, that is the case where the new behavior would imho not be good (but you \nsay spec compliant). I loose precision even though there is room to hold it.\n\n> > which is sort of ugly; but no uglier than\n> > \tset float4col = float4col * numericcol::float4;\n\nInformix does the calculations in numeric, and then converts the result\nif no casts are supplied (would do set float4col = float4(float4col::numeric * numericcol)).\n\nWould be interesting what others do ?\n\nTest script:\ncreate table atab (a decimal(30), b smallfloat, c decimal(30), d smallfloat);\ninsert into atab values (1.000000000000001,100000.0,0, 0);\nupdate atab set c=a*b-b, d=a*b-b where 1=1;\nselect a*b-b, b, c,d from atab;\n\n (expression) b c d\n\n 1e-10 100000.0000000 1e-10 1e-10\n\nI hope this test is ok ?\nIt still seems to me, that numeric should be the preferred type, and not float8.\n\nAndreas\n",
"msg_date": "Wed, 18 Sep 2002 18:01:05 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Note that if you write, say,\n> set numericcol = numericcol * 3.14159;\n> my proposal would do the \"right thing\" since the constant would be typed\n> as numeric to start with and would stay that way. To do what you want\n> with a float variable, it'd be necessary to write\n> set numericcol = numericcol * float4col::numeric;\n\n> Yes, that is the case where the new behavior would imho not be good (but you \n> say spec compliant). I loose precision even though there is room to hold it.\n\nLose what precision? It seems silly to imagine that the product of\na numeric and a float4 is good to more digits than there are in the\nfloat4. This is exactly the spec's point: combining an exact and an\napproximate input will give you an approximate result.\n\n(Unless of course the value in the float4 happens to be exact, eg,\nan integer of not very many digits. But if you are relying on that\nto be true, why aren't you using an exact format for storing it?)\n\n> Informix does the calculations in numeric, and then converts the result\n> if no casts are supplied (would do set float4col = float4(float4col::numeric * numericcol)).\n\nI am not sure what the argument is for following Informix's lead rather\nthan the standard's lead; especially when Informix evidently doesn't\nunderstand numerical analysis ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 12:26:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
}
] |
[
{
"msg_contents": "I thought you had named the conversion functions after the IANA names. I\nfound the following inconsistencies, however:\n\nsjis should be shift_jis\nwin1250 should be windows_1250 (similarly 866, 1251)\nkoi8r should be koi8_r\n\nI think we should fix this now.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 18 Sep 2002 22:11:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Inconsistent Conversion Names"
},
{
"msg_contents": "> I thought you had named the conversion functions after the IANA names. I\n> found the following inconsistencies, however:\n> \n> sjis should be shift_jis\n\nThe conversion named \"SJIS\" is different from IANA's \"shift_jis\". It\nactually matches \"Windows-31J\" in IANA, which is too ugly to being\nemploied as our conversion name, IMO.\n\n> win1250 should be windows_1250 (similarly 866, 1251)\n\nI agree with win1250 -> windows_1250, win1251 -> windows_1251, but do\nnot agree with renaming win866. There's no windows_866 in IANA. Maybe\nthat should be \"ibm866\"?\n\n> koi8r should be koi8_r\n\nSomeone said that the conversion table is actually koi8r + koi8u,\nbeing different from IANA's koi8_r. Not sure though.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 19 Sep 2002 14:57:12 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent Conversion Names"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> The conversion named \"SJIS\" is different from IANA's \"shift_jis\". It\n> actually matches \"Windows-31J\" in IANA, which is too ugly to being\n> emploied as our conversion name, IMO.\n\nOK\n\n> I agree with win1250 -> windows_1250, win1251 -> windows_1251, but do\n> not agree with renaming win866. There's no windows_866 in IANA. Maybe\n> that should be \"ibm866\"?\n\nIs it ibm866 or are you wondering yourself?\n\n> Someone said that the conversion table is actually koi8r + koi8u,\n> being different from IANA's koi8_r. Not sure though.\n\nI found mention in the archives by Oleg B. that it is in fact koi8_r.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 19 Sep 2002 19:57:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent Conversion Names"
},
{
"msg_contents": "On Thu, 19 Sep 2002, Peter Eisentraut wrote:\n\n> Tatsuo Ishii writes:\n>\n> > The conversion named \"SJIS\" is different from IANA's \"shift_jis\". It\n> > actually matches \"Windows-31J\" in IANA, which is too ugly to being\n> > emploied as our conversion name, IMO.\n>\n> OK\n>\n> > I agree with win1250 -> windows_1250, win1251 -> windows_1251, but do\n> > not agree with renaming win866. There's no windows_866 in IANA. Maybe\n> > that should be \"ibm866\"?\n>\n> Is it ibm866 or are you wondering yourself?\n\nit's a total mess. I know CP866, CP-866,IBM866,IBM_866\nIANA isn't a standard but a recommendation.\nglibc uses name mangling, so KOI8-R -> koi8r\n\n>\n> > Someone said that the conversion table is actually koi8r + koi8u,\n> > being different from IANA's koi8_r. Not sure though.\n>\n> I found mention in the archives by Oleg B. that it is in fact koi8_r.\n>\n\non my system (linux) I have ru_RU.KOI8-R locale.\n\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 19 Sep 2002 23:19:15 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent Conversion Names"
},
{
"msg_contents": "Oleg Bartunov writes:\n\n> > > Someone said that the conversion table is actually koi8r + koi8u,\n> > > being different from IANA's koi8_r. Not sure though.\n> >\n> > I found mention in the archives by Oleg B. that it is in fact koi8_r.\n> >\n>\n> on my system (linux) I have ru_RU.KOI8-R locale.\n\nWe're not talking about your system. :-)\n\nDo you know whether PostgreSQL's conversion tables cover koi8-r or koi8-u\nor both? The documentation contains contradicting information about that.\nActually, since we use the officially provided Unicode conversion tables,\nwe should know what they cover.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 21 Sep 2002 20:38:08 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent Conversion Names"
}
] |
[
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > Fix SIMILAR TO to be Posix compiant or remove it\n> \n> Sorry, was there a decision here?\n> \n> No one has described the problem, just declared that there is one and\n> declared that the feature should be removed.\n> \n> In the old days, one might have expected to approach this differently,\n> with a contribution to help fix a problem, after describing it. I'm not\n> quite understanding the current process, if there is one.\n> \n\nI had it in my mailbox as an unresolved issue. Peter wanted it added so\nI did it. I don't know the issue either. If you want it removed from\nopen item, I will do that too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 19:20:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Bruce Momjian wrote:\n\n> Thomas Lockhart wrote:\n> > ...\n> > > Fix SIMILAR TO to be Posix compiant or remove it\n> >\n> > Sorry, was there a decision here?\n> >\n> > No one has described the problem, just declared that there is one and\n> > declared that the feature should be removed.\n> >\n> > In the old days, one might have expected to approach this differently,\n> > with a contribution to help fix a problem, after describing it. I'm not\n> > quite understanding the current process, if there is one.\n> >\n>\n> I had it in my mailbox as an unresolved issue. Peter wanted it added so\n> I did it. I don't know the issue either. If you want it removed from\n> open item, I will do that too.\n\nWell, if nobody can identify what exactly the problem is, it should\ndefinitely be removed from the Open Items list ... maybe we need to lay\ndown some 'rules' for the TODO list? Some sort of criteria other hten\n\"someone suggested it\" to work with? For instance, change the TODO to a\npseudo-FAQ format ... where an item added to it has to have some sort of\n'associated' description?\n\nFor instance, how is SIMILAR TO *not* Posix compliant? What *is* a Posix\ncompliant version? Where is such compliance defined? Is there a\nreference?\n\nAlso, since when has 'lack of compliance' been basis to remove something\n... \"its not fully compliant, so even partial functionality isn't\nallowed\"?\n\nBasically, there should be *some* basis for an item to be on the TODO list\n... some sort \"this is how it should be\" ...\n\nHow many items on the TODO list are ones that nobody even knows what they\nare about anymore? :)\n\n\n",
"msg_date": "Wed, 18 Sep 2002 22:55:59 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> Well, if nobody can identify what exactly the problem is, it should\n> definitely be removed from the Open Items list ... maybe we need to lay\n> down some 'rules' for the TODO list? Some sort of criteria other hten\n> \"someone suggested it\" to work with? For instance, change the TODO to a\n> pseudo-FAQ format ... where an item added to it has to have some sort of\n> 'associated' description?\n> \n> For instance, how is SIMILAR TO *not* Posix compliant? What *is* a Posix\n> compliant version? Where is such compliance defined? Is there a\n> reference?\n> \n> Also, since when has 'lack of compliance' been basis to remove something\n> ... \"its not fully compliant, so even partial functionality isn't\n> allowed\"?\n> \n> Basically, there should be *some* basis for an item to be on the TODO list\n> ... some sort \"this is how it should be\" ...\n> \n> How many items on the TODO list are ones that nobody even knows what they\n> are about anymore? :)\n\nI think you are confusing the open items list with the TODO list. TODO\nusually has some basis, while open items is just that, things we need to\ndecide on. Peter brought it up and wanted it on the list so I put it\non. I can be taken off just as easily. I put Peter's name on the item,\nand a question mark. The open items list is just so we don't forget\nthings.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 22:02:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Bruce Momjian wrote:\n\n> I think you are confusing the open items list with the TODO list. TODO\n> usually has some basis, while open items is just that, things we need to\n> decide on. Peter brought it up and wanted it on the list so I put it\n> on. I can be taken off just as easily. I put Peter's name on the item,\n> and a question mark. The open items list is just so we don't forget\n> things.\n\nI'm in agreement with Thomas here ... unless a problem has been defined a\nbit more specifically then 'it isn't posix compliant', it shouldn't be\nconsidered an open item ... please remove?\n\n",
"msg_date": "Wed, 18 Sep 2002 23:28:39 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> \n> > I think you are confusing the open items list with the TODO list. TODO\n> > usually has some basis, while open items is just that, things we need to\n> > decide on. Peter brought it up and wanted it on the list so I put it\n> > on. I can be taken off just as easily. I put Peter's name on the item,\n> > and a question mark. The open items list is just so we don't forget\n> > things.\n> \n> I'm in agreement with Thomas here ... unless a problem has been defined a\n> bit more specifically then 'it isn't posix compliant', it shouldn't be\n> considered an open item ... please remove?\n\nRemoved. See, I can remove them as quickly as I add them. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 22:34:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> I'm in agreement with Thomas here ... unless a problem has been defined a\n> bit more specifically then 'it isn't posix compliant', it shouldn't be\n> considered an open item ... please remove?\n\nA quick review of SQL99 says that their notion of SIMILAR TO patterns\nis an unholy witches' brew: it does *both* common-or-garden regexp\nexpressions and LIKE patterns. Specifically, I see these\nmetacharacters:\n\n\t|\t\tOR (regexp-ish)\n\n\t*\t\trepeat 0 or more times (regexp-ish)\n\n\t+\t\trepeat 1 or more times (regexp-ish)\n\n\t%\t\tmatch any character sequence (like LIKE)\n\n\t_\t\tmatch any one character (like LIKE)\n\n\t[...]\t\talmost-but-not-quite-regexp-ish character class\n\n\t(...)\t\tgrouping (regexp-ish)\n\nplus a just-like-LIKE treatment of a selectable escape character.\n\nBut the most important variation from common regex practice is that\n(if I'm reading the spec correctly) the pattern must match to the\nentire target string --- ie, it's effectively both left- and right-\nanchored. This is like LIKE patterns but utterly unlike common regexp\nusage.\n\nI could live with the fact that our regexp patterns don't implement all\nof the spec-mandated metacharacters. But I do not think we can ignore\nthe difference in anchoring behavior. This is not a subset of the spec\nbehavior, it is just plain wrong.\n\nI vote with Peter: we fix this or we disable it before 7.3 release.\nIt is not anywhere near spec compliant, and we will be doing no one\na favor by releasing it in the current state.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Sep 2002 23:14:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "\nRe-added to open items:\n\n\tFix SIMILAR TO to be ANSI compliant or remove it (Peter, Tom)\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > I'm in agreement with Thomas here ... unless a problem has been defined a\n> > bit more specifically then 'it isn't posix compliant', it shouldn't be\n> > considered an open item ... please remove?\n> \n> A quick review of SQL99 says that their notion of SIMILAR TO patterns\n> is an unholy witches' brew: it does *both* common-or-garden regexp\n> expressions and LIKE patterns. Specifically, I see these\n> metacharacters:\n> \n> \t|\t\tOR (regexp-ish)\n> \n> \t*\t\trepeat 0 or more times (regexp-ish)\n> \n> \t+\t\trepeat 1 or more times (regexp-ish)\n> \n> \t%\t\tmatch any character sequence (like LIKE)\n> \n> \t_\t\tmatch any one character (like LIKE)\n> \n> \t[...]\t\talmost-but-not-quite-regexp-ish character class\n> \n> \t(...)\t\tgrouping (regexp-ish)\n> \n> plus a just-like-LIKE treatment of a selectable escape character.\n> \n> But the most important variation from common regex practice is that\n> (if I'm reading the spec correctly) the pattern must match to the\n> entire target string --- ie, it's effectively both left- and right-\n> anchored. This is like LIKE patterns but utterly unlike common regexp\n> usage.\n> \n> I could live with the fact that our regexp patterns don't implement all\n> of the spec-mandated metacharacters. But I do not think we can ignore\n> the difference in anchoring behavior. 
This is not a subset of the spec\n> behavior, it is just plain wrong.\n> \n> I vote with Peter: we fix this or we disable it before 7.3 release.\n> It is not anywhere near spec compliant, and we will be doing no one\n> a favor by releasing it in the current state.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 23:33:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "On Wed, 18 Sep 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > I'm in agreement with Thomas here ... unless a problem has been defined a\n> > bit more specifically then 'it isn't posix compliant', it shouldn't be\n> > considered an open item ... please remove?\n>\n> A quick review of SQL99 says that their notion of SIMILAR TO patterns\n> is an unholy witches' brew: it does *both* common-or-garden regexp\n> expressions and LIKE patterns. Specifically, I see these\n> metacharacters:\n>\n> \t|\t\tOR (regexp-ish)\n>\n> \t*\t\trepeat 0 or more times (regexp-ish)\n>\n> \t+\t\trepeat 1 or more times (regexp-ish)\n>\n> \t%\t\tmatch any character sequence (like LIKE)\n>\n> \t_\t\tmatch any one character (like LIKE)\n>\n> \t[...]\t\talmost-but-not-quite-regexp-ish character class\n>\n> \t(...)\t\tgrouping (regexp-ish)\n>\n> plus a just-like-LIKE treatment of a selectable escape character.\n>\n> But the most important variation from common regex practice is that\n> (if I'm reading the spec correctly) the pattern must match to the\n> entire target string --- ie, it's effectively both left- and right-\n> anchored. This is like LIKE patterns but utterly unlike common regexp\n> usage.\n>\n> I could live with the fact that our regexp patterns don't implement all\n> of the spec-mandated metacharacters. But I do not think we can ignore\n> the difference in anchoring behavior. This is not a subset of the spec\n> behavior, it is just plain wrong.\n>\n> I vote with Peter: we fix this or we disable it before 7.3 release.\n> It is not anywhere near spec compliant, and we will be doing no one\n> a favor by releasing it in the current state.\n\nWhat would it take to get it to a fixed state? Who implemented SIMILAR TO\nin the first place? Who is able to fix this? 
And, finally, what are the\nimplications of leaving things as they are?\n\n From my read of what you are saying above, its currently implemented with\nan implied ^ and $ around the pattern match?\n\n",
"msg_date": "Thu, 19 Sep 2002 01:40:09 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "On Wed, 18 Sep 2002, Bruce Momjian wrote:\n\n>\n> Re-added to open items:\n>\n> \tFix SIMILAR TO to be ANSI compliant or remove it (Peter, Tom)\n\nTke that @#$@$@@$@#$ thing out of there until its actually been fully\ndiscussed ... you are starting to remind me of Charlie Brown ... this, I\nthink, was Thomas' whole point, in that things are added way too faster\nand easily without fully understanding all of the ramifications ... let a\ndiscussion cool down *before* you take things off, or add things to, the\nlist ...\n\n\n\n\n>\n> ---------------------------------------------------------------------------\n>\n> Tom Lane wrote:\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > I'm in agreement with Thomas here ... unless a problem has been defined a\n> > > bit more specifically then 'it isn't posix compliant', it shouldn't be\n> > > considered an open item ... please remove?\n> >\n> > A quick review of SQL99 says that their notion of SIMILAR TO patterns\n> > is an unholy witches' brew: it does *both* common-or-garden regexp\n> > expressions and LIKE patterns. Specifically, I see these\n> > metacharacters:\n> >\n> > \t|\t\tOR (regexp-ish)\n> >\n> > \t*\t\trepeat 0 or more times (regexp-ish)\n> >\n> > \t+\t\trepeat 1 or more times (regexp-ish)\n> >\n> > \t%\t\tmatch any character sequence (like LIKE)\n> >\n> > \t_\t\tmatch any one character (like LIKE)\n> >\n> > \t[...]\t\talmost-but-not-quite-regexp-ish character class\n> >\n> > \t(...)\t\tgrouping (regexp-ish)\n> >\n> > plus a just-like-LIKE treatment of a selectable escape character.\n> >\n> > But the most important variation from common regex practice is that\n> > (if I'm reading the spec correctly) the pattern must match to the\n> > entire target string --- ie, it's effectively both left- and right-\n> > anchored. 
This is like LIKE patterns but utterly unlike common regexp\n> > usage.\n> >\n> > I could live with the fact that our regexp patterns don't implement all\n> > of the spec-mandated metacharacters. But I do not think we can ignore\n> > the difference in anchoring behavior. This is not a subset of the spec\n> > behavior, it is just plain wrong.\n> >\n> > I vote with Peter: we fix this or we disable it before 7.3 release.\n> > It is not anywhere near spec compliant, and we will be doing no one\n> > a favor by releasing it in the current state.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n\n",
"msg_date": "Thu, 19 Sep 2002 01:42:54 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "\nIt is an open issue. It has to be resolved. When it is, I will remove\nit. I added a question mark to it but it needs to be tracked. I keep\nhaving to add and remove it because I have people telling me what to do.\n\nIt was Peter who told me to add it, and you and Thomas to remove it. It\nisn't me adding/removing on my own.\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> \n> >\n> > Re-added to open items:\n> >\n> > \tFix SIMILAR TO to be ANSI compliant or remove it (Peter, Tom)\n> \n> Tke that @#$@$@@$@#$ thing out of there until its actually been fully\n> discussed ... you are starting to remind me of Charlie Brown ... this, I\n> think, was Thomas' whole point, in that things are added way too faster\n> and easily without fully understanding all of the ramifications ... let a\n> discussion cool down *before* you take things off, or add things to, the\n> list ...\n> \n> \n> \n> \n> >\n> > ---------------------------------------------------------------------------\n> >\n> > Tom Lane wrote:\n> > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > I'm in agreement with Thomas here ... unless a problem has been defined a\n> > > > bit more specifically then 'it isn't posix compliant', it shouldn't be\n> > > > considered an open item ... please remove?\n> > >\n> > > A quick review of SQL99 says that their notion of SIMILAR TO patterns\n> > > is an unholy witches' brew: it does *both* common-or-garden regexp\n> > > expressions and LIKE patterns. 
Specifically, I see these\n> > > metacharacters:\n> > >\n> > > \t|\t\tOR (regexp-ish)\n> > >\n> > > \t*\t\trepeat 0 or more times (regexp-ish)\n> > >\n> > > \t+\t\trepeat 1 or more times (regexp-ish)\n> > >\n> > > \t%\t\tmatch any character sequence (like LIKE)\n> > >\n> > > \t_\t\tmatch any one character (like LIKE)\n> > >\n> > > \t[...]\t\talmost-but-not-quite-regexp-ish character class\n> > >\n> > > \t(...)\t\tgrouping (regexp-ish)\n> > >\n> > > plus a just-like-LIKE treatment of a selectable escape character.\n> > >\n> > > But the most important variation from common regex practice is that\n> > > (if I'm reading the spec correctly) the pattern must match to the\n> > > entire target string --- ie, it's effectively both left- and right-\n> > > anchored. This is like LIKE patterns but utterly unlike common regexp\n> > > usage.\n> > >\n> > > I could live with the fact that our regexp patterns don't implement all\n> > > of the spec-mandated metacharacters. But I do not think we can ignore\n> > > the difference in anchoring behavior. This is not a subset of the spec\n> > > behavior, it is just plain wrong.\n> > >\n> > > I vote with Peter: we fix this or we disable it before 7.3 release.\n> > > It is not anywhere near spec compliant, and we will be doing no one\n> > > a favor by releasing it in the current state.\n> > >\n> > > \t\t\tregards, tom lane\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > >\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> >\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 00:46:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "On Thu, 19 Sep 2002, Bruce Momjian wrote:\n\n>\n> It is an open issue. It has to be resolved. When it is, I will remove\n> it. I added a question mark to it but it needs to be tracked. I keep\n> having to add and remove it because I have people telling me what to do.\n>\n> It was Peter who told me to add it, and you and Thomas to remove it. It\n> isn't me adding/removing on my own.\n\nRight, so you have two telling you to remove it, one telling you to add\nit, and two that are discussion why/if it *should* be added ... Tom feels\nit should be added, and I'm clarifing the why of it ... don't re-add it\nuntil we've determined *if* it is actually an open issue or not ... stop\njumping the gun ...\n\n\n >\n> ---------------------------------------------------------------------------\n>\n> Marc G. Fournier wrote:\n> > On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> >\n> > >\n> > > Re-added to open items:\n> > >\n> > > \tFix SIMILAR TO to be ANSI compliant or remove it (Peter, Tom)\n> >\n> > Tke that @#$@$@@$@#$ thing out of there until its actually been fully\n> > discussed ... you are starting to remind me of Charlie Brown ... this, I\n> > think, was Thomas' whole point, in that things are added way too faster\n> > and easily without fully understanding all of the ramifications ... let a\n> > discussion cool down *before* you take things off, or add things to, the\n> > list ...\n> >\n> >\n> >\n> >\n> > >\n> > > ---------------------------------------------------------------------------\n> > >\n> > > Tom Lane wrote:\n> > > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > > I'm in agreement with Thomas here ... unless a problem has been defined a\n> > > > > bit more specifically then 'it isn't posix compliant', it shouldn't be\n> > > > > considered an open item ... 
please remove?\n> > > >\n> > > > A quick review of SQL99 says that their notion of SIMILAR TO patterns\n> > > > is an unholy witches' brew: it does *both* common-or-garden regexp\n> > > > expressions and LIKE patterns. Specifically, I see these\n> > > > metacharacters:\n> > > >\n> > > > \t|\t\tOR (regexp-ish)\n> > > >\n> > > > \t*\t\trepeat 0 or more times (regexp-ish)\n> > > >\n> > > > \t+\t\trepeat 1 or more times (regexp-ish)\n> > > >\n> > > > \t%\t\tmatch any character sequence (like LIKE)\n> > > >\n> > > > \t_\t\tmatch any one character (like LIKE)\n> > > >\n> > > > \t[...]\t\talmost-but-not-quite-regexp-ish character class\n> > > >\n> > > > \t(...)\t\tgrouping (regexp-ish)\n> > > >\n> > > > plus a just-like-LIKE treatment of a selectable escape character.\n> > > >\n> > > > But the most important variation from common regex practice is that\n> > > > (if I'm reading the spec correctly) the pattern must match to the\n> > > > entire target string --- ie, it's effectively both left- and right-\n> > > > anchored. This is like LIKE patterns but utterly unlike common regexp\n> > > > usage.\n> > > >\n> > > > I could live with the fact that our regexp patterns don't implement all\n> > > > of the spec-mandated metacharacters. But I do not think we can ignore\n> > > > the difference in anchoring behavior. 
This is not a subset of the spec\n> > > > behavior, it is just plain wrong.\n> > > >\n> > > > I vote with Peter: we fix this or we disable it before 7.3 release.\n> > > > It is not anywhere near spec compliant, and we will be doing no one\n> > > > a favor by releasing it in the current state.\n> > > >\n> > > > \t\t\tregards, tom lane\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 4: Don't 'kill -9' the postmaster\n> > > >\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 359-1001\n> > > + If your life is a hard drive, | 13 Roberts Road\n> > > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > >\n> >\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n\n",
"msg_date": "Thu, 19 Sep 2002 01:57:55 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Thu, 19 Sep 2002, Bruce Momjian wrote:\n> \n> >\n> > It is an open issue. It has to be resolved. When it is, I will remove\n> > it. I added a question mark to it but it needs to be tracked. I keep\n> > having to add and remove it because I have people telling me what to do.\n> >\n> > It was Peter who told me to add it, and you and Thomas to remove it. It\n> > isn't me adding/removing on my own.\n> \n> Right, so you have two telling you to remove it, one telling you to add\n> it, and two that are discussion why/if it *should* be added ... Tom feels\n> it should be added, and I'm clarifing the why of it ... don't re-add it\n> until we've determined *if* it is actually an open issue or not ... stop\n> jumping the gun ...\n\nI will make the decision. If you want to maintain your own open items\nlist, go ahead.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 01:00:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "On Thu, 19 Sep 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Thu, 19 Sep 2002, Bruce Momjian wrote:\n> >\n> > >\n> > > It is an open issue. It has to be resolved. When it is, I will remove\n> > > it. I added a question mark to it but it needs to be tracked. I keep\n> > > having to add and remove it because I have people telling me what to do.\n> > >\n> > > It was Peter who told me to add it, and you and Thomas to remove it. It\n> > > isn't me adding/removing on my own.\n> >\n> > Right, so you have two telling you to remove it, one telling you to add\n> > it, and two that are discussion why/if it *should* be added ... Tom feels\n> > it should be added, and I'm clarifing the why of it ... don't re-add it\n> > until we've determined *if* it is actually an open issue or not ... stop\n> > jumping the gun ...\n>\n> I will make the decision. If you want to maintain your own open items\n> list, go ahead.\n\nAh, okay, so your list doesn't necessarily follow reality, its more for\nyour own use ... k, as long as we have that clarified, we're fine ...\n\n\n",
"msg_date": "Thu, 19 Sep 2002 09:27:47 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Who implemented SIMILAR TO in the first place? \n\nThomas. He put in the syntax, but as it stands it's simply syntactic\nsugar for ~ --- that is, our Posix-compatible regex match operator.\nSince the spec demands very non-Posix behavior, this is wrong.\n\nAFAICS, getting SIMILAR TO to operate per spec would require adding some\nsort of translation function that converts the spec-style pattern into\na Posix pattern that our regex match engine would handle. This would at\nleast require adding ^ and $ around the pattern, converting the escape\ncharacter if any, and translating % and _ into .* and . respectively.\nThere are probably some differences of detail that we'd need to fix\nlater, but that would get it to a state where we need not be ashamed\nto release it.\n\nWe already have a similar mechanism for handling LIKE ... ESCAPE\nclauses, so it doesn't seem too difficult to do. But I haven't got\ntime for it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 09:29:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items) "
},
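Tom's recipe — wrap the pattern in ^...$, translate % and _ into .* and ., and handle the escape character — can be sketched as a small translation function. This is a hypothetical illustration of the idea, not the backend's actual implementation; it ignores multibyte encodings and the bracket-class differences Tom mentions:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: rewrite a SQL99 SIMILAR TO pattern as a POSIX
 * extended regex.  Anchors the pattern, translates the LIKE-ish
 * metacharacters, and backslash-quotes whatever follows the escape
 * character so it matches literally. */
void similar_to_posix(const char *pat, char esc, char *out, size_t outlen)
{
    size_t j = 0;

#define PUT(c) do { if (j + 1 < outlen) out[j++] = (c); } while (0)
    PUT('^');                       /* spec semantics: left-anchored */
    for (; *pat; pat++)
    {
        if (*pat == esc && pat[1] != '\0')
        {
            PUT('\\');              /* escaped char becomes a literal */
            PUT(*++pat);
        }
        else if (*pat == '%')
        {
            PUT('.');               /* % : any sequence, as in LIKE */
            PUT('*');
        }
        else if (*pat == '_')
            PUT('.');               /* _ : any single character */
        else
            PUT(*pat);              /* |, *, +, (...) pass through */
    }
    PUT('$');                       /* ... and right-anchored */
#undef PUT
    out[j] = '\0';
}
```

So `ab_c%` would come out as `^ab.c.*$`, which the existing regex engine can evaluate with the spec's whole-string semantics.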
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Right, so you have two telling you to remove it, one telling you to add\n> it, and two that are discussion why/if it *should* be added ... Tom feels\n> it should be added, and I'm clarifing the why of it ... don't re-add it\n> until we've determined *if* it is actually an open issue or not ... stop\n> jumping the gun ...\n\nIt *is* an open issue, Marc: Peter and I think so, at least. You cannot\ndeclare by fiat that it isn't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 09:31:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items) "
},
{
"msg_contents": "Tom Lane wrote:\n> AFAICS, getting SIMILAR TO to operate per spec would require adding some\n> sort of translation function that converts the spec-style pattern into\n> a Posix pattern that our regex match engine would handle. This would at\n> least require adding ^ and $ around the pattern, converting the escape\n> character if any, and translating % and _ into .* and . respectively.\n> There are probably some differences of detail that we'd need to fix\n> later, but that would get it to a state where we need not be ashamed\n> to release it.\n> \n> We already have a similar mechanism for handling LIKE ... ESCAPE\n> clauses, so it doesn't seem too difficult to do. But I haven't got\n> time for it...\n\nIt seems like a merge of regex and LIKE patterns. ANSI doesn't have\nregex so maybe SIMILAR TO is their solution to that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 11:32:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: The TODO List (Was: Re: Open 7.3 items)"
},
{
"msg_contents": "On Thu, 19 Sep 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Who implemented SIMILAR TO in the first place?\n>\n> Thomas. He put in the syntax, but as it stands it's simply syntactic\n> sugar for ~ --- that is, our Posix-compatible regex match operator.\n> Since the spec demands very non-Posix behavior, this is wrong.\n>\n> AFAICS, getting SIMILAR TO to operate per spec would require adding some\n> sort of translation function that converts the spec-style pattern into\n> a Posix pattern that our regex match engine would handle. This would at\n> least require adding ^ and $ around the pattern, converting the escape\n> character if any, and translating % and _ into .* and . respectively.\n> There are probably some differences of detail that we'd need to fix\n> later, but that would get it to a state where we need not be ashamed\n> to release it.\n>\n> We already have a similar mechanism for handling LIKE ... ESCAPE\n> clauses, so it doesn't seem too difficult to do. But I haven't got\n> time for it...\n\n'K, just curious here, but ... Thomas, do you agree with Tom's\ninterpretation of the spec? If so, would it be possible to get the above\nfixed?\n\nOr is there an ambiguity there (not like *that* has never happened before)\nthat Tom/Peter are being more strict about then the spec requires?\n\n\n\n",
"msg_date": "Thu, 19 Sep 2002 13:30:09 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "SIMILAR TO syntax (Was: Re: The TODO List (Was: Re: O...)"
},
{
"msg_contents": "> On Thu, 19 Sep 2002, Tom Lane wrote:\n>> AFAICS, getting SIMILAR TO to operate per spec would require adding some\n>> sort of translation function that converts the spec-style pattern into\n>> a Posix pattern that our regex match engine would handle.\n\nI did something about this. The translation function probably needs\nwork (for one thing, it's not multibyte-aware) but it's a start; and\nwe shouldn't need any more initdbs to tweak it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Sep 2002 16:00:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SIMILAR TO syntax (Was: Re: The TODO List (Was: Re: O...) "
}
] |
[
{
"msg_contents": "I am working with several groups getting the Win32 port ready for 7.4\nand I have a few questions:\n\nWhat is the standard workaround for the fact that rename() isn't atomic\non Win32? Do we need to create our own locking around the\nreading/writing of files that are normally updated in place using\nrename()?\n\nSecond, when you unlink() a file on Win32, do applications continue\naccessing the old file contents if they had the file open before the\nunlink?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 20:01:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Win32 rename()/unlink() questions"
},
{
"msg_contents": "On Wed, Sep 18, 2002 at 08:01:42PM -0400, Bruce Momjian wrote:\n \n> Second, when you unlink() a file on Win32, do applications continue\n> accessing the old file contents if they had the file open before the\n> unlink?\n\nI'm pretty sure it errors with 'file in use'. Pretty ugly, huh?\n\nRoss\n\n",
"msg_date": "Thu, 19 Sep 2002 00:07:22 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
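The Unix semantics Bruce is asking about can be demonstrated directly: on POSIX systems an already-open descriptor keeps reading the old contents after unlink(), which is exactly what Win32's DeleteFile() refuses to allow. A small sketch (the helper name is made up for illustration):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Open the file, unlink it, then read through the still-open
 * descriptor.  On POSIX the old contents stay available until the
 * last descriptor closes; on Win32, DeleteFile() on an open file
 * fails with a sharing violation instead. */
int read_after_unlink(const char *path, char *buf, size_t len)
{
    ssize_t n;
    int     fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    unlink(path);               /* the name is gone; the data is not */
    n = read(fd, buf, len - 1);
    close(fd);
    if (n < 0)
        return -1;
    buf[n] = '\0';
    return 0;
}
```

This is the property the backend relies on when it unlinks relation files that other backends may still have open.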
{
"msg_contents": "Bruce Momjian wrote:\n> I am working with several groups getting the Win32 port ready for 7.4\n> and I have a few questions:\n> \n> What is the standard workaround for the fact that rename() isn't atomic\n> on Win32? Do we need to create our own locking around the\n> reading/writing of files that are normally updated in place using\n> rename()?\n\nVisual C++ comes with the source to Microsoft's C library:\n\nrename() calls MoveFile() which will error if:\n\n1. The target file exists\n2. The source file is in use\n\nMoveFileEx() (not available on 95/98) can overwrite the target \nfile if it exists. The Apache APR portability library uses \nMoveFileEx() to rename files if under NT/XP/2K vs. a sequence of :\n\n1. CreateFile() to test for target file existence\n2. DeleteFile() to remove the target file\n3. MoveFile() to rename the old file to new\n\nunder Windows 95/98. Of course, some other process could create \nthe target file between 2 and 3, so their rename() would just \nerror out in that situation. I haven't tested it, but I recall \nreading somewhere that MoveFileEx() has the ability to rename an \nopened file. I'm 99% sure MoveFile() will fail if the source \nfile is open.\n\n> \n> Second, when you unlink() a file on Win32, do applications continue\n> accessing the old file contents if they had the file open before the\n> unlink?\n> \n\nunlink() just calls DeleteFile() which will error if:\n\n1. The target file is in use\n\nCreateFile() has the option:\n\nFILE_FLAG_DELETE_ON_CLOSE\n\nwhich might be able to be used to simulate traditional unlink() \nbehavior.\n\nHope that helps,\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 19 Sep 2002 01:23:45 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
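Mike's MoveFileEx() route can be wrapped so the rest of the code just calls one function. On POSIX, rename() already replaces an existing target atomically, so the wrapper only matters on Windows. A hedged sketch along the lines of what APR does (the Win32 branch is illustrative and untested here):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#ifdef _WIN32
#include <windows.h>
#endif

/* Rename 'from' onto 'to', replacing 'to' if it exists.  POSIX
 * rename() guarantees this atomically; plain Win32 MoveFile() errors
 * when the target exists, so on NT-class systems use MoveFileEx()
 * with MOVEFILE_REPLACE_EXISTING, as APR does.
 * Returns 0 on success, -1 on failure. */
int replace_file(const char *from, const char *to)
{
#ifdef _WIN32
    return MoveFileEx(from, to,
                      MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED)
        ? 0 : -1;
#else
    return rename(from, to);
#endif
}
```

The open question in the thread — what happens when the target is in use by another process — is not answered by this wrapper; MoveFileEx() can still fail with a sharing violation in that case.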
{
"msg_contents": "> On Wed, Sep 18, 2002 at 08:01:42PM -0400, Bruce Momjian wrote:\n>\n> > Second, when you unlink() a file on Win32, do applications continue\n> > accessing the old file contents if they had the file open before the\n> > unlink?\n>\n> I'm pretty sure it errors with 'file in use'. Pretty ugly, huh?\n\nYeah - the windows filesystem is pretty poor when it comes to multiuser\naccess. That's why even as administrator I cannot delete borked files and\npeople's profiles and stuff off our NT server - the files are always 'in\nuse'. Even if you kick all users off, reboot the machine, do whatever.\nIt's terrible.\n\nChris\n\n",
"msg_date": "Thu, 19 Sep 2002 13:24:01 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n>>On Wed, Sep 18, 2002 at 08:01:42PM -0400, Bruce Momjian wrote:\n>>\n>>\n>>>Second, when you unlink() a file on Win32, do applications continue\n>>>accessing the old file contents if they had the file open before the\n>>>unlink?\n>>\n>>I'm pretty sure it errors with 'file in use'. Pretty ugly, huh?\n> \n> \n> Yeah - the windows filesystem is pretty poor when it comes to multiuser\n> access. That's why even as administrator I cannot delete borked files and\n> people's profiles and stuff off our NT server - the files are always 'in\n> use'. Even if you kick all users off, reboot the machine, do whatever.\n> It's terrible.\n >\n > Chris\n >\n\nYep. That's why often it requires rebooting to uninstall \nsoftware. How can the installer remove itself? Under Windows \n95/98/ME, you have to manually add entries to WININIT.INI. With \nWindows NT/XP/2K, MoveFileEx() with a NULL target and the \nMOVEFILE_DELAY_UNTIL_REBOOT flag will add the appropriate \nentries into the system registry so that the next time the \nmachine reboots it will remove the files specified. Its a real \npain and a real hack of an OS.\n\nMike Mascari\nmascarm@mascari.com\n\n\n",
"msg_date": "Thu, 19 Sep 2002 01:31:19 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Mike Mascari wrote:\n> Bruce Momjian wrote:\n> > I am working with several groups getting the Win32 port ready for 7.4\n> > and I have a few questions:\n> > \n> > What is the standard workaround for the fact that rename() isn't atomic\n> > on Win32? Do we need to create our own locking around the\n> > reading/writing of files that are normally updated in place using\n> > rename()?\n> \n> Visual C++ comes with the source to Microsoft's C library:\n> \n> rename() calls MoveFile() which will error if:\n> \n> 1. The target file exists\n> 2. The source file is in use\n> \n> MoveFileEx() (not available on 95/98) can overwrite the target \n> file if it exists. The Apache APR portability library uses \n> MoveFileEx() to rename files if under NT/XP/2K vs. a sequence of :\n> \n> 1. CreateFile() to test for target file existence\n> 2. DeleteFile() to remove the target file\n> 3. MoveFile() to rename the old file to new\n> \n> under Windows 95/98. Of course, some other process could create \n> the target file between 2 and 3, so their rename() would just \n> error out in that situation. I haven't tested it, but I recall \n> reading somewhere that MoveFileEx() has the ability to rename an \n> opened file. I'm 99% sure MoveFile() will fail if the source \n> file is open.\n\nOK, I downloaded APR and see in apr_file_rename():\n\n if (MoveFileEx(frompath, topath, MOVEFILE_REPLACE_EXISTING |\n MOVEFILE_COPY_ALLOWED))\n\n\nLooking at the entire APR function, they have lots of tests so it works\non Win9X and wide characters. I think we will just use the APR as a\nguide in implementing the things we need. 
I think MoveFileEx() is the\nproper way to go; any other solution requires loop tests for rename.\n\nI see the MoveFileEx manual page at:\n\n\thttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/base/movefile.asp\n\n> > Second, when you unlink() a file on Win32, do applications continue\n> > accessing the old file contents if they had the file open before the\n> > unlink?\n> > \n> \n> unlink() just calls DeleteFile() which will error if:\n> \n> 1. The target file is in use\n> \n> CreateFile() has the option:\n> \n> FILE_FLAG_DELETE_ON_CLOSE\n> \n> which might be able to be used to simulate traditional unlink() \n> behavior.\n\nNo, that flag isn't going to help us. I wonder what MoveFileEx does if\nthe target file exists _and_ is open by another user? I don't see any\nloop in that Win32 rename() routine, and I looked at the Unix version of\napr_file_rename and its just a straight rename() call.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 16:24:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
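[Editor's note: the Unix rename() semantics this thread keeps returning to can be sketched in a few lines. The following is a hypothetical, portable Python illustration (file names made up): `os.replace()` stands in for the replacing rename — on Windows it is implemented with MoveFileEx(MOVEFILE_REPLACE_EXISTING) — and the point shown is that a handle opened before the replace keeps reading the old contents, while a fresh open sees the new file.]

```python
import os
import tempfile

# Demonstrates the Unix rename() behavior the Win32 port needs to
# reproduce: a reader that opened the file before the replace keeps
# seeing the old contents; a new open sees the replacement.
workdir = tempfile.mkdtemp()
foo = os.path.join(workdir, "foo")
bar = os.path.join(workdir, "bar")

with open(foo, "w") as f:
    f.write("This is FOO!")
with open(bar, "w") as f:
    f.write("This is BAR!")

reader = open(foo)        # opened before the rename (like a backend
                          # holding pg_pwd open)
os.replace(bar, foo)      # atomic replace; "foo" never disappears

old_view = reader.read()  # old handle: still the original data (POSIX)
reader.close()
with open(foo) as f:
    new_view = f.read()   # new open: the replacement data
```

On a POSIX system the old handle continues to reference the original file's data even after the name has been reused, which is exactly property 1 of Bruce's later list.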
{
"msg_contents": "Bruce Momjian wrote:\n> > > Second, when you unlink() a file on Win32, do applications continue\n> > > accessing the old file contents if they had the file open before the\n> > > unlink?\n> > > \n> > \n> > unlink() just calls DeleteFile() which will error if:\n> > \n> > 1. The target file is in use\n> > \n> > CreateFile() has the option:\n> > \n> > FILE_FLAG_DELETE_ON_CLOSE\n> > \n> > which might be able to be used to simulate traditional unlink() \n> > behavior.\n> \n> No, that flag isn't going to help us. I wonder what MoveFileEx does if\n> the target file exists _and_ is open by another user? I don't see any\n> loop in that Win32 rename() routine, and I looked at the Unix version of\n> apr_file_rename and its just a straight rename() call.\n\nThis says that if the target is in use, it is overwritten:\n\n\thttp://support.microsoft.com/default.aspx?scid=KB;EN-US;q140570&\n\nWhile I think that is good news, does it open the problem of other\nreaders reading partial updates to the file and therefore seeing\ngarbage. Not sure how to handle that, nor am I even sure how I would\ntest it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 22:50:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Bruce Momjian wrote:\n>>>\n>>>unlink() just calls DeleteFile() which will error if:\n>>>\n>>>1. The target file is in use\n>>>\n>>>CreateFile() has the option:\n>>>\n>>>FILE_FLAG_DELETE_ON_CLOSE\n>>>\n>>>which might be able to be used to simulate traditional unlink() \n>>>behavior.\n>>\n>>No, that flag isn't going to help us. I wonder what MoveFileEx does if\n>>the target file exists _and_ is open by another user? I don't see any\n>>loop in that Win32 rename() routine, and I looked at the Unix version of\n>>apr_file_rename and its just a straight rename() call.\n> \n> \n> This says that if the target is in use, it is overwritten:\n> \n> \thttp://support.microsoft.com/default.aspx?scid=KB;EN-US;q140570&\n\nI read the article and did not come away with that conclusion. \nThe article describes using the MOVEFILE_DELAY_UNTIL_REBOOT \nflag, which was created for the express purpose of allowing a \nSETUP.EXE to remove itself, or rather tell Windows to remove it \non the next reboot. Also, if you want the Win32 port to run in \n95/98/ME, you can't rely on MoveFileEx(), you have to use \nMoveFile().\n\nI will do some testing with concurrency and let you know. But \ndon't get your hopes up. This is one of the many advantages that \nTABLESPACEs have when more than one relation is stored in a \nsingle DATAFILE. There was Oracle for MS-DOS, after all..\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n",
"msg_date": "Fri, 20 Sep 2002 00:01:49 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Mike Mascari wrote:\n> I read the article and did not come away with that conclusion. \n> The article describes using the MOVEFILE_DELAY_UNTIL_REBOOT \n> flag, which was created for the express purpose of allowing a \n> SETUP.EXE to remove itself, or rather tell Windows to remove it \n> on the next reboot. Also, if you want the Win32 port to run in \n> 95/98/ME, you can't rely on MoveFileEx(), you have to use \n> MoveFile().\n> \n> I will do some testing with concurrency and let you know. But \n> don't get your hopes up. This is one of the many advantages that \n> TABLESPACEs have when more than one relation is stored in a \n> single DATAFILE. There was Oracle for MS-DOS, after all..\n\nI was focusing on handling of pg_pwd and other config file that are\nwritten by various backend while other backends are reading them. The\nactual data files should be OK because we have an exclusive lock when we\nare adding/removing them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 00:05:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Mike Mascari wrote:\n> \n>>I will do some testing with concurrency and let you know. But \n>>don't get your hopes up. This is one of the many advantages that \n>>TABLESPACEs have when more than one relation is stored in a \n>>single DATAFILE. There was Oracle for MS-DOS, after all..\n> \n> \n> I was focusing on handling of pg_pwd and other config file that are\n> written by various backend while other backends are reading them. The\n> actual data files should be OK because we have an exclusive lock when we\n> are adding/removing them.\n> \n\nOK. So you want to test:\n\n1. Process 1 opens \"foo\"\n2. Process 2 opens \"foo\"\n3. Process 1 renames \"foo\" to \"bar\"\n4. Process 2 can safely read from its open file handle\n\nIs that what you want tested? I have a small Win32 app ready to \ntest. Just let me know the scenarios...\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n",
"msg_date": "Fri, 20 Sep 2002 00:31:46 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Mike Mascari wrote:\n> Bruce Momjian wrote:\n> \n>> Mike Mascari wrote:\n>>\n>>> I will do some testing with concurrency and let you know. But don't \n>>> get your hopes up. This is one of the many advantages that \n>>> TABLESPACEs have when more than one relation is stored in a single \n>>> DATAFILE. There was Oracle for MS-DOS, after all..\n>>\n>>\n>>\n>> I was focusing on handling of pg_pwd and other config file that are\n>> written by various backend while other backends are reading them. The\n>> actual data files should be OK because we have an exclusive lock when we\n>> are adding/removing them.\n>>\n> \n> OK. So you want to test:\n> \n> 1. Process 1 opens \"foo\"\n> 2. Process 2 opens \"foo\"\n> 3. Process 1 renames \"foo\" to \"bar\"\n> 4. Process 2 can safely read from its open file handle\n\nActually, looking at the pg_pwd code, you want to determine a \nway for:\n\n1. Process 1 opens \"foo\"\n2. Process 2 opens \"foo\"\n3. Process 1 creates \"bar\"\n4. Process 1 renames \"bar\" to \"foo\"\n5. Process 2 can continue to read data from the open file handle \nand get the original \"foo\" data.\n\nIs that correct?\n\nMike Mascari\nmascarm@mascari.com\n\n",
"msg_date": "Fri, 20 Sep 2002 01:01:03 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Mike Mascari wrote:\n> Actually, looking at the pg_pwd code, you want to determine a \n> way for:\n> \n> 1. Process 1 opens \"foo\"\n> 2. Process 2 opens \"foo\"\n> 3. Process 1 creates \"bar\"\n> 4. Process 1 renames \"bar\" to \"foo\"\n> 5. Process 2 can continue to read data from the open file handle \n> and get the original \"foo\" data.\n\nYep, that's it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 01:29:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Mike Mascari wrote:\n> \n>>Actually, looking at the pg_pwd code, you want to determine a \n>>way for:\n>>\n>>1. Process 1 opens \"foo\"\n>>2. Process 2 opens \"foo\"\n>>3. Process 1 creates \"bar\"\n>>4. Process 1 renames \"bar\" to \"foo\"\n>>5. Process 2 can continue to read data from the open file handle \n>>and get the original \"foo\" data.\n> \n> \n> Yep, that's it.\n> \n\nSo far, MoveFileEx(\"foo\", \"bar\", MOVEFILE_REPLACE_EXISTING) \nreturns \"Access Denied\" when Process 1 attempts the rename. But \nI'm continuing to investigate the possibilities...\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n",
"msg_date": "Fri, 20 Sep 2002 01:35:23 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "\nOn Fri, 20 Sep 2002, Mike Mascari wrote:\n\n> Bruce Momjian wrote:\n> > Mike Mascari wrote:\n> >\n> >>Actually, looking at the pg_pwd code, you want to determine a\n> >>way for:\n> >>\n> >>1. Process 1 opens \"foo\"\n> >>2. Process 2 opens \"foo\"\n> >>3. Process 1 creates \"bar\"\n> >>4. Process 1 renames \"bar\" to \"foo\"\n> >>5. Process 2 can continue to read data from the open file handle\n> >>and get the original \"foo\" data.\n> >\n> >\n> > Yep, that's it.\n> >\n>\n> So far, MoveFileEx(\"foo\", \"bar\", MOVEFILE_REPLACE_EXISTING)\n> returns \"Access Denied\" when Process 1 attempts the rename. But\n> I'm continuing to investigate the possibilities...\n\nDoes a sequence like\nProcess 1 opens \"foo\"\nProcess 2 opens \"foo\"\nProcess 1 creates \"bar\"\nProcess 1 renames \"foo\" to <something>\n - where something is generated to not overlap an existing file\nProcess 1 renames \"bar\" to \"foo\"\nProcess 2 continues reading\nlet you do the replace and keep reading (at the penalty that\nyou've now got to have a way to know when to remove the\nvarious <something>s)\n\n\n",
"msg_date": "Thu, 19 Sep 2002 22:50:36 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
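[Editor's note: Stephan's two-rename sequence can be sketched as a short, hypothetical Python illustration — `os.rename()` stands in for the Win32 MoveFile() calls, and all names are made up. The unique "aside" name avoids the target-exists failure, at the cost of the cleanup problem he mentions.]

```python
import os
import tempfile
import uuid

workdir = tempfile.mkdtemp()
foo = os.path.join(workdir, "foo")
bar = os.path.join(workdir, "bar")
with open(foo, "w") as f:
    f.write("old contents")
with open(bar, "w") as f:
    f.write("new contents")

reader = open(foo)                         # Process 2: opens "foo"

# Process 1: rename "foo" aside under a name that cannot collide,
# then rename "bar" into place.  Note "foo" is briefly missing
# between the two steps -- the race Mike identifies later.
aside = os.path.join(workdir, "foo.old." + uuid.uuid4().hex)
os.rename(foo, aside)                      # "foo" -> <something>
os.rename(bar, foo)                        # "bar" -> "foo"

still_old = reader.read()                  # Process 2 keeps reading
reader.close()
os.remove(aside)                           # the <something> cleanup problem
with open(foo) as f:
    now_new = f.read()
```

The penalty Stephan notes is visible in the last steps: someone has to know when every `<something>` file is no longer held open and can be removed.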
{
"msg_contents": "Stephan Szabo wrote:\n> On Fri, 20 Sep 2002, Mike Mascari wrote:\n>>Bruce Momjian wrote:\n>>>Mike Mascari wrote:\n>>>>Actually, looking at the pg_pwd code, you want to determine a\n>>>>way for:\n>>>>\n>>>>1. Process 1 opens \"foo\"\n>>>>2. Process 2 opens \"foo\"\n>>>>3. Process 1 creates \"bar\"\n>>>>4. Process 1 renames \"bar\" to \"foo\"\n>>>>5. Process 2 can continue to read data from the open file handle\n>>>>and get the original \"foo\" data.\n>>>\n>>>\n>>>Yep, that's it.\n>>>\n>>\n>>So far, MoveFileEx(\"foo\", \"bar\", MOVEFILE_REPLACE_EXISTING)\n>>returns \"Access Denied\" when Process 1 attempts the rename. But\n>>I'm continuing to investigate the possibilities...\n> \n> \n> Does a sequence like\n> Process 1 opens \"foo\"\n> Process 2 opens \"foo\"\n> Process 1 creates \"bar\"\n> Process 1 renames \"foo\" to <something>\n> - where something is generated to not overlap an existing file\n> Process 1 renames \"bar\" to \"foo\"\n> Process 2 continues reading\n> let you do the replace and keep reading (at the penalty that\n> you've now got to have a way to know when to remove the\n> various <something>s)\n\nYes! Indeed that does work.\n\nMike Mascari\nmascarm@mascari.com\n\n",
"msg_date": "Fri, 20 Sep 2002 02:03:43 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "On Fri, 20 Sep 2002, Mike Mascari wrote:\n\n> Stephan Szabo wrote:\n> > On Fri, 20 Sep 2002, Mike Mascari wrote:\n> >>So far, MoveFileEx(\"foo\", \"bar\", MOVEFILE_REPLACE_EXISTING)\n> >>returns \"Access Denied\" when Process 1 attempts the rename. But\n> >>I'm continuing to investigate the possibilities...\n> >\n> >\n> > Does a sequence like\n> > Process 1 opens \"foo\"\n> > Process 2 opens \"foo\"\n> > Process 1 creates \"bar\"\n> > Process 1 renames \"foo\" to <something>\n> > - where something is generated to not overlap an existing file\n> > Process 1 renames \"bar\" to \"foo\"\n> > Process 2 continues reading\n> > let you do the replace and keep reading (at the penalty that\n> > you've now got to have a way to know when to remove the\n> > various <something>s)\n>\n> Yes! Indeed that does work.\n\nThinking back, I think that may still fail on Win95 (using MoveFile).\nOnce in the past I had to work on (un)installers for Win* and I\nvaguely remember Win95 being more strict than Win98 but that may just\nhave been with moving the executable you're currently running.\n\n",
"msg_date": "Thu, 19 Sep 2002 23:14:14 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Stephan Szabo wrote:\n> On Fri, 20 Sep 2002, Mike Mascari wrote:\n>>\n>>Yes! Indeed that does work.\n> \n> \n> Thinking back, I think that may still fail on Win95 (using MoveFile).\n> Once in the past I had to work on (un)installers for Win* and I\n> vaguely remember Win95 being more strict than Win98 but that may just\n> have been with moving the executable you're currently running.\n\nWell, here's the test:\n\nfoo.txt contains \"This is FOO!\"\nbar.txt contains \"This is BAR!\"\n\nProcess 1 opens foo.txt\nProcess 2 opens foo.txt\nProcess 1 sleeps 7.5 seconds\nProcess 2 sleeps 15 seconds\nProcess 1 uses MoveFile() to rename \"foo.txt\" to \"foo2.txt\"\nProcess 1 uses MoveFile() to rename \"bar.txt\" to \"foo.txt\"\nProcess 1 uses DeleteFile() to remove \"foo2.txt\"\nProcess 2 awakens and displays \"This is FOO!\"\n\nOn the filesystem, we then have:\n\nfoo.txt containing \"This is BAR!\"\n\nThe good news is that this works fine under NT 4 using just \nMoveFile(). The bad news is that it requires the files be opened \nusing CreateFile() with the FILE_SHARE_DELETE flag set. The C \nlibrary which ships with Visual C++ 6 ultimately calls \nCreateFile() via fopen() but with no opportunity through the \nstandard C library routines to use the FILE_SHARE_DELETE flag. \nAnd the FILE_SHARE_DELETE flag cannot be used under Windows \n95/98 (Bad Parameter). Which means, on those platforms, there \nstill doesn't appear to be a solution. Under NT/XP/2K, \nAllocateFile() will have to modified to call CreateFile() \ninstead of fopen(). I'm not sure about ME, but I suspect it \nbehaves similarly to 95/98.\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n",
"msg_date": "Fri, 20 Sep 2002 03:13:26 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
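[Editor's note: opening a file the way Mike describes means bypassing fopen() and calling CreateFile() directly with FILE_SHARE_DELETE in the share mode. A hypothetical ctypes sketch follows, for illustration only — Win32-only at runtime, with constants taken from their documented winnt.h/winbase.h values.]

```python
import ctypes
import sys

# Documented Win32 constants (from winnt.h / winbase.h)
GENERIC_READ = 0x80000000
FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
FILE_SHARE_DELETE = 0x00000004   # lets others rename/delete while open
OPEN_EXISTING = 3
INVALID_HANDLE_VALUE = -1

def open_shared(path):
    """Open `path` so another process may rename or delete it while we
    hold the handle -- the behavior Mike's MoveFile() test depends on.
    Hypothetical helper; raises on non-Windows platforms."""
    if sys.platform != "win32":
        raise OSError("CreateFileW is only available on Windows")
    handle = ctypes.windll.kernel32.CreateFileW(
        path,
        GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        None,           # default security attributes
        OPEN_EXISTING,
        0,              # no special flags
        None)           # no template file
    if handle == INVALID_HANDLE_VALUE:
        raise ctypes.WinError()
    return handle
```

This is the kind of call an NT-only AllocateFile() replacement would have to make instead of fopen(), since the C runtime's fopen() exposes no way to request FILE_SHARE_DELETE.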
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> ... let you do the replace and keep reading (at the penalty that\n> you've now got to have a way to know when to remove the\n> various <something>s)\n\nThat is the hard part. Mike's description omitted one crucial step:\n\n6. The old \"foo\" goes away when the last open file handle for it is\nclosed.\n\nI doubt there is any practical way for Postgres to cause that to happen\nif the OS itself does not have any support for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 10:27:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions "
},
{
"msg_contents": "\nI don't think we are not going to be supporting Win9X so there isn't an\nissue there. We will be supporting Win2000/NT/XP.\n\nI don't understand FILE_SHARE_DELETE. I read the description at:\n\n\thttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/base/createfile.asp\n\nbut I don't understand it:\n\n\tFILE_SHARE_DELETE - Windows NT/2000/XP: Subsequent open operations on\n\tthe object will succeed only if delete access is requested. \n\n---------------------------------------------------------------------------\n\nMike Mascari wrote:\n> Stephan Szabo wrote:\n> > On Fri, 20 Sep 2002, Mike Mascari wrote:\n> >>\n> >>Yes! Indeed that does work.\n> > \n> > \n> > Thinking back, I think that may still fail on Win95 (using MoveFile).\n> > Once in the past I had to work on (un)installers for Win* and I\n> > vaguely remember Win95 being more strict than Win98 but that may just\n> > have been with moving the executable you're currently running.\n> \n> Well, here's the test:\n> \n> foo.txt contains \"This is FOO!\"\n> bar.txt contains \"This is BAR!\"\n> \n> Process 1 opens foo.txt\n> Process 2 opens foo.txt\n> Process 1 sleeps 7.5 seconds\n> Process 2 sleeps 15 seconds\n> Process 1 uses MoveFile() to rename \"foo.txt\" to \"foo2.txt\"\n> Process 1 uses MoveFile() to rename \"bar.txt\" to \"foo.txt\"\n> Process 1 uses DeleteFile() to remove \"foo2.txt\"\n> Process 2 awakens and displays \"This is FOO!\"\n> \n> On the filesystem, we then have:\n> \n> foo.txt containing \"This is BAR!\"\n> \n> The good news is that this works fine under NT 4 using just \n> MoveFile(). The bad news is that it requires the files be opened \n> using CreateFile() with the FILE_SHARE_DELETE flag set. The C \n> library which ships with Visual C++ 6 ultimately calls \n> CreateFile() via fopen() but with no opportunity through the \n> standard C library routines to use the FILE_SHARE_DELETE flag. 
\n> And the FILE_SHARE_DELETE flag cannot be used under Windows \n> 95/98 (Bad Parameter). Which means, on those platforms, there \n> still doesn't appear to be a solution. Under NT/XP/2K, \n> AllocateFile() will have to modified to call CreateFile() \n> instead of fopen(). I'm not sure about ME, but I suspect it \n> behaves similarly to 95/98.\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 10:31:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Bruce Momjian wrote:\n> I don't think we are not going to be supporting Win9X so there isn't an\n> issue there. We will be supporting Win2000/NT/XP.\n> \n> I don't understand FILE_SHARE_DELETE. I read the description at:\n> \n> \thttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/base/createfile.asp\n> \n> but I don't understand it:\n> \n> \tFILE_SHARE_DELETE - Windows NT/2000/XP: Subsequent open operations on\n> \tthe object will succeed only if delete access is requested.\n\nI think that's a rather poor description. I think it just means \nthat if the file is opened once via CreateFile() with \nFILE_SHARE_DELETE, then any subsequent CreateFile() calls will \nfail unless they too have FILE_SHARE_DELETE. In other words, if \none of us can delete this file while its open, any of us can.\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n",
"msg_date": "Fri, 20 Sep 2002 10:57:00 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "\nIt is good that moving the file out of the way works, but it doesn't\ncompletely solve the problem.\n\nWhat we have now with Unix rename is ideal:\n\n\t1) old opens continue seeing the old contents\n\t2) new opens see the new contents\n\t3) the file always exists under the fixed name\n\nWe have that with MoveFileEx(), but we have to loop over the routine\nuntil is succeeds. If we move the old file out of the way, we loose the\nability to know the file always exists and then we have to loop over\nopen() until is succeeds.\n\nI think we may be best just looping on MoveFileEx() until is succeeds. \nWe do the pg_pwd writes while holding an exclusive lock on pg_shadow so\nthat will guarantee that no one else will slip an old version of the\nfile in after we have written it. However, it also prevents pg_shadow\naccess while we are doing the looping. Yuck.\n\n---------------------------------------------------------------------------\n\nMike Mascari wrote:\n> Stephan Szabo wrote:\n> > On Fri, 20 Sep 2002, Mike Mascari wrote:\n> >>Bruce Momjian wrote:\n> >>>Mike Mascari wrote:\n> >>>>Actually, looking at the pg_pwd code, you want to determine a\n> >>>>way for:\n> >>>>\n> >>>>1. Process 1 opens \"foo\"\n> >>>>2. Process 2 opens \"foo\"\n> >>>>3. Process 1 creates \"bar\"\n> >>>>4. Process 1 renames \"bar\" to \"foo\"\n> >>>>5. Process 2 can continue to read data from the open file handle\n> >>>>and get the original \"foo\" data.\n> >>>\n> >>>\n> >>>Yep, that's it.\n> >>>\n> >>\n> >>So far, MoveFileEx(\"foo\", \"bar\", MOVEFILE_REPLACE_EXISTING)\n> >>returns \"Access Denied\" when Process 1 attempts the rename. 
But\n> >>I'm continuing to investigate the possibilities...\n> > \n> > \n> > Does a sequence like\n> > Process 1 opens \"foo\"\n> > Process 2 opens \"foo\"\n> > Process 1 creates \"bar\"\n> > Process 1 renames \"foo\" to <something>\n> > - where something is generated to not overlap an existing file\n> > Process 1 renames \"bar\" to \"foo\"\n> > Process 2 continues reading\n> > let you do the replace and keep reading (at the penalty that\n> > you've now got to have a way to know when to remove the\n> > various <something>s)\n> \n> Yes! Indeed that does work.\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 11:04:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questionst"
},
{
"msg_contents": "Mike Mascari wrote:\n> Bruce Momjian wrote:\n> > I don't think we are not going to be supporting Win9X so there isn't an\n> > issue there. We will be supporting Win2000/NT/XP.\n> > \n> > I don't understand FILE_SHARE_DELETE. I read the description at:\n> > \n> > \thttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/base/createfile.asp\n> > \n> > but I don't understand it:\n> > \n> > \tFILE_SHARE_DELETE - Windows NT/2000/XP: Subsequent open operations on\n> > \tthe object will succeed only if delete access is requested.\n> \n> I think that's a rather poor description. I think it just means \n> that if the file is opened once via CreateFile() with \n> FILE_SHARE_DELETE, then any subsequent CreateFile() calls will \n> fail unless they too have FILE_SHARE_DELETE. In other words, if \n> one of us can delete this file while its open, any of us can.\n\nI don't understand what that gets us.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 11:05:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "On Fri, 20 Sep 2002, Mike Mascari wrote:\n\n> Bruce Momjian wrote:\n> > I don't think we are not going to be supporting Win9X so there isn't an\n> > issue there. We will be supporting Win2000/NT/XP.\n> >\n> > I don't understand FILE_SHARE_DELETE. I read the description at:\n> >\n> > \thttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/base/createfile.asp\n> >\n> > but I don't understand it:\n> >\n> > \tFILE_SHARE_DELETE - Windows NT/2000/XP: Subsequent open operations on\n> > \tthe object will succeed only if delete access is requested.\n>\n> I think that's a rather poor description. I think it just means\n> that if the file is opened once via CreateFile() with\n> FILE_SHARE_DELETE, then any subsequent CreateFile() calls will\n> fail unless they too have FILE_SHARE_DELETE. In other words, if\n> one of us can delete this file while its open, any of us can.\n\nThe question is, what happens if two people have the file open\nand one goes and tries to delete it? Can the other still read\nfrom it?\n\n",
"msg_date": "Fri, 20 Sep 2002 08:10:19 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Mike Mascari wrote:\n\n> instead of fopen(). I'm not sure about ME, but I suspect it\n> behaves similarly to 95/98.\n\nI just checked with Katie and the good news (tm) is that the Win32 port\nwe did here at PeerDirect doesn't support 95/98 and ME anyway. It does\nsupport NT4, 2000 and XP. So don't bother.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Fri, 20 Sep 2002 11:36:39 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think we may be best just looping on MoveFileEx() until is succeeds. \n> We do the pg_pwd writes while holding an exclusive lock on pg_shadow so\n> that will guarantee that no one else will slip an old version of the\n> file in after we have written it. However, it also prevents pg_shadow\n> access while we are doing the looping. Yuck.\n\nSurely you're not evaluating this on the assumption that the pg_shadow\ntriggers are the only places that use rename() ?\n\nI see other places in pgstat and relcache that expect rename() to work\nper Unix spec.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 11:50:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questionst "
},
{
"msg_contents": "Stephan Szabo wrote:\n> On Fri, 20 Sep 2002, Mike Mascari wrote:\n> \n> \n>>I think that's a rather poor description. I think it just means\n>>that if the file is opened once via CreateFile() with\n>>FILE_SHARE_DELETE, then any subsequent CreateFile() calls will\n>>fail unless they too have FILE_SHARE_DELETE. In other words, if\n>>one of us can delete this file while its open, any of us can.\n> \n> \n> The question is, what happens if two people have the file open\n> and one goes and tries to delete it? Can the other still read\n> from it?\n\nYes. I just tested it and it worked. I'll test Bruce's scenario \nas well:\n\nfoo contains: \"FOO\"\nbar contains: \"BAR\"\n\n1. Process 1 opens \"foo\"\n2. Process 2 opens \"foo\"\n3. Process 1 calls MoveFile(\"foo\", \"foo2\");\n4. Process 3 opens \"foo\" <- Successful?\n5. Process 1 calls MoveFile(\"bar\", \"foo\");\n6. Process 4 opens \"foo\" <- Successful?\n7. Process 1 calls DeleteFile(\"foo2\");\n8. Process 1, 2, 3, 4 all read from their respective handles.\n\nI think the thing to worry about is a race condition between the \ntwo MoveFile() attempts. A very ugly hack would be to loop in a \nCreateFile() in an attempt to open \"foo\", giving up if the error \nis not a NOT EXISTS error code.\n\nMike Mascari\nmascarm@mascari.com\n\n",
"msg_date": "Fri, 20 Sep 2002 11:54:52 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think we may be best just looping on MoveFileEx() until is succeeds. \n> > We do the pg_pwd writes while holding an exclusive lock on pg_shadow so\n> > that will guarantee that no one else will slip an old version of the\n> > file in after we have written it. However, it also prevents pg_shadow\n> > access while we are doing the looping. Yuck.\n> \n> Surely you're not evaluating this on the assumption that the pg_shadow\n> triggers are the only places that use rename() ?\n> \n> I see other places in pgstat and relcache that expect rename() to work\n> per Unix spec.\n\nYes, I know there are others but I think we will need _a_ rename that\nworks 100% and then replace that in all Win32 rename cases.\n\nGiven what I have seen, I think a single rename with a loop that uses\nMoveFileEx() may be our best bet. It is localized, doesn't affect the\nopen() code, and should work well. The only downside is that under\nheavy read activity the loop will loop around a few times but I just\ndon't see another solution.\n\nI was initially concerned that the loop in rename could let old renames\nupdate the file overwriting newer contents but I realize now that\nrename() itself has the same issue (an old rename could hit in the code\nafter a newer rename) so in all cases we must already have the proper\nlocking in place.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 11:56:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questionst"
},
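[Editor's note: the "loop until it succeeds" approach Bruce settles on here is essentially a retry wrapper around the replacing rename. A hedged, portable Python sketch follows — `os.replace()` stands in for MoveFileEx(MOVEFILE_REPLACE_EXISTING), and the retry count and delay are arbitrary illustration values.]

```python
import os
import time

def rename_with_retry(src, dst, attempts=100, delay_s=0.1):
    """Keep retrying the replacing rename while some other process
    briefly has `dst` open.  On Win32 the failing call would be
    MoveFileEx(src, dst, MOVEFILE_REPLACE_EXISTING); os.replace()
    stands in here.  Returns True on success, False if every attempt
    failed -- the starvation case Bruce worries about."""
    for _ in range(attempts):
        try:
            os.replace(src, dst)
            return True
        except OSError:
            time.sleep(delay_s)   # target momentarily in use; try again
    return False
```

Because the callers already hold the appropriate lock (e.g. the exclusive lock on pg_shadow during pg_pwd writes), looping here cannot let an older rename overwrite a newer one; it only delays the writer while readers hold the target open.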
{
"msg_contents": "I wrote:\n> Stephan Szabo wrote:\n >>\n>> The question is, what happens if two people have the file open\n>> and one goes and tries to delete it? Can the other still read\n>> from it?\n> \n> Yes. I just tested it and it worked. I'll test Bruce's scenario as well:\n> \n> foo contains: \"FOO\"\n> bar contains: \"BAR\"\n> \n> 1. Process 1 opens \"foo\"\n> 2. Process 2 opens \"foo\"\n> 3. Process 1 calls MoveFile(\"foo\", \"foo2\");\n> 4. Process 3 opens \"foo\" <- Successful?\n> 5. Process 1 calls MoveFile(\"bar\", \"foo\");\n> 6. Process 4 opens \"foo\" <- Successful?\n> 7. Process 1 calls DeleteFile(\"foo2\");\n> 8. Process 1, 2, 3, 4 all read from their respective handles.\n\nProcess 1: \"FOO\"\nProcess 2: \"FOO\"\nProcess 3: Error - File does not exist\nProcess 4: \"BAR\"\n\nIts interesting in that it allows for Unix-style rename() and \nunlink() behavior, but with a race condition. Without Stephan's \ntwo MoveFile() trick and the FILE_SHARE_DELETE flag, however, \nthe result would be Access Denied. Are the places in the backend \nthat use rename() and unlink() renaming and unlinking files that \nare only opened for a brief moment by other backends?\n\nMike Mascari\nmascarm@mascari.com\n\n",
"msg_date": "Fri, 20 Sep 2002 12:27:32 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Mike Mascari wrote:\n> Its interesting in that it allows for Unix-style rename() and \n> unlink() behavior, but with a race condition. Without Stephan's \n> two MoveFile() trick and the FILE_SHARE_DELETE flag, however, \n> the result would be Access Denied. Are the places in the backend \n> that use rename() and unlink() renaming and unlinking files that \n> are only opened for a brief moment by other backends?\n\nYes, those files are only opened for a brief moment. They are not held\nopen.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 13:31:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questions"
},
{
"msg_contents": "Mike Mascari wrote:\n> > foo contains: \"FOO\"\n> > bar contains: \"BAR\"\n> > \n> > 1. Process 1 opens \"foo\"\n> > 2. Process 2 opens \"foo\"\n> > 3. Process 1 calls MoveFile(\"foo\", \"foo2\");\n> > 4. Process 3 opens \"foo\" <- Successful?\n> > 5. Process 1 calls MoveFile(\"bar\", \"foo\");\n> > 6. Process 4 opens \"foo\" <- Successful?\n> > 7. Process 1 calls DeleteFile(\"foo2\");\n> > 8. Process 1, 2, 3, 4 all read from their respective handles.\n> \n> Process 1: \"FOO\"\n> Process 2: \"FOO\"\n> Process 3: Error - File does not exist\n> Process 4: \"BAR\"\n> \n> Its interesting in that it allows for Unix-style rename() and \n> unlink() behavior, but with a race condition. Without Stephan's \n> two MoveFile() trick and the FILE_SHARE_DELETE flag, however, \n> the result would be Access Denied. Are the places in the backend \n> that use rename() and unlink() renaming and unlinking files that \n> are only opened for a brief moment by other backends?\n\nI think we are better off looping over\nMoveFileEx(MOVEFILE_REPLACE_EXISTING) until the file isn't opened by\nanyone. That localizes the changes to rename only and not out to all\nthe opens.\n\nThe open failure loops when the file isn't there seem much worse.\n\nI am a little concerned about starving the rename when there is a lot of\nactivity but I don't see a better solution.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 13:53:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Win32 rename()/unlink() questions"
}
] |
[
{
"msg_contents": "Hi,\n\nShould someone just go though contrib/ and add GRANT EXECUTE on everything?\nSeems pointless doing it ad hoc by the maintainer as it is at the moment...?\n\nChris\n\n",
"msg_date": "Thu, 19 Sep 2002 12:19:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "GRANT EXECUTE"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> Should someone just go though contrib/ and add GRANT EXECUTE on everything?\n> Seems pointless doing it ad hoc by the maintainer as it is at the moment...?\n\nPlease.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 19 Sep 2002 23:38:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: GRANT EXECUTE"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> Hi,\n> \n> Should someone just go though contrib/ and add GRANT EXECUTE on everything?\n> Seems pointless doing it ad hoc by the maintainer as it is at the moment...?\n\nAdded to open item list:\n\n\tAdd GRANT EXECUTE to all /contrib functions\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 18:01:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GRANT EXECUTE"
}
] |
[
{
"msg_contents": "I just saw this in my logs:\n\n2002-09-18 12:13:10 ERROR: cannot open segment 1 of relation users_sessions\n(target block 1342198864): No such file or directory\n\nThis query caused it:\n\nDELETE FROM users_sessions WHERE changed < ('now'::timestamp - '1440\nminutes'::interval) AND name = 'fhnid';\n\nHowever, I cannot repeat the error now. Is this a bug in postgres\nsomewhere.\n\nAlso, what should I do to fix the table properly. I haven't vacuumed it or\nanything yet in case someone wants to analyze it.\n\nChris\n\n",
"msg_date": "Thu, 19 Sep 2002 15:16:36 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Postgres 7.2.2 Segment Error"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I just saw this in my logs:\n> 2002-09-18 12:13:10 ERROR: cannot open segment 1 of relation users_sessions\n> (target block 1342198864): No such file or directory\n\n> This query caused it:\n\n> DELETE FROM users_sessions WHERE changed < ('now'::timestamp - '1440\n> minutes'::interval) AND name = 'fhnid';\n\nWhat does EXPLAIN show as the plan for that query? I'm guessing an\nindexscan, and that the error was caused by reading a broken item\npointer from the index. (1342198864 = hex 50005450, which sure looks\nlike the upper 5 shouldn't be there ... how big is the table, anyway?)\n\n> However, I cannot repeat the error now. Is this a bug in postgres\n> somewhere.\n\nIf the broken item pointer were indeed in the index, I'd expect it to be\n100% repeatable. I'm wondering about flaky memory or some such. Have\nyou run any hardware diagnostics?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 10:12:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 7.2.2 Segment Error "
},
{
"msg_contents": "> > DELETE FROM users_sessions WHERE changed < ('now'::timestamp - '1440\n> > minutes'::interval) AND name = 'fhnid';\n>\n> What does EXPLAIN show as the plan for that query? I'm guessing an\n> indexscan, and that the error was caused by reading a broken item\n> pointer from the index. (1342198864 = hex 50005450, which sure looks\n> like the upper 5 shouldn't be there ... how big is the table, anyway?)\n\nNOTICE: QUERY PLAN:\n\nIndex Scan using users_sessions_cha_name_idx on users_sessions\n(cost=0.00..12738.07 rows=1275 width=6) (actual time=231.74..239.39 rows=2\nloops=1)\nTotal runtime: 239.81 msec\n\nEXPLAIN\n\nThe size of the table:\n\ncanaveral# ls -al 44632\n-rw------- 1 pgsql pgsql 357130240 Sep 19 18:52 44632\n\nThe size of the index:\n\ncanaveral# ls -al 7331245\n-rw------- 1 pgsql pgsql 8151040 Sep 19 18:51 7331245\n\nHoly crap - that table is huge. It's like it's never had a vacuum full sort\nof thing. Going select count(*) takes _ages_ even though there's only 1451\nrows in it - and not particularly large rows. Actually, the longest text\nentry is 3832 characters and the average is 677.\n\nThe sessions table holds normal site session data, like a uid, username,\nsome other stuff, etc. However entries older than two hours or so get\ndeleted. We VACUUM everynight, so why is the on-disk relation growing so\nhuge?\n\n> > However, I cannot repeat the error now. Is this a bug in postgres\n> > somewhere.\n>\n> If the broken item pointer were indeed in the index, I'd expect it to be\n> 100% repeatable. I'm wondering about flaky memory or some such. Have\n> you run any hardware diagnostics?\n\nNo - the thought occured to me that there might be something wacky going on.\nWe've had problems with users_sessions before. Remember when I mailed about\nvacuum failing on it before? You suggested doing a select for update on the\nrelation and that fixed it.\n\nChris\n\n",
"msg_date": "Fri, 20 Sep 2002 10:05:06 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 7.2.2 Segment Error "
},
{
"msg_contents": "On Fri, 20 Sep 2002, Christopher Kings-Lynne wrote:\n\n> > > DELETE FROM users_sessions WHERE changed < ('now'::timestamp - '1440\n> > > minutes'::interval) AND name = 'fhnid';\n> >\n> > What does EXPLAIN show as the plan for that query? I'm guessing an\n> > indexscan, and that the error was caused by reading a broken item\n> > pointer from the index. (1342198864 = hex 50005450, which sure looks\n> > like the upper 5 shouldn't be there ... how big is the table, anyway?)\n> \n> NOTICE: QUERY PLAN:\n> \n> Index Scan using users_sessions_cha_name_idx on users_sessions\n> (cost=0.00..12738.07 rows=1275 width=6) (actual time=231.74..239.39 rows=2\n> loops=1)\n> Total runtime: 239.81 msec\n> \n> EXPLAIN\n> \n> The size of the table:\n> \n> canaveral# ls -al 44632\n> -rw------- 1 pgsql pgsql 357130240 Sep 19 18:52 44632\n\nThis seems remarkably large. Does pg_filedump reveal anything of interest?\n\nGavin\n\n",
"msg_date": "Fri, 20 Sep 2002 13:47:40 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 7.2.2 Segment Error "
},
{
"msg_contents": "> > Index Scan using users_sessions_cha_name_idx on users_sessions\n> > (cost=0.00..12738.07 rows=1275 width=6) (actual\n> time=231.74..239.39 rows=2\n> > loops=1)\n> > Total runtime: 239.81 msec\n> >\n> > EXPLAIN\n> >\n> > The size of the table:\n> >\n> > canaveral# ls -al 44632\n> > -rw------- 1 pgsql pgsql 357130240 Sep 19 18:52 44632\n>\n> This seems remarkably large. Does pg_filedump reveal anything of interest?\n\nWhere on earth do I find that?\n\nBTW - I want to vacuum full this table but I'm holding off until someone\nlike Tom tells me there's nothing more to be gained from it...\n\nChris\n\n",
"msg_date": "Fri, 20 Sep 2002 12:18:39 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 7.2.2 Segment Error "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> The sessions table holds normal site session data, like a uid, username,\n> some other stuff, etc. However entries older than two hours or so get\n> deleted. We VACUUM everynight, so why is the on-disk relation growing so\n> huge?\n\nFSM not big enough, perhaps? Try doing a vacuum full, then looking to\nsee how big the table is (in physical blocks) after one day's normal\nusage. You need at least enough FSM space for that many blocks\n... unless you want to vacuum it more often.\n\n> However, I cannot repeat the error now.\n\nIf you can't reproduce the error then I'm pretty well convinced that\nthere is no problem in the stored data itself. This was either a\nhardware glitch or a software bug causing a memory stomp on the top byte\nof an item pointer retrieved from the index. Although I can't rule out\nthe latter, I find it unlikely given that we don't have similar reports\nfrom other people.\n\nYou may as well do the VACUUM FULL --- I doubt we can learn anything\nfrom examining the table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 10:24:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 7.2.2 Segment Error "
}
] |
[
{
"msg_contents": "\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Note that if you write, say,\n> > set numericcol = numericcol * 3.14159;\n> > my proposal would do the \"right thing\" since the constant would be typed\n> > as numeric to start with and would stay that way. To do what you want\n> > with a float variable, it'd be necessary to write\n> > set numericcol = numericcol * float4col::numeric;\n> \n> > Yes, that is the case where the new behavior would imho not be good (but you \n> > say spec compliant). I loose precision even though there is room to hold it.\n> \n> Lose what precision? It seems silly to imagine that the product of\n\nHave you seen my example ? If calculated in float4 the result of\n1.00000000000001*1000.0-1000.0 would be 0.0, no ? \n\n> a numeric and a float4 is good to more digits than there are in the\n> float4. This is exactly the spec's point: combining an exact and an\n> approximate input will give you an approximate result.\n\nDoes it actually say how approximate the result needs to be, or is it simply \napproximate by nature that one part was only approximate ?\nDo they really mean, that an approximate calculation with one float4 must be \ncalculated in float4 arithmetic ? If you e.g. calculate in float8 it would still \nbe an approximate result and thus imho conform.\n\n> (Unless of course the value in the float4 happens to be exact, eg,\n> an integer of not very many digits. But if you are relying on that\n> to be true, why aren't you using an exact format for storing it?)\n\nProbably because the approximate is more efficient in storage size,\nor the designer knew he only wants to store 6 significant digits ?\n\n> > Informix does the calculations in numeric, and then converts the result\n> > if no casts are supplied (would do set float4col = float4(float4col::numeric * numericcol)).\n> \n> I am not sure what the argument is for following Informix's lead rather\n> than the standard's lead; especially when Informix evidently doesn't\n> understand numerical analysis ;-)\n\nIt was only an example of how someone else does it and was why I asked what \nother db's do. I would e.g. suspect Oracle does it similarly.\nPlease, someone check another db !\n\nAndreas\n",
"msg_date": "Thu, 19 Sep 2002 10:40:46 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Yes, that is the case where the new behavior would imho not be good (but you \n> say spec compliant). I loose precision even though there is room to hold it.\n>> \n>> Lose what precision? It seems silly to imagine that the product of\n\n> Have you seen my example ? If calculated in float4 the result of\n> 1.00000000000001*1000.0-1000.0 would be 0.0, no ? \n\nSo? If you are storing one input as float4, then you cannot rationally\nsay that you know the result to better than 6 digits, because you don't\nknow the input to better than 6 digits. Claiming that 1000.00000000001\nis a more accurate answer for the product than 1000.0 is simply wishful\nthinking on your part: nothing to the right of the sixth digit actually\nmeans a darn thing, because you don't know whether the input was really\nexactly 1000, or should have been perhaps 1000.001.\n\n> Do they really mean, that an approximate calculation with one float4 must be \n> calculated in float4 arithmetic ? If you e.g. calculate in float8 it would still \n> be an approximate result and thus imho conform.\n\nAnd still the output would be illusory: if you think you'd get 16 digits\nof precision that way, then you are failing to grasp the problem.\n\n>> (Unless of course the value in the float4 happens to be exact, eg,\n>> an integer of not very many digits. But if you are relying on that\n>> to be true, why aren't you using an exact format for storing it?)\n\n> Probably because the approximate is more efficient in storage size,\n> or the designer knew he only wants to store 6 significant digits ?\n\nSeems an exceedingly uncompelling scenario. The only values that could\nbe expected to be stored exactly in a float4 (without very careful\nanalysis) are integers of up to 6 digits; you might as well store the\ncolumn as int4 if that's what you plan to keep in it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 09:53:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Will the new casting stuff address this kind of annoyance?\n\nusa=# select average(octet_length(val)) from users_sessions;\nERROR: Function 'average(int4)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\n\nChris\n\n",
"msg_date": "Fri, 20 Sep 2002 10:02:47 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "Doh - I'm stupid. Ignore my question :)\n\nHelps if you spell 'average' as 'avg' :)\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Friday, 20 September 2002 10:03 AM\n> To: Tom Lane; Zeugswetter Andreas SB SD\n> Cc: Bruce Momjian; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Proposal for resolving casting issues \n> \n> \n> Will the new casting stuff address this kind of annoyance?\n> \n> usa=# select average(octet_length(val)) from users_sessions;\n> ERROR: Function 'average(int4)' does not exist\n> Unable to identify a function that satisfies the given argument\n> types\n> You may need to add explicit typecasts\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n",
"msg_date": "Fri, 20 Sep 2002 10:40:31 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Will the new casting stuff address this kind of annoyance?\n> usa=# select average(octet_length(val)) from users_sessions;\n> ERROR: Function 'average(int4)' does not exist\n\nregression=# select * from pg_proc where proname = 'average';\n proname | pronamespace | proowner | prolang | proisagg | prosecdef | proisstrict | proretset | provolatile | pronargs | prorettype | proargtypes | prosrc | probin | proacl\n---------+--------------+----------+---------+----------+-----------+-------------+-----------+-------------+----------+------------+-------------+--------+--------+--------\n(0 rows)\n\n\nNo, I think you'll get the same error ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 22:41:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
}
] |
[
{
"msg_contents": "The patch fix bug described in TODO:\n\n * to_char(0,'FM999.99') returns a period, to_char(1,'FM999.99') does not\n\n NOTE! Please, first apply Neil's patch with to_char() code clean up.\n\n\n Thanks\n\n Karel\n\n\ntest=# select to_char(0,'FM9.9');\n to_char \n---------\n 0.\n(1 row)\n\ntest=# select to_char(1,'FM9.9');\n to_char \n---------\n 1.\n(1 row)\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz",
"msg_date": "Thu, 19 Sep 2002 11:14:58 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "to_char(FM9.9) bug fix"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\nTODO item marked as completed.\n\n---------------------------------------------------------------------------\n\n\nKarel Zak wrote:\n> \n> The patch fix bug described in TODO:\n> \n> * to_char(0,'FM999.99') returns a period, to_char(1,'FM999.99') does not\n> \n> NOTE! Please, first apply Neil's patch with to_char() code clean up.\n> \n> \n> Thanks\n> \n> Karel\n> \n> \n> test=# select to_char(0,'FM9.9');\n> to_char \n> ---------\n> 0.\n> (1 row)\n> \n> test=# select to_char(1,'FM9.9');\n> to_char \n> ---------\n> 1.\n> (1 row)\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 23:57:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: to_char(FM9.9) bug fix"
},
{
"msg_contents": "Karel Zak writes:\n\n> test=# select to_char(0,'FM9.9');\n> to_char\n> ---------\n> 0.\n> (1 row)\n>\n> test=# select to_char(1,'FM9.9');\n> to_char\n> ---------\n> 1.\n> (1 row)\n\nI find this highly bizzare. The FM modifier means to omit unnecessary\ntrailing stuff. There is no reasonable business or scientific custom to\nleave a trailing point after a number.\n\nOr perhaps a more pragmatic question is, how would I print a number\nwithout the trailing point?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 20 Sep 2002 21:24:00 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] to_char(FM9.9) bug fix"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Karel Zak writes:\n>> test=# select to_char(0,'FM9.9');\n>> to_char\n>> ---------\n>> 0.\n>> (1 row)\n>> \n>> test=# select to_char(1,'FM9.9');\n>> to_char\n>> ---------\n>> 1.\n>> (1 row)\n\n> I find this highly bizzare.\n\nNo doubt, but it's what Oracle does (see tests posted to the lists by\nseveral people) and to_char exists to duplicate Oracle behavior. This\nis hardly the silliest aspect of to_char's definition, IMHO ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 17:32:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] to_char(FM9.9) bug fix "
},
{
"msg_contents": "On Fri, Sep 20, 2002 at 09:24:00PM +0200, Peter Eisentraut wrote:\n> Karel Zak writes:\n> \n> > test=# select to_char(0,'FM9.9');\n> > to_char\n> > ---------\n> > 0.\n> > (1 row)\n> >\n> > test=# select to_char(1,'FM9.9');\n> > to_char\n> > ---------\n> > 1.\n> > (1 row)\n> \n> I find this highly bizzare. The FM modifier means to omit unnecessary\n\n In the code it's commented as \"terrible Ora format\" :-)\n\n> trailing stuff. There is no reasonable business or scientific custom to\n> leave a trailing point after a number.\n\n I think so. I don't know who can use format number like '1.' or '.0'. \n Can somebody explain why Oracle implement it, who use it?\n\n> Or perhaps a more pragmatic question is, how would I print a number\n> without the trailing point?\n\n Don't use FM or use FM9.0\n\n Examples:\n\n 'SVRMGR' = Oracle8 Release 8.0.5.0.0\n 'test=#' = PostgreSQL 7.3b1\n\n test=# select to_char(1, 'FM9.9');\n to_char \n ---------\n 1.\n\n SVRMGR> select to_char(1, 'FM9.9') from dual;\n TO_C\n ----\n 1. \n \n test=# select to_char(1, '9.9');\n to_char \n ---------\n 1.0\n \n SVRMGR> select to_char(1, '9.9') from dual;\n TO_C\n ----\n 1.0\n\n test=# select to_char(1, 'FM9.0');\n to_char \n ---------\n 1.0\n\n SVRMGR> select to_char(1, 'FM9.0') from dual;\n TO_C\n ----\n 1.0 \n\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 23 Sep 2002 09:10:24 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] to_char(FM9.9) bug fix"
}
] |
[
{
"msg_contents": "\n> > PS: pg snapshot 09/11 does not compile on AIX (large files (don't want\n> > _LARGE_FILES),\n> \n> Please provide details.\n\nOn AIX we would only want to make the large file api visible (_LARGE_FILE_API)\nwhich automatically gets defined when xlc is used with -qlonglong.\n\n#ifdef _LARGE_FILE_API\nextern off64_t lseek64(int, off64_t, int);\n#endif\n\nconfigure somehow thinks it needs to #define _LARGE_FILES though, which \nthen clashes with pg_config.h's _LARGE_FILES. I think the test needs to \n#include unistd.h .\n\n> > and mb conversions (pg_ascii2mic and pg_mic2ascii not\n> > found in the postmaster and not included from elsewhere)\n\nshared libs on AIX need to be able to resolve all symbols at linkage time.\nThose two symbols are in backend/utils/SUBSYS.o but not in the postgres \nexecutable. \nMy guess is, that they are eliminated by the linker ? Do they need an extern \ndeclaration ?\n\nAndreas\n",
"msg_date": "Thu, 19 Sep 2002 11:21:50 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)"
},
{
"msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> configure somehow thinks it needs to #define _LARGE_FILES though, which\n> then clashes with pg_config.h's _LARGE_FILES. I think the test needs to\n> #include unistd.h .\n\n_LARGE_FILES is defined because it's necessary to make off_t 64 bits. If\nyou disagree, please post compiler output.\n\n> > > and mb conversions (pg_ascii2mic and pg_mic2ascii not\n> > > found in the postmaster and not included from elsewhere)\n>\n> shared libs on AIX need to be able to resolve all symbols at linkage time.\n> Those two symbols are in backend/utils/SUBSYS.o but not in the postgres\n> executable.\n> My guess is, that they are eliminated by the linker ? Do they need an extern\n> declaration ?\n\nThey are defined in backend/utils/mb/conv.c and declared in\ninclude/mb/pg_wchar.h. They're also linked into the postmaster. I don't\nsee anything unusual.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 19 Sep 2002 23:37:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)"
}
] |
[
{
"msg_contents": "\nHello,\n\nI am trying to debug a problem involving DBD::PgSPI that crashes the\nbackend. It used to work fine util we installed perl-5.8. How can I get\na core file of a crashed backend on a debian-linux (unstable) machine?\n\nMy /etc/security/limits.conf is empty. When I login as root \"ulimit -c\"\nshows a limit of 0. If I set the limit to \"unlimited\" and logout/login\nthe limit is back to 0.\n\nIs it sufficient to set the proper limit and then restart postgres in\nthe same shell to obtain core files in case the backend crashes?\n\nThanks in advance, cheers,\n\n-- \nldm@apartia.org \n",
"msg_date": "Thu, 19 Sep 2002 12:18:11 +0200",
"msg_from": "Louis-David Mitterrand <vindex@apartia.org>",
"msg_from_op": true,
"msg_subject": "generating postgres core files on debian"
},
{
"msg_contents": "On Thu, 2002-09-19 at 11:18, Louis-David Mitterrand wrote:\n> \n> Hello,\n> \n> I am trying to debug a problem involving DBD::PgSPI that crashes the\n> backend. It used to work fine util we installed perl-5.8. How can I get\n> a core file of a crashed backend on a debian-linux (unstable) machine?\n> \n> My /etc/security/limits.conf is empty. When I login as root \"ulimit -c\"\n> shows a limit of 0. If I set the limit to \"unlimited\" and logout/login\n> the limit is back to 0.\n\nI think /etc/security/limits.conf is used to limit what you can set with\nulimit rather than dictate the settings. You probably need to put\n\"ulimit -c unlimited\" in ~postgres/.bash_profile.\n\n> Is it sufficient to set the proper limit and then restart postgres in\n> the same shell to obtain core files in case the backend crashes?\n\nYes.\n\nThe core file produced by postmaster from the binary package will not be\nvery useful to you, because the binary is stripped. You need to build\nthe package from source and use the binary from the source tree\n(.../src/backend/postmaster/postmaster), not the one copied into the\npackage tree (.../debian/usr/lib/postgresql/bin/postmaster) since the\nstripping is done on the package tree after the binaries are installed\nthere.\n\nTo build the package:\n\n cd /usr/local/src\n apt-get source postgresql # installs in postgresql-7.2.2\n apt-get build-dep postgresql # build dependencies\n apt-get install devscripts fakeroot # needed for building anything\n cd postgresql-7.2.2\n debuild\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Bring ye all the tithes into the storehouse, that \n there may be meat in mine house, and prove me now \n herewith, saith the LORD of hosts, if I will not open \n you the windows of heaven, and pour you out a \n blessing, that there shall not be room enough to \n receive it.\" Malachi 3:10 \n\n",
"msg_date": "19 Sep 2002 12:17:15 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: generating postgres core files on debian"
},
{
"msg_contents": "On Thu, Sep 19, 2002 at 12:17:15PM +0100, Oliver Elphick wrote:\n> On Thu, 2002-09-19 at 11:18, Louis-David Mitterrand wrote:\n> > \n> > I am trying to debug a problem involving DBD::PgSPI that crashes the\n> > backend. It used to work fine util we installed perl-5.8. How can I get\n> > a core file of a crashed backend on a debian-linux (unstable) machine?\n> > \n> > My /etc/security/limits.conf is empty. When I login as root \"ulimit -c\"\n> > shows a limit of 0. If I set the limit to \"unlimited\" and logout/login\n> > the limit is back to 0.\n> \n> I think /etc/security/limits.conf is used to limit what you can set with\n> ulimit rather than dictate the settings. \n\nAha, that makes sense.\n\n> You probably need to put \"ulimit -c unlimited\" in\n> ~postgres/.bash_profile.\n\nHmm, I hadn't thought of that \n\n> > Is it sufficient to set the proper limit and then restart postgres in\n> > the same shell to obtain core files in case the backend crashes?\n> \n> Yes.\n> \n> The core file produced by postmaster from the binary package will not be\n> very useful to you, because the binary is stripped. You need to build\n> the package from source and use the binary from the source tree\n> (.../src/backend/postmaster/postmaster), not the one copied into the\n> package tree (.../debian/usr/lib/postgresql/bin/postmaster) since the\n> stripping is done on the package tree after the binaries are installed\n> there.\n\nI also suspected that a stripped binary would not help much. Your\nindications will save me much time. \n\n> To build the package:\n> \n> cd /usr/local/src\n> apt-get source postgresql # installs in postgresql-7.2.2\n> apt-get build-dep postgresql # build dependencies\n> apt-get install devscripts fakeroot # needed for building anything\n> cd postgresql-7.2.2\n> debuild\n\nHey, debuild is nice, didn't know about it until now. Cleaner than\n\"dpkg-buildpackage -us -uc\" or \"fakeroot debian/rules binary\" ;)\n\nThanks a lot for your help,\n\n-- \nldm@apartia.org \n",
"msg_date": "Thu, 19 Sep 2002 14:07:15 +0200",
"msg_from": "Louis-David Mitterrand <vindex@apartia.org>",
"msg_from_op": true,
"msg_subject": "Re: generating postgres core files on debian"
}
] |
[
{
"msg_contents": "On Wed, 2002-09-18 at 22:24, Marc G. Fournier wrote:\n> On Wed, 18 Sep 2002, Bruce Momjian wrote:\n> \n> > Sorry, I don't see the logic here. Using postgresql.conf, you set it\n> > once and it remains set until you change it again. With -X, you have to\n> > use it every time. I think that's where the votes came from.\n> \n> Ah, so you are saying that you type out your full command line each and\n> every time you start up the server? I know, in my case, I have a shell\n> script setup that I edit my changes in so that I don't have to remember\n> ...\n\nthe effort/danger of editing a shell script can't be less than editing\npostgresql.conf\n\n> \n> > You argued that -X and GUC make sense, but why add -X when can get it\n> > done at once in postgresql.conf. Also, consider changing the location\n> > does require moving the WAL files, so you already have this extra step.\n> > Adding to postgresql.conf is easy. I don't think you can just point it\n> > at a random empty directory on startup. Our goal was to reduce params\n> > to postmaster/postgres in favor of GUC, not add to them.\n> \n> I don't disagree that editing postgresql.conf is easy, but its not\n> something that ppl would naturally thing of ... if I want to move a\n> directory with most servers I run, I will generally do a man to find out\n> what command options are required to do this change, and, if none are\n> provided, just create a god-forsaken symlink ...\n\nI don't know if I agree with that. Most servers (apache for instance) have\nconfiguration variables on where files are going to live, not command line\noptions.\n\n> \n> The man page for postmaster should have something in it like:\n> \n> -X <directory> Specifies an alternate location for WAL files. Superseded\n> by setting xlog_path in postmaster.conf\n>\n\nWell, as with most (all?) 
GUC variables, wouldn't you have the option of doing\npostmaster -o \"pgxlog=/dev/null\" and have the same functionality as -X ?\n \n> Hell, if you are going to remove -X because its 'easier to do it in\n> postmaster.conf', you should be looking at removing *all* command line\n> args that are better represented in the postmaster.conf file ...\n> \n\nGenerally speaking people should be looking to avoid using command line flags\nand use what's in the postgresql.conf, IMHO.\n\n<snip>\n> \n> the GUC value should override the command line option, agreed ... but the\n> ability to use the command line should not be removed just because some\n> ppl aren't competent enough to adjust their startup scripts if they change\n> their system ...\n> \n\nShouldn't this work the other way around? Use what's in the conf file unless I\nexplicitly state otherwise? IIRC that's how it works with -i\n\nRobert Treat\n\n--\nLAMP :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Thu, 19 Sep 2002 06:28:51 -0700 (PDT)",
"msg_from": "\"Robert Treat\" <xzilla@users.sourceforge.net>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Thu, 19 Sep 2002, Robert Treat wrote:\n\n> I don't know if I agree with that. Most servers (apache for instance) have\n> configuration variables on where files are going to live, not command line\n> options.\n\nNot where it involves *critical* files:\n\nOPTIONS\n -R libexecdir\n This option is only available if Apache was\n built with the SHARED_CORE rule enabled which\n forces the Apache core code to be placed into\n a dynamic shared object (DSO) file. This file\n is searched in a hardcoded path under Server-\n Root per default. Use this option if you want\n to override it.\n\n> Well, as with most (all?) GUC variables, wouldn't you have the option of\n> doing postmaster -o \"pgxlog=/dev/null\" and have the same functionality\n> as -X ?\n\nTrue, but then that negates the whole argument about not having a command\nline option, no? Which I believe was the whole argument on this ... no?\n\n> Shouldn't this work the other way around? Use what's in the conf file\n> unless I explicitly state otherwise? IIRC that's how it works with -i\n\nGod, I wish I had thought to note it at the time ... one of the things I\ndid when I dove into this was to check how various Unix daemons were doing\nit, now I can't recall which I was looking at that mentioned the config\nfile overriding the command line options, but you are correct, the command\nline should override the conf file ...\n\n\n",
"msg_date": "Thu, 19 Sep 2002 13:37:09 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
}
] |
[
{
"msg_contents": "\n> > Yes, that is the case where the new behavior would imho not be good (but you \n> > say spec compliant). I loose precision even though there is room to hold it.\n> >> \n> >> Lose what precision? It seems silly to imagine that the product of\n> \n> > Have you seen my example ? If calculated in float4 the result of\n> > 1.00000000000001*1000.0-1000.0 would be 0.0, no ? \n> \n> So? If you are storing one input as float4, then you cannot rationally\n> say that you know the result to better than 6 digits, because you don't\n> know the input to better than 6 digits. Claiming that 1000.00000000001\n> is a more accurate answer for the product than 1000.0 is simply wishful\n> thinking on your part: nothing to the right of the sixth digit actually\n> means a darn thing, because you don't know whether the input was really\n> exactly 1000, or should have been perhaps 1000.001.\n\nI still see 1E-10 as a better answer to above calculation than your 0,\nand my snapshot 9/11 does return that 1E-10.\n\nFor better understanding the test in pg:\ncreate table atab (a decimal(30,20), b float4, c decimal(30,20), d float4);\ninsert into atab values (1.000000000000001,100000.0,0, 0);\nupdate atab set c=a*b-b, d=a*b-b where 1=1;\ncreate view av as select a*b-b, 1, b, c,d from atab;\n\\d av\nView definition: SELECT ((atab.a * \"numeric\"(atab.b)) - \"numeric\"(atab.b)), atab.a, atab.b\n, atab.c, atab.d FROM atab;\n\nIf I understood your proposal that would now change to:\nView definition: SELECT ((\"float4\"(atab.a) * atab.b) - atab.b), atab.a, atab.b\n, atab.c, atab.d FROM atab;\n\n> \n> > Do they really mean, that an approximate calculation with one float4 must be \n> > calculated in float4 arithmetic ? If you e.g. 
calculate in float8 it would still \n> > be an approximate result and thus imho conform.\n> \n> And still the output would be illusory: if you think you'd get 16 digits\n> of precision that way, then you are failing to grasp the problem.\n\nI have not said 16 digits exact precision. I was saying, that an approximate \nresult calculated in numeric makes more sense, than your float4 calculated result,\nand does the correct thing more often than not in the db centric cases I can think \nof.\n\nI do think I grasp the problem :-)\n\n> >> (Unless of course the value in the float4 happens to be exact, eg,\n> >> an integer of not very many digits. But if you are relying on that\n> >> to be true, why aren't you using an exact format for storing it?)\n> \n> > Probably because the approximate is more efficient in storage size,\n> > or the designer knew he only wants to store 6 significant digits ?\n> \n> Seems an exceedingly uncompelling scenario. The only values that could\n> be expected to be stored exactly in a float4 (without very careful\n> analysis) are integers of up to 6 digits; you might as well store the\n> column as int4 if that's what you plan to keep in it.\n\nYou can store 6 significant digits and an exponent (iirc 10E+-38) ! \ne.g. 1.23456E-20 an int can't do that.\n\nI give up now. I voiced my concern, and that is as far as my interest goes on this\nactually. I still think fielding what other db's do in this area would be a good \nthing before proceeding further.\n\nAndreas\n",
"msg_date": "Thu, 19 Sep 2002 16:57:30 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues "
},
{
"msg_contents": "On Thu, Sep 19, 2002 at 04:57:30PM +0200, Zeugswetter Andreas SB SD wrote:\n> \n> > \n> > > Have you seen my example ? If calculated in float4 the result of\n> > > 1.00000000000001*1000.0-1000.0 would be 0.0, no ? \n> > \n> > So? If you are storing one input as float4, then you cannot rationally\n> > say that you know the result to better than 6 digits, because you don't\n> > know the input to better than 6 digits. Claiming that 1000.00000000001\n> > is a more accurate answer for the product than 1000.0 is simply wishful\n> > thinking on your part: nothing to the right of the sixth digit actually\n> > means a darn thing, because you don't know whether the input was really\n> > exactly 1000, or should have been perhaps 1000.001.\n> \n> I still see 1E-10 as a better answer to above calculation than your 0,\n> and my snapshot 9/11 does return that 1E-10.\n\nWell, then you'd be wrong. Numerical analysis says you _can't_ get more\ninformation out than went in to the _least_ precise part of a calculation.\nWhat your suggesting is the equivalent of wanting to put up a shelf, so\nyou estimate the length of the wall by eyeballing it, then measure the\nwood for the shelf with a micrometer, to be sure it fits exactly right.\n\nWe teach this in intro science classes all the time: if you calculate with\n3.14 as an approximation to pi, you better not report the circumference\nof a circle as 2.45678932 cm, I'll take off points!\n\n> \n> I do think I grasp the problem :-)\n\nHmm, I'm not so sure. ;-)\n\n> \n> I give up now. I voiced my concern, and that is as far as my interest goes on this\n> actually. I still think fielding what other db's do in this area would be a good \n> thing before proceeding further.\n\nAh, sorry to drag this on, then. But this is one of those clear cases\nwere we must fo the right thing, not follow the crowd. 
PostgreSQL gets\nused by a lot of scientific projects (Have you noticed all the big\nbioinformatics databases being mentioned on the lists?). Partly because\nwe're always underfunded, partly because we're academics who like to\nhave the code. If we start getting basic maths wrong, that'll be a huge\nbalck eye for the project.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n",
"msg_date": "Thu, 19 Sep 2002 10:30:51 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "On Thu, Sep 19, 2002 at 10:30:51AM -0500, Ross J. Reedstrom wrote:\n> \n> Ah, sorry to drag this on, then. But this is one of those clear cases\n> were we must fo the right thing, not follow the crowd. PostgreSQL gets\n do\n> used by a lot of scientific projects (Have you noticed all the big\n> bioinformatics databases being mentioned on the lists?). Partly because\n> we're always underfunded, partly because we're academics who like to\n ^^(scientific projects) ^^\n> have the code. If we start getting basic maths wrong, that'll be a huge\n ^^(PostgreSQL)\n> balck eye for the project.\n black\n\nClearly, it's time for an early lunch for me. I need sugar for my brain.\n\nRoss\n\n",
"msg_date": "Thu, 19 Sep 2002 10:42:23 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
}
] |
[
{
"msg_contents": "--howdy:\n\n--not that the process is doing a lot or taking up\n--a lot of resources, it's just something\n--that i allow the users to kill and then\n--it get's passed to me for correction if the\n--simple 'kill <pid>' thing doesn't work.\n\n--what i'm trying to understand is if there\n--is a way to do this without having to restart\n--the database (remember, it's still production)\n--everytime there is a runaway process AND not\n--kill -9 <pid>.\n\n--how can i do this?\n\n-X\n\n-----Original Message-----\nFrom: Shridhar Daithankar [mailto:shridhar_daithankar@persistent.co.in]\nSent: Thursday, September 19, 2002 10:45 AM\nTo: 'pgsql-general@postgresql.org'\nSubject: Re: [GENERAL] killing process question\n\n\nOn 19 Sep 2002 at 10:39, Johnson, Shaunn wrote:\n\n> \n> --thanks for the reply:\n> \n> --no, I don't see anything like that. this is what I have:\n> \n> [snip]\n> \n> postgres 3488 5.6 0.0 11412 4 pts/4 T Sep18 88:53 postgres: \n> joetestdb 16.xx.xx.xx SELECT\n> [/snip]\n> \n> --this tells me that this proc had been running once upon a time (since\nthe \n> 18th) and \n> --has stopped (the 'T'). the user has said that he had since killed the\ntool\n> --that connected to the database and booted his machine.\n> \n> --so ... when I do a 'kill pid' or 'kill -TERM pid' ... *poof* ... nothing\n\n> happens ...\n\nDoes restarting database helps? It may just make the thing go away..\n\nOr stop the database, kill the pid with -9 and start it again.. 
Nothing\nlost..\n\nBye\n Shridhar\n\n--\nShedenhelm's Law:\tAll trails have more uphill sections than they have\ndownhill \nsections.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "Thu, 19 Sep 2002 11:19:03 -0400",
"msg_from": "\"Johnson, Shaunn\" <SJohnson6@bcbsm.com>",
"msg_from_op": true,
"msg_subject": "Re: killing process question"
},
{
"msg_contents": "On 19 Sep 2002 at 11:19, Johnson, Shaunn wrote:\n\n> \n> --howdy:\n> --not that the process is doing a lot or taking up\n> --a lot of resources, it's just something\n> --that i allow the users to kill and then\n> --it get's passed to me for correction if the\n> --simple 'kill <pid>' thing doesn't work.\n> --what i'm trying to understand is if there\n> --is a way to do this without having to restart\n> --the database (remember, it's still production)\n> --everytime there is a runaway process AND not\n> --kill -9 <pid>.\n> --how can i do this?\n\nI did a quick 'grep -rin' on postgresql source code I have(CVS, a week old). \nLooks like postgresql backend is ignoring the SISPIPE which is delivered to \nbackend process when other end is closed. Obviously this is going to cause \nhanging back-ends.\n\nI guess a backend should terminate as if connection is closed. What say? \n\n\n\nBye\n Shridhar\n\n--\nGuillotine, n.:\tA French chopping center.\n\n",
"msg_date": "Thu, 19 Sep 2002 20:58:29 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": false,
"msg_subject": "Re: killing process question"
},
{
"msg_contents": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> I guess a backend should terminate as if connection is closed. What say? \n\nNo.\n\nIt will terminate when it tries to read the next query from the client.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 11:49:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: killing process question "
},
{
"msg_contents": "On 19 Sep 2002 at 11:49, Tom Lane wrote:\n\n> \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > I guess a backend should terminate as if connection is closed. What say? \n> \n> No.\n> \n> It will terminate when it tries to read the next query from the client.\n\nOK. But what if it never reads anything? I mean if the client dies after a \ncomplete transaction i.e. no input pending for either back end or client, will \nit just sit around waiting for select to signal that fd?(AFAIU, that's how \nthings goes in there..)\n\nClearly we have a case where backend is hung persumably. Either it has to have \nan explanation(OK client did aborted abruptly) and/or a possible corrective \naction..\n\nJust some thoughts..\n\n\nBye\n Shridhar\n\n--\nQOTD:\t\"I won't say he's untruthful, but his wife has to call the\tdog for \ndinner.\"\n\n",
"msg_date": "Thu, 19 Sep 2002 21:28:55 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": false,
"msg_subject": "Re: killing process question "
}
] |
[
{
"msg_contents": "--okay, but the client has since terminated \n--it's session (if i understand you correctly).\n\n--is this just something that will just have to\n--hang around until i shutdown the database / boot\n--the machine?\n\n-X\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Thursday, September 19, 2002 11:50 AM\nTo: shridhar_daithankar@persistent.co.in\nCc: 'pgsql-general@postgresql.org'; pgsql-hackers@postgresql.org\nSubject: Re: [GENERAL] killing process question \n\n\n\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> I guess a backend should terminate as if connection is closed. What say? \n\nNo.\n\nIt will terminate when it tries to read the next query from the client.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n\n\nRE: [GENERAL] killing process question \n\n\n--okay, but the client has since terminated \n--it's session (if i understand you correctly).\n\n--is this just something that will just have to\n--hang around until i shutdown the database / boot\n--the machine?\n\n-X\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Thursday, September 19, 2002 11:50 AM\nTo: shridhar_daithankar@persistent.co.in\nCc: 'pgsql-general@postgresql.org'; pgsql-hackers@postgresql.org\nSubject: Re: [GENERAL] killing process question \n\n\n\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> I guess a backend should terminate as if connection is closed. What say? \n\nNo.\n\nIt will terminate when it tries to read the next query from the client.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster",
"msg_date": "Thu, 19 Sep 2002 11:55:18 -0400",
"msg_from": "\"Johnson, Shaunn\" <SJohnson6@bcbsm.com>",
"msg_from_op": true,
"msg_subject": "Re: killing process question "
},
{
"msg_contents": "\"Johnson, Shaunn\" <SJohnson6@bcbsm.com> writes:\n> --okay, but the client has since terminated \n> --it's session (if i understand you correctly).\n> --is this just something that will just have to\n> --hang around until i shutdown the database / boot\n> --the machine?\n\nI dunno. Are you sure this is a backend process? What is it doing\n(or not doing) ... is it chewing any CPU cycles? What status does it\nshow in ps?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 12:07:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: killing process question "
}
] |
[
{
"msg_contents": "\n> > > > Have you seen my example ? If calculated in float4 the result of\n> > > > 1.00000000000001*1000.0-1000.0 would be 0.0, no ? \n> > > \n> > > So? If you are storing one input as float4, then you cannot rationally\n> > > say that you know the result to better than 6 digits, because you don't\n> > > know the input to better than 6 digits. Claiming that 1000.00000000001\n> > > is a more accurate answer for the product than 1000.0 is simply wishful\n> > > thinking on your part: nothing to the right of the sixth digit actually\n> > > means a darn thing, because you don't know whether the input was really\n> > > exactly 1000, or should have been perhaps 1000.001.\n> > \n> > I still see 1E-10 as a better answer to above calculation than your 0,\n> > and my snapshot 9/11 does return that 1E-10.\n> \n> Well, then you'd be wrong. Numerical analysis says you _can't_ get more\n> information out than went in to the _least_ precise part of a calculation.\n> What your suggesting is the equivalent of wanting to put up a shelf, so\n> you estimate the length of the wall by eyeballing it, then measure the\n> wood for the shelf with a micrometer, to be sure it fits \n> exactly right.\n \n> We teach this in intro science classes all the time: if you calculate with\n> 3.14 as an approximation to pi, you better not report the circumference\n> of a circle as 2.45678932 cm, I'll take off points!\n\nWhat if he must display 9 digits and says the result is approximately 2.45678932\nwould that be worse than 2.46000000 ? \nThat is what I am trying to say. Probably the standard is meant as a hint for db \nusers, that such results will be approximate, not where the first digit sits that \nis not exact any more.\n\nFor above calculation pg will in the future return 0.00000000000000000000 as an\nanswer to 1.00000000000001*1000.0-1000.0 when used in my example context, while\nit currently returns 0.000000000010 ... 
\nYou both are saying, that 0.00000000000000000000 is a better answer. \n\nAndreas\n",
"msg_date": "Thu, 19 Sep 2002 18:00:37 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "On Thu, Sep 19, 2002 at 06:00:37PM +0200, Zeugswetter Andreas SB SD wrote:\n> \n> What if he must display 9 digits and says the result is approximately 2.45678932\n> would that be worse than 2.46000000 ? \n\nYup. Trailing zeros are not significant. That's why scientific notation is nice:\nyou don't fill in all those insignificant placeholders.\n\n> \n> For above calculation pg will in the future return 0.00000000000000000000 as an\n> answer to 1.00000000000001*1000.0-1000.0 when used in my example context, while\n> it currently returns 0.000000000010 ... \n> You both are saying, that 0.00000000000000000000 is a better answer. \n\nThat's right. And correct, as well.\n\nRoss\n",
"msg_date": "Thu, 19 Sep 2002 11:05:39 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> For above calculation pg will in the future return 0.00000000000000000000 as an\n> answer to 1.00000000000001*1000.0-1000.0 when used in my example context, while\n> it currently returns 0.000000000010 ... \n> You both are saying, that 0.00000000000000000000 is a better answer. \n\nNot exactly: we are saying it is not a worse answer. There's no reason\nto prefer one over the other, because they are both within the range\nof uncertainty given the inherent uncertainty in the float4 input.\n\nIf you want exact results, you should be using exact datatypes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 12:22:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for resolving casting issues "
}
] |
[
{
"msg_contents": "\"Ian Harding\" <ianh@tpchd.org> writes:\n> It is pltcl [not plpgsql]\n\nAh. I don't think we've done much of any work on plugging leaks in\npltcl :-(.\n\n> It hurts when I do this:\n\n> drop function memleak();\n> create function memleak() returns int as '\n> for {set counter 1} {$counter < 100000} {incr counter} {\n> set sql \"select ''foo''\"\n> spi_exec \"$sql\"\n> }\n> ' language 'pltcl';\n> select memleak();\n\nYeah, I see very quick memory exhaustion also :-(. Looks like the\nspi_exec call is the culprit, but I'm not sure exactly why ...\nanyone have time to look at this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 12:18:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Memory Errors... "
},
{
"msg_contents": "I said:\n> Yeah, I see very quick memory exhaustion also :-(. Looks like the\n> spi_exec call is the culprit, but I'm not sure exactly why ...\n> anyone have time to look at this?\n\nOn looking a little more closely, it's clear that pltcl_SPI_exec()\nshould be, and is not, calling SPI_freetuptable() once it's done with\nthe tuple table returned by SPI_exec(). This needs to be done in all\nthe non-elog code paths after SPI_exec has returned SPI_OK_SELECT.\npltcl_SPI_execp() has a similar problem, and there may be comparable\nbugs in other pltcl routines (not to mention other sources of memory\nleaks, but I think this is the problem for your example).\n\nI have no time to work on this right now; any volunteers out there?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 12:52:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Memory Errors... "
},
{
"msg_contents": "Tom Lane wrote:\n> I said:\n> \n>>Yeah, I see very quick memory exhaustion also :-(. Looks like the\n>>spi_exec call is the culprit, but I'm not sure exactly why ...\n>>anyone have time to look at this?\n> \n> \n> On looking a little more closely, it's clear that pltcl_SPI_exec()\n> should be, and is not, calling SPI_freetuptable() once it's done with\n> the tuple table returned by SPI_exec(). This needs to be done in all\n> the non-elog code paths after SPI_exec has returned SPI_OK_SELECT.\n> pltcl_SPI_execp() has a similar problem, and there may be comparable\n> bugs in other pltcl routines (not to mention other sources of memory\n> leaks, but I think this is the problem for your example).\n> \n> I have no time to work on this right now; any volunteers out there?\n> \n\nI can give it a shot, but probably not until the weekend.\n\nI haven't really followed this thread closely, and don't know tcl very well, \nso it would help if someone can send me a minimal tcl function which triggers \nthe problem.\n\nThanks,\n\nJoe\n\n",
"msg_date": "Thu, 19 Sep 2002 10:41:20 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors..."
},
{
"msg_contents": "\n> \"Ian Harding\" <ianh@tpchd.org> writes:\n> > It is pltcl [not plpgsql]\n\nQuick, minor point, in the manner of a question:\n\nWhy is the pltcl directory called tcl where all the other pls are pl<language>?\n\nThat's in src/pl of course. Also in my anoncvs fetch which is a few weeks old\nnow being from the day before beta freeze.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Thu, 19 Sep 2002 21:55:53 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors..."
},
{
"msg_contents": "Nigel J. Andrews wrote:\n> \n> > \"Ian Harding\" <ianh@tpchd.org> writes:\n> > > It is pltcl [not plpgsql]\n> \n> Quick, minor point, in the manner of a question:\n> \n> Why is the pltcl directory called tcl where all the other pls are pl<language>?\n\nI asked the same question a while ago. I asked about changing it but\nothers didn't want the change. It is hard to rename stuff in CVS and\nkeep proper history.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 17:13:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors..."
},
{
"msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> Why is the pltcl directory called tcl where all the other pls are pl<language>?\n\nConsistency? We don't need no steenking consistency!\n\nPersonally I'd prefer to remove the pl prefix from the other\nsubdirectories of src/pl/ ... it seems redundantly wasted excessive\ntyping ;-) And I'd have preferred to flatten out the src/ subdirectory\nof src/pl/[pl]pgsql, which is likewise redundant and inconsistent with\nthe other PLs.\n\nHowever, it's fairly painful to make any such change without losing\nthe CVS version history for the moved files, which is Not a Good Thing.\nOr breaking our ability to reconstitute old releases from the CVS tree,\nwhich is Much Worse. So I'm afraid we're stuck with this historical\nmischance.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 17:20:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Memory Errors... "
},
{
"msg_contents": "On Thu, 19 Sep 2002, Joe Conway wrote:\n\n> Tom Lane wrote:\n> > I said:\n> > \n> >>Yeah, I see very quick memory exhaustion also :-(. Looks like the\n> >>spi_exec call is the culprit, but I'm not sure exactly why ...\n> >>anyone have time to look at this?\n> > \n> > \n> > On looking a little more closely, it's clear that pltcl_SPI_exec()\n> > should be, and is not, calling SPI_freetuptable() once it's done with\n> > the tuple table returned by SPI_exec(). This needs to be done in all\n> > the non-elog code paths after SPI_exec has returned SPI_OK_SELECT.\n> > pltcl_SPI_execp() has a similar problem, and there may be comparable\n> > bugs in other pltcl routines (not to mention other sources of memory\n> > leaks, but I think this is the problem for your example).\n> > \n> > I have no time to work on this right now; any volunteers out there?\n> > \n> \n> I can give it a shot, but probably not until the weekend.\n> \n> I haven't really followed this thread closely, and don't know tcl very well, \n> so it would help if someone can send me a minimal tcl function which triggers \n> the problem.\n\n\nI can probably take a look at this tomorrow, already started by looking at the\npltcl_SPI_exec routine. I think a quick glance at ...init_unknown() also shows\na lack of tuptable freeing.\n\n\n-- \nNigel J. Andrews\n\n",
"msg_date": "Thu, 19 Sep 2002 22:39:50 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors..."
},
{
"msg_contents": "Nigel J. Andrews wrote:\n> On Thu, 19 Sep 2002, Joe Conway wrote:\n>>I can give it a shot, but probably not until the weekend.\n>>\n>>I haven't really followed this thread closely, and don't know tcl very well, \n>>so it would help if someone can send me a minimal tcl function which triggers \n>>the problem.\n> \n> I can probably take a look at this tomorrow, already started by looking at the\n> pltcl_SPI_exec routine. I think a quick glance at ...init_unknown() also shows\n> a lack of tuptable freeing.\n> \n\nOK -- let me know if you can't find the time and I'll jump back in to it.\n\nJoe\n\n\n\n",
"msg_date": "Thu, 19 Sep 2002 14:58:38 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors..."
},
{
"msg_contents": "On Thu, 19 Sep 2002, Tom Lane wrote:\n\n> \"Ian Harding\" <ianh@tpchd.org> writes:\n> > It is pltcl [not plpgsql]\n> \n> Ah. I don't think we've done much of any work on plugging leaks in\n> pltcl :-(.\n> \n> > It hurts when I do this:\n> \n> > drop function memleak();\n> > create function memleak() returns int as '\n> > for {set counter 1} {$counter < 100000} {incr counter} {\n> > set sql \"select ''foo''\"\n> > spi_exec \"$sql\"\n> > }\n> > ' language 'pltcl';\n> > select memleak();\n> \n> Yeah, I see very quick memory exhaustion also :-(. Looks like the\n> spi_exec call is the culprit, but I'm not sure exactly why ...\n> anyone have time to look at this?\n\nAttached is a patch that frees the SPI_tuptable in all post SPI_exec\nnon-elog paths in both pltcl_SPI_exec() and pltcl_SPI_execp().\n\nThe fault as triggered by the above code has been fixed by this patch but\nplease read my assumptions below to ensure they are correct.\n\nI have assumed that Tom's comment about this only being required in non-elog\npaths is correct, which seems a reasonable assumption to me.\n\nI have also assumed, rather than verified, that freeing the tuptable does\nindeed free the tuples as well. Tests with the above function show that the\nprocess does not increase it's memory footprint during it's operation, although\nif my assumption here is wrong this could be a feature of selecting\ninsignificantly sized tuples.\n\nI have not worried about other uses of SPI_exec for selects in pltcl.c on the\nbasis that those are not under the control of the function writer and the\nnormal function management will release the storage.\n\n\n-- \nNigel J. Andrews",
"msg_date": "Fri, 20 Sep 2002 12:38:42 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [GENERAL] Memory Errors..."
},
{
"msg_contents": "Tom Lane writes:\n\n> On looking a little more closely, it's clear that pltcl_SPI_exec()\n> should be, and is not, calling SPI_freetuptable() once it's done with\n> the tuple table returned by SPI_exec(). This needs to be done in all\n> the non-elog code paths after SPI_exec has returned SPI_OK_SELECT.\n\nThere's a note in the PL/Python documentation that it's leaking memory if\nSPI plans are used. Maybe that's related and someone could take a look at\nit.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 20 Sep 2002 18:39:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors... "
},
{
"msg_contents": "I'll try to have a look-see by the end of the weekend. Any code that\ncan reproduce it or is it ANY code that uses SPI?\n\nGreg\n\n\nOn Fri, 2002-09-20 at 11:39, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > On looking a little more closely, it's clear that pltcl_SPI_exec()\n> > should be, and is not, calling SPI_freetuptable() once it's done with\n> > the tuple table returned by SPI_exec(). This needs to be done in all\n> > the non-elog code paths after SPI_exec has returned SPI_OK_SELECT.\n> \n> There's a note in the PL/Python documentation that it's leaking memory if\n> SPI plans are used. Maybe that's related and someone could take a look at\n> it.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster",
"msg_date": "20 Sep 2002 12:57:34 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors..."
},
{
"msg_contents": "On 20 Sep 2002, Greg Copeland wrote:\n\n> I'll try to have a look-see by the end of the weekend. Any code that\n> can reproduce it or is it ANY code that uses SPI?\n> \n> Greg\n> \n> \n> On Fri, 2002-09-20 at 11:39, Peter Eisentraut wrote:\n> > Tom Lane writes:\n> > \n> > > On looking a little more closely, it's clear that pltcl_SPI_exec()\n> > > should be, and is not, calling SPI_freetuptable() once it's done with\n> > > the tuple table returned by SPI_exec(). This needs to be done in all\n> > > the non-elog code paths after SPI_exec has returned SPI_OK_SELECT.\n> > \n> > There's a note in the PL/Python documentation that it's leaking memory if\n> > SPI plans are used. Maybe that's related and someone could take a look at\n> > it.\n\n\nI've added the call to free the tuptable just as in the pltcl patch I submited\nearlier (which I can't remember if I've seen in the list so I may well resend).\n\nHowever, the comments in the code imply there might be another leak with\nprepared plans. I'm looking into that so I won't be sending this patch just\nyet.\n\n\n-- \nNigel J. Andrews\n\n",
"msg_date": "Fri, 20 Sep 2002 19:17:47 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors..."
},
{
"msg_contents": "Ok, below is the original email I sent, which I can not remember seeing come\nacross the patches list. Please do read the assumptions since they might throw\nup problems with what I have done.\n\nI have attached the pltcl patch again, just in case. For the sake of clarity\nlet's say this patch superscedes the previous one.\n\nI have also attached a patch addressing the similar memory leak problem in\nplpython. This includes a slight adjustment of the tests in the source\ndirectory. The patch also includes a cosmetic change to remove a compiler\nwarning although I think the change makes the code look worse though.\n\nOnce again, please read my text below and also take a quick look at the comment\nI've added in the plpython patch since it may well show that that\nparticular change is complete rubbish.\n\nBTW, by my reckoning the memory leak would occur with prepared plans and\nwithout. If that is not the case then I've been barking up the wrong tree.\n\nOf further note, I have not tested for the memory leak in plpython but the\nbuild passes the normal and big checks. However, I have tried testing using the\ntest.sh script in src/pl/plpython. This seems to be generating errors where\nbefore there were warnings. Can anyone comment on the correctness of this?\nReversing my changes doesn't really help matters so I presume it is something\nelse that is causing the different behaviour.\n\n\n-- \nNigel J. Andrews\n\n\nOn Fri, 20 Sep 2002, Nigel J. Andrews wrote:\n\n> On Thu, 19 Sep 2002, Tom Lane wrote:\n> \n> > \"Ian Harding\" <ianh@tpchd.org> writes:\n> > > It is pltcl [not plpgsql]\n> > \n> > Ah. 
I don't think we've done much of any work on plugging leaks in\n> > pltcl :-(.\n> > \n> > > It hurts when I do this:\n> > \n> > > drop function memleak();\n> > > create function memleak() returns int as '\n> > > for {set counter 1} {$counter < 100000} {incr counter} {\n> > > set sql \"select ''foo''\"\n> > > spi_exec \"$sql\"\n> > > }\n> > > ' language 'pltcl';\n> > > select memleak();\n> > \n> > Yeah, I see very quick memory exhaustion also :-(. Looks like the\n> > spi_exec call is the culprit, but I'm not sure exactly why ...\n> > anyone have time to look at this?\n> \n> Attached is a patch that frees the SPI_tuptable in all post SPI_exec\n> non-elog paths in both pltcl_SPI_exec() and pltcl_SPI_execp().\n> \n> The fault as triggered by the above code has been fixed by this patch but\n> please read my assumptions below to ensure they are correct.\n> \n> I have assumed that Tom's comment about this only being required in non-elog\n> paths is correct, which seems a reasonable assumption to me.\n> \n> I have also assumed, rather than verified, that freeing the tuptable does\n> indeed free the tuples as well. Tests with the above function show that the\n> process does not increase it's memory footprint during it's operation, although\n> if my assumption here is wrong this could be a feature of selecting\n> insignificantly sized tuples.\n> \n> I have not worried about other uses of SPI_exec for selects in pltcl.c on the\n> basis that those are not under the control of the function writer and the\n> normal function management will release the storage.",
"msg_date": "Fri, 20 Sep 2002 23:18:00 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Memory Errors..."
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNigel J. Andrews wrote:\n> On Thu, 19 Sep 2002, Tom Lane wrote:\n> \n> > \"Ian Harding\" <ianh@tpchd.org> writes:\n> > > It is pltcl [not plpgsql]\n> > \n> > Ah. I don't think we've done much of any work on plugging leaks in\n> > pltcl :-(.\n> > \n> > > It hurts when I do this:\n> > \n> > > drop function memleak();\n> > > create function memleak() returns int as '\n> > > for {set counter 1} {$counter < 100000} {incr counter} {\n> > > set sql \"select ''foo''\"\n> > > spi_exec \"$sql\"\n> > > }\n> > > ' language 'pltcl';\n> > > select memleak();\n> > \n> > Yeah, I see very quick memory exhaustion also :-(. Looks like the\n> > spi_exec call is the culprit, but I'm not sure exactly why ...\n> > anyone have time to look at this?\n> \n> Attached is a patch that frees the SPI_tuptable in all post SPI_exec\n> non-elog paths in both pltcl_SPI_exec() and pltcl_SPI_execp().\n> \n> The fault as triggered by the above code has been fixed by this patch but\n> please read my assumptions below to ensure they are correct.\n> \n> I have assumed that Tom's comment about this only being required in non-elog\n> paths is correct, which seems a reasonable assumption to me.\n> \n> I have also assumed, rather than verified, that freeing the tuptable does\n> indeed free the tuples as well. 
Tests with the above function show that the\n> process does not increase it's memory footprint during it's operation, although\n> if my assumption here is wrong this could be a feature of selecting\n> insignificantly sized tuples.\n> \n> I have not worried about other uses of SPI_exec for selects in pltcl.c on the\n> basis that those are not under the control of the function writer and the\n> normal function management will release the storage.\n> \n> \n> -- \n> Nigel J. Andrews\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 21:46:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [GENERAL] Memory Errors..."
},
{
"msg_contents": "\n[ Previous version removed from patches queue..]\n\nThanks for doing both interfaces.\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNigel J. Andrews wrote:\n> \n> Ok, below is the original email I sent, which I can not remember seeing come\n> across the patches list. Please do read the assumptions since they might throw\n> up problems with what I have done.\n> \n> I have attached the pltcl patch again, just in case. For the sake of clarity\n> let's say this patch superscedes the previous one.\n> \n> I have also attached a patch addressing the similar memory leak problem in\n> plpython. This includes a slight adjustment of the tests in the source\n> directory. The patch also includes a cosmetic change to remove a compiler\n> warning although I think the change makes the code look worse though.\n> \n> Once again, please read my text below and also take a quick look at the comment\n> I've added in the plpython patch since it may well show that that\n> particular change is complete rubbish.\n> \n> BTW, by my reckoning the memory leak would occur with prepared plans and\n> without. If that is not the case then I've been barking up the wrong tree.\n> \n> Of further note, I have not tested for the memory leak in plpython but the\n> build passes the normal and big checks. However, I have tried testing using the\n> test.sh script in src/pl/plpython. This seems to be generating errors where\n> before there were warnings. Can anyone comment on the correctness of this?\n> Reversing my changes doesn't really help matters so I presume it is something\n> else that is causing the different behaviour.\n> \n> \n> -- \n> Nigel J. Andrews\n> \n> \n> On Fri, 20 Sep 2002, Nigel J. 
Andrews wrote:\n> \n> > On Thu, 19 Sep 2002, Tom Lane wrote:\n> > \n> > > \"Ian Harding\" <ianh@tpchd.org> writes:\n> > > > It is pltcl [not plpgsql]\n> > > \n> > > Ah. I don't think we've done much of any work on plugging leaks in\n> > > pltcl :-(.\n> > > \n> > > > It hurts when I do this:\n> > > \n> > > > drop function memleak();\n> > > > create function memleak() returns int as '\n> > > > for {set counter 1} {$counter < 100000} {incr counter} {\n> > > > set sql \"select ''foo''\"\n> > > > spi_exec \"$sql\"\n> > > > }\n> > > > ' language 'pltcl';\n> > > > select memleak();\n> > > \n> > > Yeah, I see very quick memory exhaustion also :-(. Looks like the\n> > > spi_exec call is the culprit, but I'm not sure exactly why ...\n> > > anyone have time to look at this?\n> > \n> > Attached is a patch that frees the SPI_tuptable in all post SPI_exec\n> > non-elog paths in both pltcl_SPI_exec() and pltcl_SPI_execp().\n> > \n> > The fault as triggered by the above code has been fixed by this patch but\n> > please read my assumptions below to ensure they are correct.\n> > \n> > I have assumed that Tom's comment about this only being required in non-elog\n> > paths is correct, which seems a reasonable assumption to me.\n> > \n> > I have also assumed, rather than verified, that freeing the tuptable does\n> > indeed free the tuples as well. Tests with the above function show that the\n> > process does not increase it's memory footprint during it's operation, although\n> > if my assumption here is wrong this could be a feature of selecting\n> > insignificantly sized tuples.\n> > \n> > I have not worried about other uses of SPI_exec for selects in pltcl.c on the\n> > basis that those are not under the control of the function writer and the\n> > normal function management will release the storage.\n> \n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 21:53:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory Errors..."
},
{
"msg_contents": "Well, it looks like it was already taken to the mat.\n\n;)\n\n\nGreg\n\n\nOn Thu, 2002-09-19 at 16:58, Joe Conway wrote:\n> Nigel J. Andrews wrote:\n> > On Thu, 19 Sep 2002, Joe Conway wrote:\n> >>I can give it a shot, but probably not until the weekend.\n> >>\n> >>I haven't really followed this thread closely, and don't know tcl very well, \n> >>so it would help if someone can send me a minimal tcl function which triggers \n> >>the problem.\n> > \n> > I can probably take a look at this tomorrow, already started by looking at the\n> > pltcl_SPI_exec routine. I think a quick glance at ...init_unknown() also shows\n> > a lack of tuptable freeing.\n> > \n> \n> OK -- let me know if you can't find the time and I'll jump back in to it.\n> \n> Joe\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "23 Sep 2002 15:26:34 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory Errors..."
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nNigel J. Andrews wrote:\n> \n> Ok, below is the original email I sent, which I can not remember seeing come\n> across the patches list. Please do read the assumptions since they might throw\n> up problems with what I have done.\n> \n> I have attached the pltcl patch again, just in case. For the sake of clarity\n> let's say this patch superscedes the previous one.\n> \n> I have also attached a patch addressing the similar memory leak problem in\n> plpython. This includes a slight adjustment of the tests in the source\n> directory. The patch also includes a cosmetic change to remove a compiler\n> warning although I think the change makes the code look worse though.\n> \n> Once again, please read my text below and also take a quick look at the comment\n> I've added in the plpython patch since it may well show that that\n> particular change is complete rubbish.\n> \n> BTW, by my reckoning the memory leak would occur with prepared plans and\n> without. If that is not the case then I've been barking up the wrong tree.\n> \n> Of further note, I have not tested for the memory leak in plpython but the\n> build passes the normal and big checks. However, I have tried testing using the\n> test.sh script in src/pl/plpython. This seems to be generating errors where\n> before there were warnings. Can anyone comment on the correctness of this?\n> Reversing my changes doesn't really help matters so I presume it is something\n> else that is causing the different behaviour.\n> \n> \n> -- \n> Nigel J. Andrews\n> \n> \n> On Fri, 20 Sep 2002, Nigel J. Andrews wrote:\n> \n> > On Thu, 19 Sep 2002, Tom Lane wrote:\n> > \n> > > \"Ian Harding\" <ianh@tpchd.org> writes:\n> > > > It is pltcl [not plpgsql]\n> > > \n> > > Ah. 
I don't think we've done much of any work on plugging leaks in\n> > > pltcl :-(.\n> > > \n> > > > It hurts when I do this:\n> > > \n> > > > drop function memleak();\n> > > > create function memleak() returns int as '\n> > > > for {set counter 1} {$counter < 100000} {incr counter} {\n> > > > set sql \"select ''foo''\"\n> > > > spi_exec \"$sql\"\n> > > > }\n> > > > ' language 'pltcl';\n> > > > select memleak();\n> > > \n> > > Yeah, I see very quick memory exhaustion also :-(. Looks like the\n> > > spi_exec call is the culprit, but I'm not sure exactly why ...\n> > > anyone have time to look at this?\n> > \n> > Attached is a patch that frees the SPI_tuptable in all post SPI_exec\n> > non-elog paths in both pltcl_SPI_exec() and pltcl_SPI_execp().\n> > \n> > The fault as triggered by the above code has been fixed by this patch but\n> > please read my assumptions below to ensure they are correct.\n> > \n> > I have assumed that Tom's comment about this only being required in non-elog\n> > paths is correct, which seems a reasonable assumption to me.\n> > \n> > I have also assumed, rather than verified, that freeing the tuptable does\n> > indeed free the tuples as well. Tests with the above function show that the\n> > process does not increase it's memory footprint during it's operation, although\n> > if my assumption here is wrong this could be a feature of selecting\n> > insignificantly sized tuples.\n> > \n> > I have not worried about other uses of SPI_exec for selects in pltcl.c on the\n> > basis that those are not under the control of the function writer and the\n> > normal function management will release the storage.\n> \n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 26 Sep 2002 01:23:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory Errors..."
}
] |
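For reference, the fix pattern this thread converges on (releasing the tuple table on every non-elog return path once SPI_exec() has returned SPI_OK_SELECT) can be sketched as follows. This is an illustrative fragment, not the actual pltcl.c patch: it assumes the PostgreSQL backend and SPI headers, it cannot compile standalone, and `run_query_once` is a hypothetical name.

```c
/*
 * Sketch only (requires the PostgreSQL backend and SPI headers; not
 * standalone code). The leak discussed above: SPI_exec() leaves its
 * result in SPI_tuptable, and unless the caller frees it, each
 * spi_exec in a loop accumulates a result set for the lifetime of
 * the backend.
 */
#include "executor/spi.h"

static void
run_query_once(const char *sql)    /* hypothetical helper name */
{
    int spi_rc = SPI_exec(sql, 0);

    if (spi_rc == SPI_OK_SELECT)
    {
        /* ... walk SPI_tuptable->vals[0 .. SPI_processed - 1] ... */

        /* the fix: free the result before returning to the caller */
        SPI_freetuptable(SPI_tuptable);
    }
    /*
     * elog(ERROR, ...) paths need no explicit free here: transaction
     * abort releases the memory context the tuple table lives in,
     * which is why only the non-elog paths needed patching.
     */
}
```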
[
{
"msg_contents": "On Thu, 19 September 2002, \"Marc G. Fournier\" wrote:\n> On Thu, 19 Sep 2002, Robert Treat wrote:\n> > Well, as with most (all?) GUC variables, wouldn't you have the option of\n> > doing postmaster -o \"pgxlog=/dev/null\" and have the same functionality\n> > as -X ?\n> \n> True, but then that negates the whole argument about not having a command\n> line option, no? Which I believe was the whole argument on this ... no?\n>\n\nWell, I think it negates the whole reason to have a specific command line\noption for this. Personally I'd like to see all (well, most) of the command line\noptions go away. We still get people emailing us because they can't get\nphpPgAdmin to work on a system because they forgot to start it with -i. I try to\nexplain to them to edit the tcpip setting in the postgresql.conf, but many have\nnever heard of that setting. \n \n> > Shouldn't this work the other way around? Use what's in the conf file\n> > unless I explicitly state otherwise? IIRC that's how it works with -i\n> \n> God, I wish I had thought to note it at the time ... one of the things I\n> did when I dove into this was to check how various Unix daemons were doing\n> it, now I can't recall which I was looking at that mentioned the config\n> file overriding the command line options, but you are correct, the command\n> line should override the conf file ...\n\nRobert Treat\n\n--\nLAMP :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Thu, 19 Sep 2002 11:19:42 -0700 (PDT)",
"msg_from": "\"Robert Treat\" <xzilla@users.sourceforge.net>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PGXLOG variable worthwhile?"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nAm trying my hand at a bit of C code again. Specifically am trying to\nget Tatsuo's \"pgbench\" code to loop around more than once, but it keeps\non hanging forever at this line:\n\nif ((nsocks = select(maxsock + 1, &input_mask, (fd_set *) NULL,\n\t(fd_set *) NULL, (struct timeval *) NULL)) < 0)\n{\n\netc\n\nRunning this on a FreeBSD 4.6.2 system with PostgreSQL 7.2.2 and gcc\n2.95.3. Looking around the Net seems to say that hangs like this are\ncaused by the select blocking, but that's not helping me any with\nknowing what to do.\n\nDoes anyone have an idea of what I can do, or maybe have a few minutes\nto look at my code and point out the problem?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 20 Sep 2002 04:43:14 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Having no luck with getting pgbench to run multiple times"
},
{
"msg_contents": "Well, you'll probably want to pass in a valid timeval structure if you\ndon't want it to block.\n\nBasically, that snippet tells select to wait on the list of sockets, looking for\nsockets that have data to be read while waiting forever. That means it\nwill block until something appears on one of the sockets you're\nmonitoring.\n\n\nGreg\n\n\nOn Thu, 2002-09-19 at 13:43, Justin Clift wrote:\n> Hi everyone,\n> \n> Am trying my hand at a bit of C code again. Specifically am trying to\n> get Tatsuo's \"pgbench\" code to loop around more than once, but it keeps\n> on hanging forever at this line:\n> \n> if ((nsocks = select(maxsock + 1, &input_mask, (fd_set *) NULL,\n> \t(fd_set *) NULL, (struct timeval *) NULL)) < 0)\n> {\n> \n> etc\n> \n> Running this on a FreeBSD 4.6.2 system with PostgreSQL 7.2.2 and gcc\n> 2.95.3. Looking around the Net seems to say that hangs like this are\n> caused by the select blocking, but that's not helping me any with\n> knowing what to do.\n> \n> Does anyone have an idea of what I can do, or maybe have a few minutes\n> to look at my code and point out the problem?\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "20 Sep 2002 10:44:05 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Having no luck with getting pgbench to run multiple times"
},
{
"msg_contents": "Hi Greg,\n\nThat's cool. Played with it for a while longer, then found out that the\norder that it was being called in didn't work very well as the select()\nwas executed after all the required sockets had been closed/ended.\n\nSo, it just meant a re-ordering of things, and it's now working alright.\n\nAm just \"fine tuning\" this util, and it's looking to be pretty nifty. \nIt automatically tunes local or remote PostgreSQL databases (currently\nit's limited to the shared_buffers, sort_mem, and vacuum_mem\nvariables). But it's a start. :)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nGreg Copeland wrote:\n> \n> Well, you'll probably want to pass in a valid timeval structure if you\n> don't want it to block.\n> \n> Basically, that snippet tells select on the list of sockets, looking for\n> sockets that have data to be read while waiting forever. That means it\n> will block until something appears on one of the sockets your\n> monitoring.\n> \n> Greg\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sat, 21 Sep 2002 01:52:18 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: Having no luck with getting pgbench to run multiple times"
}
] |
[
{
"msg_contents": "Hi,\n\nis a pl/pgSQL function completely parsed once? Or is only the next\nstatement parsed as with many interpreters? If it's the latter it would\nmean one has to run each branch just to see if the syntax is correct. Is\nthat true?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 19 Sep 2002 20:54:27 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PL/pgSQL question"
},
{
"msg_contents": "Michael Meskes wrote:\n\n\n> Hi,\n>\n> is a pl/pgSQL function completely parsed once? Or is only the next\n> statement parsed as with many interpreters? If it's the latter it would\n> mean one has to run each branch just to see if the syntax is correct. Is\n> that true?\n>\n> Michael\n\nIf the docs are true, then the plain PL/pgSQL code is parsed at once,\nbut SQL expressions and queries are not prepared until the branch is\nused. But read for yourself.\n\nTo quote from Programmers Guide (Chapter 23, Section 1):\n\n\"The PL/pgSQL call handler parses the function's source text and produces an\ninternal binary instruction tree the first time the function is called\n(within any one backend process). The instruction tree fully translates the\nPL/pgSQL statement structure, but individual SQL expressions and SQL queries\nused in the function are not translated immediately.\n\nAs each expression and SQL query is first used in the function, the PL/pgSQL\ninterpreter creates a prepared execution plan (using the SPI manager's\nSPI_prepare and SPI_saveplan functions). Subsequent visits to that\nexpression or query re-use the prepared plan. Thus, a function with\nconditional code that contains many statements for which execution plans\nmight be required, will only prepare and save those plans that are really\nused during the lifetime of the database connection. This can provide a\nconsiderable savings of parsing activity. A disadvantage is that errors in a\nspecific expression or query may not be detected until that part of the\nfunction is reached in execution.\"\n\nRegards,\nMichael\n\n\n\n",
"msg_date": "Thu, 19 Sep 2002 21:20:53 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL question"
},
{
"msg_contents": "Michael Paesold wrote:\n> \n> Michael Meskes wrote:\n> \n> > Hi,\n> >\n> > is a pl/pgSQL function completely parsed once? Or is only the next\n> > statement parsed as with many interpreters? If it's the latter it would\n> > mean one has to run each branch just to see if the syntax is correct. Is\n> > that true?\n> >\n> > Michael\n> \n> If the docs are true, than the plain PL/pgSQL code is parsed at once,\n> but SQL expressions and queries are not prepared until the branch is\n> used. But read for yourself.\n\nThat's the way I implemented it. Unless someone changed it, the\ndocumentation is correct.\n\nSomeone might think now it'd be at least handy to have a mechanism to\nenforce parsing of all expressions and queries for debugging purposes.\nBut that's not that easy. As soon as you use for example a record\nvariable, each reference to one of the result row columns is of unknown\ndatatype until that query is actually executed. You cannot parse an SQL\nquery with unknown parameters via SPI.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Thu, 19 Sep 2002 15:46:05 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL question"
},
{
"msg_contents": "Thanks for the explanation.\n\nOn Thu, Sep 19, 2002 at 03:46:05PM -0400, Jan Wieck wrote:\n> Someone might think now it'd be at least handy to have a mechanism to\n> enforce parsing of all expressions and queries for debugging purposes.\n> But that's not that easy. As soon as you use for example a record\n> variable, each reference to one of the result row columns is of unknown\n> datatype until that query is actually executed. You cannot parse an SQL\n> query with unknown parameters via SPI.\n\nThat's what I expected. I just wanted to be sure, before I tell\nsomething that's not correct.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 20 Sep 2002 14:44:05 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL question"
}
] |
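The lazy-planning behaviour Jan and the docs describe can be seen with a small function. This is illustrative only: it needs a live server session to try, and `branch_demo` / `no_such_table` are made-up names (the quoted-body `CREATE FUNCTION` form matches the era of the thread). The point is that the query in the ELSE branch is only planned, and only fails, the first time that branch is actually executed.

```sql
-- Illustrative sketch (requires a running backend; names are hypothetical).
-- The PL/pgSQL body is parsed at first call, but each embedded SQL query
-- is only prepared when its branch is first reached.
CREATE FUNCTION branch_demo(take_good boolean) RETURNS integer AS '
BEGIN
    IF take_good THEN
        RETURN 1;
    ELSE
        -- no_such_table need not exist at CREATE time, nor while
        -- only the IF branch above is ever executed
        RETURN (SELECT val FROM no_such_table);
    END IF;
END;
' LANGUAGE 'plpgsql';

SELECT branch_demo(true);   -- succeeds: the ELSE branch is never planned
SELECT branch_demo(false);  -- first failure: the missing relation is
                            -- detected only now, at execution time
```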
[
{
"msg_contents": "While adding schema support to the JDBC Driver, I came across a query \nwhich occasionally generates some spectacularly bad plans. I have \nattached the query and explain analyze outputs against today's cvs head \nfor queries that take between 9 and 845941 msec. In the JDBC Driver I \nwill specify a reasonable join order using explicit JOINs, but I thought \nsomeone might be interested in a test case for the optimizer.\n\nKris Jurka\n\nThe query tries to determine what foreign keys exists between the \nfollowing tables.\n\ncreate table people (id int4 primary key, name text);\ncreate table policy (id int4 primary key, name text);\ncreate table users (id int4 primary key, people_id int4,\n policy_id int4,\n CONSTRAINT people FOREIGN KEY (people_id) references people(id),\n constraint policy FOREIGN KEY (policy_id) references policy(id));",
"msg_date": "Thu, 19 Sep 2002 15:06:21 -0700",
"msg_from": "Kris Jurka <jurka@ejurka.com>",
"msg_from_op": true,
"msg_subject": "Optimizer generates bad plans."
},
{
"msg_contents": "\nCongratulations. That is the largest plan I have ever seen. ;-)\n\n---------------------------------------------------------------------------\n\nKris Jurka wrote:\n> While adding schema support to the JDBC Driver, I came across a query \n> which occasionally generates some spectacularly bad plans. I have \n> attached the query and explain analyze outputs against today's cvs head \n> for queries that take between 9 and 845941 msec. In the JDBC Driver I \n> will specify a reasonable join order using explicit JOINs, but I thought \n> someone might be interested in a test case for the optimizer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 18:09:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans."
},
{
"msg_contents": "\nWell I was really hoping pg_constraint would solve all my problems, but\nsince contrib/array is not installed by default the conkeys and confkeys\ncolumns aren't terribly useful because they can't be joined to\npg_attribute.\n\nAlso there is not a column to tell you the unique constraint that\nsupports a given foreign key constraint.\n\nSee my post to bugs:\n\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1074855\n\nKris Jurka\n\n\nOn Thu, 19 Sep 2002, Bruce Momjian wrote:\n\n>\n> Congratulations. That is the largest plan I have ever seen. ;-)\n>\n> ---------------------------------------------------------------------------\n>\n> Kris Jurka wrote:\n> > While adding schema support to the JDBC Driver, I came across a query\n> > which occasionally generates some spectacularly bad plans. I have\n> > attached the query and explain analyze outputs against today's cvs head\n> > for queries that take between 9 and 845941 msec. In the JDBC Driver I\n> > will specify a reasonable join order using explicit JOINs, but I thought\n> > someone might be interested in a test case for the optimizer.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Thu, 19 Sep 2002 18:31:57 -0400 (EDT)",
"msg_from": "Kris Jurka <books@ejurka.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans."
},
{
"msg_contents": "Kris Jurka <jurka@ejurka.com> writes:\n> While adding schema support to the JDBC Driver, I came across a query \n> which occasionally generates some spectacularly bad plans.\n\nHm, does an ANALYZE help?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 18:34:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans. "
},
{
"msg_contents": "Maybe not nice, but there's only 32 (64 now?) of them...\n\nJOIN pg_attribute WHERE attnum IN (conkeys[1], conkeys[2], conkeys[3],\n..., conkeys[32])\n\nGreat fun...\n\nOn Thu, 2002-09-19 at 18:31, Kris Jurka wrote:\n> \n> Well I was really hoping pg_constraint would solve all my problems, but\n> since contrib/array is not installed by default the conkeys and confkeys\n> columns aren't terribly useful because they can't be joined to\n> pg_attribute.\n> \n> Also there is not a column to tell you the unique constraint that\n> supports a given foreign key constraint.\n> \n> See my post to bugs:\n> \n> http://fts.postgresql.org/db/mw/msg.html?mid=1074855\n> \n> Kris Jurka\n> \n> \n> On Thu, 19 Sep 2002, Bruce Momjian wrote:\n> \n> >\n> > Congratulations. That is the largest plan I have ever seen. ;-)\n> >\n> > ---------------------------------------------------------------------------\n> >\n> > Kris Jurka wrote:\n> > > While adding schema support to the JDBC Driver, I came across a query\n> > > which occasionally generates some spectacularly bad plans. I have\n> > > attached the query and explain analyze outputs against today's cvs head\n> > > for queries that take between 9 and 845941 msec. In the JDBC Driver I\n> > > will specify a reasonable join order using explicit JOINs, but I thought\n> > > someone might be interested in a test case for the optimizer.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n-- \n Rod Taylor\n\n",
"msg_date": "19 Sep 2002 18:40:17 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans."
},
{
"msg_contents": "Kris Jurka <jurka@ejurka.com> writes:\n> While adding schema support to the JDBC Driver, I came across a\n> query which occasionally generates some spectacularly bad plans.\n\nInteresting. The inconsistency you're seeing is a result of GEQO. I\nwould have hoped that it would have produced a better quality plan\nmore often, but apparently not. On my system, the regular query\noptimizer handily beats GEQO for this query: it produces more\nefficient query plans 100% of the time and takes less time to do so.\n\nFor *this* query at least, raising geqo_threshold would be a good\nidea, but that may not be true universally.\n\n> I thought someone might be interested in a test case for the\n> optimizer.\n\nThanks, it's a useful query -- I've been meaning to take a look at\nGEQO for a while now...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n",
"msg_date": "19 Sep 2002 18:43:27 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans."
},
{
"msg_contents": "\n\nOn Thu, 19 Sep 2002, Tom Lane wrote:\n\n> Kris Jurka <jurka@ejurka.com> writes:\n> > While adding schema support to the JDBC Driver, I came across a query\n> > which occasionally generates some spectacularly bad plans.\n>\n> Hm, does an ANALYZE help?\n>\n\nYes, it does, but I don't understand why. The query is entirely against\npg_catalog tables which have had all of three tables added to them. How\ncan the new ANALYZE stats be significantly different than what came from\nthe ANALYZED template1.\n\nKris Jurka\n\n\n",
"msg_date": "Thu, 19 Sep 2002 18:52:03 -0400 (EDT)",
"msg_from": "Kris Jurka <books@ejurka.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans. "
},
{
"msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Interesting. The inconsistency you're seeing is a result of GEQO. I\n> would have hoped that it would have produced a better quality plan\n> more often, but apparently not. On my system, the regular query\n> optimizer handily beats GEQO for this query: it produces more\n> efficient query plans 100% of the time and takes less time to do so.\n> For *this* query at least, raising geqo_threshold would be a good\n> idea, but that may not be true universally.\n\nThe current GEQO threshold was set some time ago; since then, the\nregular optimizer has been improved while the GEQO code hasn't been\ntouched. It might well be time to ratchet up the threshold.\n\nAnyone care to do some additional experiments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 19:45:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans. "
},
{
"msg_contents": "\n\nOn Thu, 19 Sep 2002, Kris Jurka wrote:\n>\n> On Thu, 19 Sep 2002, Tom Lane wrote:\n>\n> > Kris Jurka <jurka@ejurka.com> writes:\n> > > While adding schema support to the JDBC Driver, I came across a query\n> > > which occasionally generates some spectacularly bad plans.\n> >\n> > Hm, does an ANALYZE help?\n> >\n>\n> Yes, it does, but I don't understand why. The query is entirely against\n> pg_catalog tables which have had all of three tables added to them. How\n> can the new ANALYZE stats be significantly different than what came from\n> the ANALYZED template1.\n>\n> Kris Jurka\n>\n\nLooking at the differences in statistics before and after the ANALYZE the\nonly differences are in correlation. This comes from initdb around line\n1046...\n\n\"$PGPATH\"/postgres $PGSQL_OPT template1 >/dev/null <<EOF\nANALYZE;\nVACUUM FULL FREEZE;\nEOF\n\nCould this be done better in the one step VACUUM FULL FREEZE ANALYZE or\nANALYZING after the VACUUM FULL?\n\nKris Jurka\n\n",
"msg_date": "Fri, 20 Sep 2002 14:45:46 -0400 (EDT)",
"msg_from": "Kris Jurka <books@ejurka.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans. "
},
{
"msg_contents": "Kris Jurka <books@ejurka.com> writes:\n> Looking at the differences in statistics before and after the ANALYZE the\n> only differences are in correlation. This comes from initdb around line\n> 1046...\n\n> \"$PGPATH\"/postgres $PGSQL_OPT template1 >/dev/null <<EOF\n> ANALYZE;\n> VACUUM FULL FREEZE;\n> EOF\n\n> Could this be done better in the one step VACUUM FULL FREEZE ANALYZE or\n> ANALYZING after the VACUUM FULL?\n\nHm. We can't do it like that, because that would leave the pg_statistic\nrows unfrozen. I suppose we could do\n\nVACUUM FULL;\nANALYZE;\nVACUUM FREEZE;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 15:50:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans. "
},
{
"msg_contents": "Tom Lane wrote:\n> Neil Conway <neilc@samurai.com> writes:\n> > Interesting. The inconsistency you're seeing is a result of GEQO. I\n> > would have hoped that it would have produced a better quality plan\n> > more often, but apparently not. On my system, the regular query\n> > optimizer handily beats GEQO for this query: it produces more\n> > efficienty query plans 100% of the time and takes less time to do so.\n> > For *this* query at least, raising geqo_threshold would be a good\n> > idea, but that may not be true universally.\n> \n> The current GEQO threshold was set some time ago; since then, the\n> regular optimizer has been improved while the GEQO code hasn't been\n> touched. It might well be time to ratchet up the threshold.\n> \n> Anyone care to do some additional experiments?\n\nAdded to TODO:\n\n\t* Check GUC geqo_threshold to see if it is still accurate \n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 3 Oct 2002 15:20:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer generates bad plans."
}
] |
[
{
"msg_contents": "Hi,\n\nI just removed the prepare/execute/deallocate function from ecpg's\nparser so there are no conflicts anymore. But for the future (that is\nafter 7.3 is released) I'd like to work something out. The only problem\nI see with using the backend functions is that the backend prepare needs\nthe data type for each variable and I have no idea how to present it\ngiven the embedded syntax. I take it the backend prepare cannot work\nwithout this info, can it?\n\nAlso I still have some open bug reports but on the other hand few to no\navailable time. Shall we add these reports to the TODO list? I doubt\nI'll be able to fix them until release time, at least not all of them.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 20 Sep 2002 08:25:31 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "ECPG"
},
{
"msg_contents": "I had a thought about what to do with the ECPG grammar-too-big problem:\nrather than depending on a beta release of bison, we could attack the\nproblem directly by omitting some of the backend grammar from what ECPG\nsupports. Surely there are not many people using ECPG to issue obscure\nutility commands like, for example, DROP OPERATOR CLASS.\n\nI haven't tried this to see just how much we'd have to dike out, but\nmy guess is that we could push the ecpg grammar down to something that\nwould get through stock bison without omitting anything anyone's even\nremotely likely to miss.\n\nThis is, of course, an ugly hack that we'd want to undo once more\ncapable versions of bison are readily available. But I think it could\ntide us over for a release or two.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Sep 2002 16:18:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG "
},
{
"msg_contents": "Tom Lane wrote:\n> I had a thought about what to do with the ECPG grammar-too-big problem:\n> rather than depending on a beta release of bison, we could attack the\n> problem directly by omitting some of the backend grammar from what ECPG\n> supports. Surely there are not many people using ECPG to issue obscure\n> utility commands like, for example, DROP OPERATOR CLASS.\n> \n> I haven't tried this to see just how much we'd have to dike out, but\n> my guess is that we could push the ecpg grammar down to something that\n> would get through stock bison without omitting anything anyone's even\n> remotely likely to miss.\n> \n> This is, of course, an ugly hack that we'd want to undo once more\n> capable versions of bison are readily available. But I think it could\n> tide us over for a release or two.\n> \n> Comments?\n\nI think we should just go with the bison beta for ecpg and be done with\nit. If we find bugs, we can ask the bison folks to fix it, or work\naround it ourselves.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 20:48:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG"
},
{
"msg_contents": "Michael Meskes wrote:\n> Hi,\n> \n> I just removed the prepare/execute/deallocate function from ecpg's\n> parser so there are no conflicts anymore. But for the future (that is\n> after 7.3 is released) I'd like to work something out. The only problem\n> I see with using the backend functions is that the backend prepare needs\n> the data type for each variable and I have no idea how to present it\n> given the embedded syntax. I take it the backend prepare cannot work\n> without this info, can it?\n> \n> Also I still have some open bug reports but on the other hand few to no\n> available time. Shall we add these reports to the TODO list? I doubt\n> I'll be able to fix them ntil release time, at least not all of them.\n\nYes, please send over additional TODO items. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 21:46:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> I had a thought about what to do with the ECPG grammar-too-big problem:\n>> rather than depending on a beta release of bison, we could attack the\n>> problem directly by omitting some of the backend grammar from what ECPG\n>> supports.\n\n> I think we should just go with the bison beta for ecpg and be done with\n> it. If we find bugs, we can ask the bison folks to fix it, or work\n> around it ourselves.\n\nUsing the beta bison has a lot of disadvantages though, particularly if\nwe want to follow the conservative route of using it only for ecpg and\nnot for the other .y files. How exactly will you cause the build to\nwork that way? How will you make it work for everyone who pulls CVS\nrather than a prebuilt tar file?\n\nAlso, I was quite unthrilled when I experimented tonight with bison\n1.49b, and found it to be a factor of 16 slower than bison 1.28.\n(<2 seconds versus >32 seconds to process the backend gram.y file.)\nIf they don't fix that, bison 1.49+ will have roughly zero uptake\namong real users --- who's going to hold still for that much slowdown,\nto get a tool whose only obvious improvement is that it now rejects\noptional commas?\n\nBottom line is that I don't think we can require bison > 1.28 for a\ngood while yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Sep 2002 23:54:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG "
},
{
"msg_contents": "On Sun, Sep 22, 2002 at 04:18:23PM -0400, Tom Lane wrote:\n> I had a thought about what to do with the ECPG grammar-too-big problem:\n> rather than depending on a beta release of bison, we could attack the\n> problem directly by omitting some of the backend grammar from what ECPG\n> supports. Surely there are not many people using ECPG to issue obscure\n> utility commands like, for example, DROP OPERATOR CLASS.\n\nBut then there may be one. And I'd prefer to not remove features that\nused to exist.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Mon, 23 Sep 2002 12:30:16 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ECPG"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> On Sun, Sep 22, 2002 at 04:18:23PM -0400, Tom Lane wrote:\n>> I had a thought about what to do with the ECPG grammar-too-big problem:\n>> rather than depending on a beta release of bison, we could attack the\n>> problem directly by omitting some of the backend grammar from what ECPG\n>> supports. Surely there are not many people using ECPG to issue obscure\n>> utility commands like, for example, DROP OPERATOR CLASS.\n\n> But then there may be one. And I'd prefer to not remove features that\n> used to exist.\n\nWhat about removing this feature that used to exist: being able to build\necpg with reasonably-standard tools?\n\nI think you should be setting more weight on that concern than on\nsupporting obscure backend commands (some of which didn't even exist in\n7.2, and therefore are certainly not depended on by any ecpg user...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 09:56:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG "
},
{
"msg_contents": "On Mon, Sep 23, 2002 at 09:56:59AM -0400, Tom Lane wrote:\n> What about removing this feature that used to exist: being able to build\n> ecpg with reasonably-standard tools?\n\nHow many people do use bison themselves? Most people I talked to use the\nprecompiled preproc.c.\n\n> I think you should be setting more weight on that concern than on\n> supporting obscure backend commands (some of which didn't even exist in\n> 7.2, and therefore are certainly not depended on by any ecpg user...)\n\nWhich of course would also mean spending quite some time to remove\nfeatures that have to be added again once bison is released.\n\nI will try to get some info from the bison people about the release\ndate.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 24 Sep 2002 15:37:43 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ECPG"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> On Mon, Sep 23, 2002 at 09:56:59AM -0400, Tom Lane wrote:\n>> What about removing this feature that used to exist: being able to build\n>> ecpg with reasonably-standard tools?\n\n> How many people do use bison themselves?\n\n*Everyone* who checks out from our CVS needs to build the bison output\nfiles. There seem to be quite a few such people; they will all be\nforced to upgrade their local bison installations when ecpg starts\nrequiring a newer bison.\n\n> I will try to get some info from the bison people about the release\n> date.\n\nI just this morning got this response from Akim Demaille concerning a\nportability problem in bison 1.49b:\n\n| Thanks for the report, this is addressed in 1.49c. We should upload\n| the latter soon.\n\nSo I'm guessing that a full release is not just around the corner :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Sep 2002 09:53:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG "
},
{
"msg_contents": "On Tue, Sep 24, 2002 at 09:53:10AM -0400, Tom Lane wrote:\n> *Everyone* who checks out from our CVS needs to build the bison output\n> files. There seem to be quite a few such people; they will all be\n\nI thought time stamping is done to make sure the .c file is newer than\nthe .y one.\n\n> forced to upgrade their local bison installations when ecpg starts\n> requiring a newer bison.\n\nValid point.\n\n> | Thanks for the report, this is addressed in 1.49c. We should upload\n> | the latter soon.\n> \n> So I'm guessing that a full release is not just around the corner :-(\n\nArgh.\n\nBut when we remove features from ecpg I would prefer to just remove\npretty obscure stuff and stuff introduced after 7.2 was released so we\nwon't break much. Does anyone have a list of newly added commands? Or do\nI have to get the diff from CVS?\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 25 Sep 2002 10:38:10 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ECPG"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nHave gotten a new PostgreSQL utility together called \"pg_autotune\" that\nload tests using Tatsuo's pgbench code over multiple-iterations,\nattempting to determine decent buffer settings for a specified client\nload.\n\nIt's more a framework for adding stuff to later, but for now it just\nworks (albeit time consuming).\n\nWhere do I post it (here or PATCHES?) because if the code is rugged\nenough then it might be useful in contrib?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 20 Sep 2002 16:33:19 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Where to post a new PostgreSQL utility?"
},
{
"msg_contents": "On 20 Sep 2002 at 16:33, Justin Clift wrote:\n\n> Hi everyone,\n> \n> Have gotten a new PostgreSQL utility together called \"pg_autotune\" that\n> load tests using Tatsuo's pgbench code over multiple-iterations,\n> attempting to determine decent buffer settings for a specified client\n> load.\n> \n> It's more a framework for adding stuff to later, but for now it just\n> works (albeit time consuming).\n> \n> Where do I post it (here or PATCHES?) because if the code is rugged\n> enough then it might be useful in contrib?\n\nor at http://gborg.postgresql.org, \n\nBye\n Shridhar\n\n--\nThe Consultant's Curse:\tWhen the customer has beaten upon you long enough, give \nhim\twhat he asks for, instead of what he needs. This is very strong\tmedicine, \nand is normally only required once.\n\n",
"msg_date": "Fri, 20 Sep 2002 12:25:49 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": false,
"msg_subject": "Re: Where to post a new PostgreSQL utility?"
},
{
"msg_contents": "\ngborg\n\nOn Fri, 20 Sep 2002, Justin Clift wrote:\n\n> Hi everyone,\n>\n> Have gotten a new PostgreSQL utility together called \"pg_autotune\" that\n> load tests using Tatsuo's pgbench code over multiple-iterations,\n> attempting to determine decent buffer settings for a specified client\n> load.\n>\n> It's more a framework for adding stuff to later, but for now it just\n> works (albeit time consuming).\n>\n> Where do I post it (here or PATCHES?) because if the code is rugged\n> enough then it might be useful in contrib?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Fri, 20 Sep 2002 10:55:27 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Where to post a new PostgreSQL utility?"
}
] |
[
{
"msg_contents": "Hi Monty,\n\nWould you be able to update the MySQL manual?\n\nIn 1.9.2.3 of the MySQL manual\n(http://www.mysql.com/doc/en/MySQL-PostgreSQL_benchmarks.html) it\nmentions :\n\n\"The only Open Source benchmark that we know of that can be used to\nbenchmark MySQL Server and PostgreSQL (and other databases) is our own.\"\n\nIt would be great if you could update the manual to reflect the \"Open\nSource Database Benchmark\" (OSDB) as an option as well, as it's received\na decent amount of development from many database coders, is very cross\nplatform, it does multi-user testing (more real world suitable) and it's\nbased on the ANSI AS3AP database testing standard.\n\nThe main site for the Open Source Database Benchmark is:\n\nhttp://osdb.sf.net\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 20 Sep 2002 18:04:05 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Would you be able to update the MySQL manual?"
}
] |
[
{
"msg_contents": "% uname -a\nFreeBSD xor 4.6-STABLE FreeBSD 4.6-STABLE #2: Tue Jun 18 20:48:48 MSD 2002 \nteodor@xor:/usr/src/sys/compile/XOR i386\n...\n\ngmake[3]: `/spool/home/teodor/pgsql/src/backend/commands'\ngcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../src/include -c -o aggregatecmds.o aggregatecmds.c\ngcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../src/include -c -o analyze.o analyze.c\ngcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../src/include -c -o async.o async.c\ngcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../src/include -c -o cluster.o cluster.c\ngcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../src/include -c -o comment.o comment.c\ngcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../src/include -c -o conversioncmds.o conversioncmds.c\ngcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../src/include -c -o copy.o copy.c\ncopy.c: In function `CopyFrom':\ncopy.c:1130: warning: passing arg 1 of `coerce_type_constraints' from \nincompatible pointer type\ncopy.c:1130: warning: passing arg 2 of `coerce_type_constraints' makes integer \nfrom pointer without a cast\ncopy.c:1130: too many arguments to function `coerce_type_constraints'\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Fri, 20 Sep 2002 12:54:19 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "Current CVS is broken"
},
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> gcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n> -I../../../src/include -c -o copy.o copy.c\n> copy.c: In function `CopyFrom':\n> copy.c:1130: warning: passing arg 1 of `coerce_type_constraints' from \n> incompatible pointer type\n> copy.c:1130: warning: passing arg 2 of `coerce_type_constraints' makes integer \n> from pointer without a cast\n> copy.c:1130: too many arguments to function `coerce_type_constraints'\n\nLooks like Rod's domain-constraints-in-COPY patch was stale after my\nrecent casting changes. Will work on it ...\n\n(Bruce, you really oughta do some minimal testing on patches before\ncommitting 'em.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 10:15:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current CVS is broken "
},
{
"msg_contents": "Tom Lane wrote:\n> Teodor Sigaev <teodor@stack.net> writes:\n> > gcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n> > -I../../../src/include -c -o copy.o copy.c\n> > copy.c: In function `CopyFrom':\n> > copy.c:1130: warning: passing arg 1 of `coerce_type_constraints' from \n> > incompatible pointer type\n> > copy.c:1130: warning: passing arg 2 of `coerce_type_constraints' makes integer \n> > from pointer without a cast\n> > copy.c:1130: too many arguments to function `coerce_type_constraints'\n> \n> Looks like Rod's domain-constraints-in-COPY patch was stale after my\n> recent casting changes. Will work on it ...\n> \n> (Bruce, you really oughta do some minimal testing on patches before\n> committing 'em.)\n\nSorry, forgot this time. I do normally test.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 11:21:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current CVS is broken"
},
{
"msg_contents": "Tom Lane wrote:\n> Teodor Sigaev <teodor@stack.net> writes:\n> > gcc -g -O -Wall -Wmissing-prototypes -Wmissing-declarations \n> > -I../../../src/include -c -o copy.o copy.c\n> > copy.c: In function `CopyFrom':\n> > copy.c:1130: warning: passing arg 1 of `coerce_type_constraints' from \n> > incompatible pointer type\n> > copy.c:1130: warning: passing arg 2 of `coerce_type_constraints' makes integer \n> > from pointer without a cast\n> > copy.c:1130: too many arguments to function `coerce_type_constraints'\n> \n> Looks like Rod's domain-constraints-in-COPY patch was stale after my\n> recent casting changes. Will work on it ...\n> \n> (Bruce, you really oughta do some minimal testing on patches before\n> committing 'em.)\n\nOK, patch attached. Tom, what is the proper third parameter in COPY,\nCOERCE_DONTCARE?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/commands/copy.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/commands/copy.c,v\nretrieving revision 1.172\ndiff -c -c -r1.172 copy.c\n*** src/backend/commands/copy.c\t20 Sep 2002 03:52:50 -0000\t1.172\n--- src/backend/commands/copy.c\t20 Sep 2002 15:28:42 -0000\n***************\n*** 1126,1133 ****\n \t\t\t\t\t\t\t\tfalse);\t\t/* not coerced */\n \n \t\t\t\t/* Process constraints */\n! \t\t\t\tnode = coerce_type_constraints(pstate, (Node *) con,\n! \t\t\t\t\t\t\t\t\t\t\t attr[m]->atttypid, true);\n \n \t\t\t\tvalues[m] = ExecEvalExpr(node, econtext,\n \t\t\t\t\t\t\t\t\t\t &isNull, NULL);\n--- 1126,1133 ----\n \t\t\t\t\t\t\t\tfalse);\t\t/* not coerced */\n \n \t\t\t\t/* Process constraints */\n! \t\t\t\tnode = coerce_type_constraints((Node *) con, attr[m]->atttypid,\n! \t\t\t\t\t\t\t\t\t\t\t\tCOERCE_DONTCARE);\n \n \t\t\t\tvalues[m] = ExecEvalExpr(node, econtext,\n \t\t\t\t\t\t\t\t\t\t &isNull, NULL);",
"msg_date": "Fri, 20 Sep 2002 11:29:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current CVS is broken"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, patch attached. Tom, what is the proper third parameter in COPY,\n> COERCE_DONTCARE?\n\nIt would be COERCE_IMPLICIT_CAST. But I don't like the patch as it\nstands anyway, because it is repeating a ton of catalog lookups for\nevery input row. I have more extensive changes in mind ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 11:40:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current CVS is broken "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, patch attached. Tom, what is the proper third parameter in COPY,\n> > COERCE_DONTCARE?\n> \n> It would be COERCE_IMPLICIT_CAST. But I don't like the patch as it\n> stands anyway, because it is repeating a ton of catalog lookups for\n> every input row. I have more extensive changes in mind ...\n\nOK, I changed it to COERCE_IMPLICIT_CAST. The patch did fix a COPY\nfailure for NULL's and DOMAIN so I didn't remove the patch. Feel free\nto wack it around.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 11:43:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current CVS is broken"
}
] |
[
{
"msg_contents": "On Thu, 19 Sep 2002, Thomas Lockhart wrote:\n\n> Actually, a core member did implement this just a few weeks ago. The\n> same crew arguing this time rejected the changes and removed them from\n> the 7.3 feature set.\n\nThe change to make a PG_XLOG environment variable was rejected. Is that\nreally the change you were talking about?\n\n> So some folks have their heels dug in, and the vocal ones are not really\n> interested in understanding the issues which this feature is addressing.\n\nI was one of the vocal objectors, and I certainly understand the\nissues very well. Perhaps we should be saying the vocal supporters\nof the environment variable don't understand the issues.\n\nNone of the objectors I saw have any problem with enabling Windows NT to\nhave the log file somewhere else. In fact, I'm very strongly in support\nof this. But I object to doing it in a way that makes the system more\nfragile and susceptable to not starting properly, or even damage, when\nthere's a simple and obvious way of doing it right: put this in the\ndatabase configuration file rather than in an environment variable.\n\nWhy you object to that, and insist it must be an environment variable\ninstead (if that is indeed what you're doing), I'm not sure....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 20 Sep 2002 18:56:42 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
}
] |
[
{
"msg_contents": "Hiya Lists.... \n\nSomebody could help me? I am with an error when the Postgresql makes Insert, \nDelete or Update \n\nkernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\nkernel: I/O error: dev 08:08, sector 47938856\nkernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\nkernel: I/O error: dev 08:08, sector 47938800\nkernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\nkernel: I/O error: dev 08:08, sector 47938864\nkernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\nkernel: I/O error: dev 08:08, sector 47938872\nkernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\nkernel: I/O error: dev 08:08, sector 47938808\n\nVersion:\n\npostgresql-7.2.1-5\npostgresql-devel-7.2.1-5\npostgresql-libs-7.2.1-5\npostgresql-server-7.2.1-5\n\nServer:\n\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 11\nmodel name : Intel(R) Pentium(R) III CPU family 1266MHz\nstepping : 1\ncpu MHz : 1258.309\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca \ncmov pat pse36 mmx fxsr sse\nbogomips : 2510.02\n\nKernel:\n\nLinux version 2.4.7-10custom (gcc version 2.96 20000731 (Red Hat Linux 7.1 \n2.96-98)) #9 Mon\nSep 16 17:50:13 BRT 2002\n\nMem Info\n total: used: free: shared: buffers: cached:\nMem: 789540864 782589952 6950912 81764352 34406400 583122944\nSwap: 802873344 2957312 799916032\nMemTotal: 771036 kB\nMemFree: 6788 kB\nMemShared: 79848 kB\nBuffers: 33600 kB\nCached: 566568 kB\nSwapCached: 2888 kB\nActive: 14476 kB\nInact_dirty: 664852 kB\nInact_clean: 3576 kB\nInact_target: 136 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 771036 kB\nLowFree: 6788 kB\nSwapTotal: 784056 kB\nSwapFree: 781168 kB\nNrSwapPages: 195292 pages\n\npci\n\n Bus 2, device 9, function 0:\n SCSI storage controller: Adaptec 
7899P (rev 1).\n IRQ 10.\n Master Capable. Latency=96. Min Gnt=40.Max Lat=25.\n I/O at 0x2100 [0x21ff].\n Non-prefetchable 64 bit memory at 0xedfff000 [0xedffffff].\n\nSwap\n\nFilename Type Size Used Priority\n/dev/sda6 partition 784056 2888 -1\n\nTkanksfull\n-- \nRicardo Fogliati\n4 Linux/Vesper\n\n\n",
"msg_date": "Fri, 20 Sep 2002 12:08:51 -0000",
"msg_from": "\"Ricardo Fogliati\" <ricardo@4linux.com.br>",
"msg_from_op": true,
"msg_subject": "SCSI Error"
},
{
"msg_contents": "On Fri, 20 Sep 2002, Ricardo Fogliati wrote:\n\n> Hiya Lists.... \n> \n> Somebody could help me? I am with an error when the Postgresql makes Insert, \n> Delete or Update \n> \n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938856\n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938800\n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938864\n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938872\n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938808\n> \n> Version:\n> \n> postgresql-7.2.1-5\n> [deleted]...\n> Kernel:\n>\n> Linux version 2.4.7-10custom (gcc version 2.96 20000731 (Red Hat Linux 7.1 \n> 2.96-98)) #9 Mon\n> Sep 16 17:50:13 BRT 2002\n\n\nNot sure what you're asking for. That's a hardware error. Back up immediately,\nif you haven't already got decent backups, and fix the disk/controller.\n\nCould possibly be a filesystem error but even if so it's still casting doubt on\nthe hardware. On the other hand I do believe I saw a message recently saying\nthat some of the 2.4 series kernels had file system bugs. I don't know which,\nsomeone else might be able to expand.\n\n\n--\nNigel J. Andrews\n\n\n",
"msg_date": "Fri, 20 Sep 2002 13:32:52 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: SCSI Error"
},
{
"msg_contents": "On Fri, Sep 20, 2002 at 01:32:52PM +0100, Nigel J. Andrews wrote:\n\n> the hardware. On the other hand I do believe I saw a message\n> recently saying that some of the 2.4 series kernels had file system\n> bugs.\n\nI recall problems, offhand, with 2.4.5, 2.4.10, 2.4.11 (which was so\nbroken that you couldn't recover), and 2.4.15. I seem to recall a\nreport on 2.4.12, also. There's a page at\n<http://www.atnf.csiro.au/people/rgooch/linux/docs/kernel-newsflash.html>\nthat provides some summaries of known problems, and filesystem\ncorruption doesn't show up for all the ones I mentioned, so maybe my\nmemory is faulty (alas, I didn't get the ECC option).\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 20 Sep 2002 11:02:05 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: SCSI Error"
},
{
"msg_contents": "Ensure you don't have termination issues. Make sure your SCSI interface\nis configured correctly for your SCSI environment, especially on matters\nof termination. Make sure you have enough power to your drive and if\npossible, make sure your drives are hung off of distinct power segments\ncoming from your power supply. Of course, make sure you have enough\npower being supplied from your power supply to power all of your drives\nand *equipment* (cards, CPU, fans, etc). Last, be sure your cables are\nin good working order and that they are as far away from the power\nsupply and cables as is possible. SCSI is notorious for suffering from\nRF interference especially on long cables. If all else fails, you may\ntry swapping cables to see if this helps.\n\nIf at all possible, immediately fsck (!!!!don't make it perform any\nrepairs!!!!) and backup! The fsck will let you know if you should\nreasonably expect to have a meaningful backup. Requesting fsck to\nrepair in this situation could actually result in more/worse\n(unrecoverable) corruption. If the fsck does report potential FS\ndamage, accept the fact that some of your data (and resulting backup)\nmay be corrupt, perhaps even unrecoverably so.\n\nI'm using a 7880 here and have never had notable issues save only for\nbugs in the Linux SCSI drivers.\n\n\nGreg\n\n\nOn Fri, 2002-09-20 at 07:08, Ricardo Fogliati wrote:\n> Hiya Lists.... \n> \n> Somebody could help me? 
I am with an error when the Postgresql makes Insert, \n> Delete or Update \n> \n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938856\n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938800\n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938864\n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938872\n> kernel: SCSI disk error : host 2 channel 0 id 0 lun 0 return code = 70000\n> kernel: I/O error: dev 08:08, sector 47938808\n> \n> Version:\n> \n> postgresql-7.2.1-5\n> postgresql-devel-7.2.1-5\n> postgresql-libs-7.2.1-5\n> postgresql-server-7.2.1-5\n> \n> Server:\n> \n> processor : 0\n> vendor_id : GenuineIntel\n> cpu family : 6\n> model : 11\n> model name : Intel(R) Pentium(R) III CPU family 1266MHz\n> stepping : 1\n> cpu MHz : 1258.309\n> cache size : 512 KB\n> fdiv_bug : no\n> hlt_bug : no\n> f00f_bug : no\n> coma_bug : no\n> fpu : yes\n> fpu_exception : yes\n> cpuid level : 2\n> wp : yes\n> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca \n> cmov pat pse36 mmx fxsr sse\n> bogomips : 2510.02\n> \n> Kernel:\n> \n> Linux version 2.4.7-10custom (gcc version 2.96 20000731 (Red Hat Linux 7.1 \n> 2.96-98)) #9 Mon\n> Sep 16 17:50:13 BRT 2002\n> \n> Mem Info\n> total: used: free: shared: buffers: cached:\n> Mem: 789540864 782589952 6950912 81764352 34406400 583122944\n> Swap: 802873344 2957312 799916032\n> MemTotal: 771036 kB\n> MemFree: 6788 kB\n> MemShared: 79848 kB\n> Buffers: 33600 kB\n> Cached: 566568 kB\n> SwapCached: 2888 kB\n> Active: 14476 kB\n> Inact_dirty: 664852 kB\n> Inact_clean: 3576 kB\n> Inact_target: 136 kB\n> HighTotal: 0 kB\n> HighFree: 0 kB\n> LowTotal: 771036 kB\n> LowFree: 6788 kB\n> SwapTotal: 784056 kB\n> SwapFree: 781168 kB\n> NrSwapPages: 195292 
pages\n> \n> pci\n> \n> Bus 2, device 9, function 0:\n> SCSI storage controller: Adaptec 7899P (rev 1).\n> IRQ 10.\n> Master Capable. Latency=96. Min Gnt=40.Max Lat=25.\n> I/O at 0x2100 [0x21ff].\n> Non-prefetchable 64 bit memory at 0xedfff000 [0xedffffff].\n> \n> Swap\n> \n> Filename Type Size Used Priority\n> /dev/sda6 partition 784056 2888 -1\n> \n> Tkanksfull\n> -- \n> Ricardo Fogliati\n> 4 Linux/Vesper\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org",
"msg_date": "20 Sep 2002 11:01:12 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: SCSI Error"
},
{
"msg_contents": "> > the hardware. On the other hand I do believe I saw a message\n> > recently saying that some of the 2.4 series kernels had file system\n> > bugs.\n>\n> I recall problems, offhand, with 2.4.5, 2.4.10, 2.4.11 (which was so\n> broken that you couldn't recover), and 2.4.15. I seem to recall a\n> report on 2.4.12, also. There's a page at\n> <http://www.atnf.csiro.au/people/rgooch/linux/docs/kernel-newsflash.html>\n> that provides some summaries of known problems, and filesystem\n> corruption doesn't show up for all the ones I mentioned, so maybe my\n> memory is faulty (alas, I didn't get the ECC option).\n\nHehehe - those poor sad people who use Linux - try FreeBSD for a change, at\nleast it works ;)\n\nChris\n\n",
"msg_date": "Mon, 23 Sep 2002 10:45:53 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SCSI Error"
}
] |
[
{
"msg_contents": "Hello!\n\ni'm using PostgreSQL 7.2.1 and got strange parse errors..\ncould somebody tell me what's wrong with this timestamp query example?\n\nPostgreSQL said: ERROR: parser: parse error at or near \"date\"\nYour query:\n\nselect timestamp(date '1998-02-24', time '23:07')\n\nexample is from PostgreSQL help and certainly worked in previous versions of\npgsql.. but in 7.2.1 it does not. had anything changed and not been updated\nin pgsql manuals or is it a bug?\n\nthanx for any help\n\nTomas Lehuta\n\n\n",
"msg_date": "Fri, 20 Sep 2002 14:30:08 +0200",
"msg_from": "\"Tomas Lehuta\" <lharp@aurius.sk>",
"msg_from_op": true,
"msg_subject": "timestamp parse error"
},
{
"msg_contents": "On Fri, 20 Sep 2002, Tomas Lehuta wrote:\n\n> Hello!\n>\n> i'm using PostgreSQL 7.2.1 and got strange parse errors..\n> could somebody tell me what's wrong with this timestamp query example?\n>\n> PostgreSQL said: ERROR: parser: parse error at or near \"date\"\n> Your query:\n>\n> select timestamp(date '1998-02-24', time '23:07')\n>\n> example is from PostgreSQL help and certainly worked in previous versions of\n> pgsql.. but in 7.2.1 it does not. had anything changed and not been updated\n> in pgsql manuals or is it a bug?\n\nPresumably it's a manual example that didn't get changed. Timestamp(...)\nis now a specifier for the type with a given precision. You can use\n\"timestamp\"(date '1998-02-24', time '23:07') or datetime math (probably\nsomething like date '1998-02-24' + time '23:07' and possibly a cast)\n\n\n",
"msg_date": "Fri, 20 Sep 2002 07:38:25 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: timestamp parse error"
},
{
"msg_contents": "\"Tomas Lehuta\" <lharp@aurius.sk> writes:\n> could somebody tell me what's wrong with this timestamp query example?\n\n> select timestamp(date '1998-02-24', time '23:07')\n> PostgreSQL said: ERROR: parser: parse error at or near \"date\"\n\n> example is from PostgreSQL help\n\n From where exactly? I don't see any such example in current sources.\n\nAlthough you could make this work by double-quoting the name \"timestamp\"\n(which is a reserved word now, per SQL spec), I'd recommend sidestepping\nthe problem by using the equivalent + operator instead:\n\nregression=# select \"timestamp\"(date '1998-02-24', time '23:07');\n timestamp\n---------------------\n 1998-02-24 23:07:00\n(1 row)\n\nregression=# select date '1998-02-24' + time '23:07';\n ?column?\n---------------------\n 1998-02-24 23:07:00\n(1 row)\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 11:19:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timestamp parse error "
},
{
"msg_contents": "Is there any way to monitor a long running query?\n\nI have stats turned on and I can see my queries, but is there any better \nmeasure of the progress?\n\nThanks,\n-Aaron Held\n\nselect current_query from pg_stat_activity;\ncurrent_query\n\n<IDLE>\n<IDLE>\n<IDLE>\n<IDLE>\n<IDLE> in transaction\nFETCH ALL FROM PgSQL_470AEE94\n<IDLE> in transaction\nselect * from \"Calls\" WHERE \"DurationOfCall\" = 2.5 AND \"DateOfCall\" = \n'7/01/02' AND (\"GroupCode\" = 'MIAMI' OR \"GroupCode\" = 'Salt Lake');\n<IDLE>\n<IDLE>\n<IDLE>\n\n",
"msg_date": "Fri, 20 Sep 2002 11:34:27 -0400",
"msg_from": "Aaron Held <aaron@MetroNY.com>",
"msg_from_op": false,
"msg_subject": "Monitoring a Query"
},
{
"msg_contents": "\nThere is pgmonitor:\n\n\thttp://gborg.postgresql.org/project/pgmonitor\n\n---------------------------------------------------------------------------\n\nAaron Held wrote:\n> Is there any way to monitor a long running query?\n> \n> I have stats turned on and I can see my queries, but is there any better \n> measure of the progress?\n> \n> Thanks,\n> -Aaron Held\n> \n> select current_query from pg_stat_activity;\n> current_query\n> \n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE> in transaction\n> FETCH ALL FROM PgSQL_470AEE94\n> <IDLE> in transaction\n> select * from \"Calls\" WHERE \"DurationOfCall\" = 2.5 AND \"DateOfCall\" = \n> '7/01/02' AND (\"GroupCode\" = 'MIAMI' OR \"GroupCode\" = 'Salt Lake');\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 12:18:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring a Query"
},
{
"msg_contents": "Aaron Held wrote:\n> Is there any way to monitor a long running query?\n> \n> I have stats turned on and I can see my queries, but is there any better \n> measure of the progress?\n\nOh, sorry, you want to know how far the query has progressed. Gee, I\ndon't think there is any easy way to do that. Sorry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 12:19:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring a Query"
},
{
"msg_contents": "Hi all developpers,\n\nThis is just a idea.\n\nHow about making available the MVCC last version number just like oid is\navailable. This would simplify a lot of table design. You know, having\nto add a field \"updated::timestamp\" to detect when a record was updated\nwhile viewing it (a la pgaccess).\n\nThat way, if the version number do not match, one would know that the\nreccord was updated since last retrieved.\n\nWhat do think?\n\nJLL\n",
"msg_date": "Fri, 20 Sep 2002 14:22:37 -0400",
"msg_from": "Jean-Luc Lachance <jllachan@nsd.ca>",
"msg_from_op": false,
"msg_subject": "Getting acces to MVCC version number"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Aaron Held wrote:\n> > Is there any way to monitor a long running query?\n> \n> Oh, sorry, you want to know how far the query has progressed. Gee, I\n> don't think there is any easy way to do that.\n\nWould it be a good idea to add the time that the current query began\nexecution at to pg_stat_activity?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n",
"msg_date": "20 Sep 2002 15:54:45 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring a Query"
},
{
"msg_contents": "Jean-Luc Lachance <jllachan@nsd.ca> writes:\n> How about making available the MVCC last version number just like oid is\n> available. This would simplify a lot of table design. You know, having\n> to add a field \"updated::timestamp\" to detect when a record was updated\n> while viewing it (a la pgaccess).\n> That way, if the version number do not match, one would know that the\n> reccord was updated since last retrieved.\n\n> What do think?\n\nI think it's already there: see xmin and cmin. Depending on your needs,\ntesting xmin might be enough (you'd only need to pay attention to cmin\nif you wanted to notice changes within your own transaction).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 17:23:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Getting acces to MVCC version number "
},
{
"msg_contents": "Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Aaron Held wrote:\n> > > Is there any way to monitor a long running query?\n> > \n> > Oh, sorry, you want to know how far the query has progressed. Gee, I\n> > don't think there is any easy way to do that.\n> \n> Would it be a good idea to add the time that the current query began\n> execution at to pg_stat_activity?\n\nWhat do people think about this? It seems like a good idea to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 21:51:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring a Query"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Neil Conway wrote:\n> \n>>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>>\n>>>Aaron Held wrote:\n>>>\n>>>>Is there any way to monitor a long running query?\n>>>\n>>>Oh, sorry, you want to know how far the query has progressed. Gee, I\n>>>don't think there is any easy way to do that.\n>>\n>>Would it be a good idea to add the time that the current query began\n>>execution at to pg_stat_activity?\n> \n> \n> What do people think about this? It seems like a good idea to me.\n> \n\nMy application marks the start time of each query and I have found it \nvery useful. The users like to see how long each query took, and the \nadmin can take a quick look and see how many queries are running and how \nlong each has been active for. Good for debugging and billing.\n\n-Aaron Held\n\n",
"msg_date": "Mon, 23 Sep 2002 09:24:38 -0400",
"msg_from": "Aaron Held <aaron@MetroNY.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring a Query"
},
{
"msg_contents": "On Sun, Sep 22, 2002 at 09:51:55PM -0400, Bruce Momjian wrote:\n> > \n> > Would it be a good idea to add the time that the current query began\n> > execution at to pg_stat_activity?\n> \n> What do people think about this? It seems like a good idea to me.\n\nOpenACS has a package called \"Developer Support\" that shows you (among\nother things) how long a query took to be executed. Very good to finding \nout slow-running queries that need to be optimized.\n\n-Roberto\n\n-- \n+----| Roberto Mello - http://www.brasileiro.net/ |------+\n+ USU Free Software & GNU/Linux Club - http://fslc.usu.edu/ +\n",
"msg_date": "Mon, 23 Sep 2002 07:47:56 -0600",
"msg_from": "Roberto Mello <rmello@cc.usu.edu>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Monitoring a Query"
},
{
"msg_contents": "It looks like that just timestamps things in its connection pool, that \nis what I do now.\n\nWhat I would like is to know about queries that have not finished yet.\n\n-Aaron\n\nRoberto Mello wrote:\n> On Sun, Sep 22, 2002 at 09:51:55PM -0400, Bruce Momjian wrote:\n> \n>>>Would it be a good idea to add the time that the current query began\n>>>execution at to pg_stat_activity?\n>>\n>>What do people think about this? It seems like a good idea to me.\n> \n> \n> OpenACS has a package called \"Developer Support\" that shows you (among\n> other things) how long a query took to be executed. Very good to finding \n> out slow-running queries that need to be optimized.\n> \n> -Roberto\n> \n\n\n",
"msg_date": "Mon, 23 Sep 2002 10:31:18 -0400",
"msg_from": "Aaron Held <aaron@MetroNY.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query"
},
{
"msg_contents": "Roberto Mello wrote:\n> On Sun, Sep 22, 2002 at 09:51:55PM -0400, Bruce Momjian wrote:\n> > > \n> > > Would it be a good idea to add the time that the current query began\n> > > execution at to pg_stat_activity?\n> > \n> > What do people think about this? It seems like a good idea to me.\n> \n> OpenACS has a package called \"Developer Support\" that shows you (among\n> other things) how long a query took to be executed. Very good to finding \n> out slow-running queries that need to be optimized.\n\n7.3 will have GUC 'log_duration' which will show query duration.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 10:48:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query"
},
{
"msg_contents": "Aaron Held wrote:\n> It looks like that just timestamps things in its connection pool, that \n> is what I do now.\n> \n> What I would like is to know about queries that have not finished yet.\n\nOK, added to TODO:\n\n\t* Add start time to pg_stat_activity\n\nShould we supply the current duration too? That value would change on\neach call. Seems redundant.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 10:49:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, added to TODO:\n> \t* Add start time to pg_stat_activity\n\nIt would be nearly free to include the start time of the current\ntransaction, because we already save that for use by now(). Is\nthat good enough, or do we need start time of the current query?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 11:03:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, added to TODO:\n> > \t* Add start time to pg_stat_activity\n> \n> It would be nearly free to include the start time of the current\n> transaction, because we already save that for use by now(). Is\n> that good enough, or do we need start time of the current query?\n\nCurrent query, I am afraid. We could optimize it so single-query\ntransactions wouldn't need to call that again.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 11:06:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query"
},
{
"msg_contents": "That is great! Thanks for the info.\n\nTom Lane wrote:\n> \n> Jean-Luc Lachance <jllachan@nsd.ca> writes:\n> > How about making available the MVCC last version number just like oid is\n> > available. This would simplify a lot of table design. You know, having\n> > to add a field \"updated::timestamp\" to detect when a record was updated\n> > while viewing it (a la pgaccess).\n> > That way, if the version number do not match, one would know that the\n> > reccord was updated since last retrieved.\n> \n> > What do think?\n> \n> I think it's already there: see xmin and cmin. Depending on your needs,\n> testing xmin might be enough (you'd only need to pay attention to cmin\n> if you wanted to notice changes within your own transaction).\n> \n> regards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 11:47:01 -0400",
"msg_from": "Jean-Luc Lachance <jllachan@nsd.ca>",
"msg_from_op": false,
"msg_subject": "Re: Getting acces to MVCC version number"
},
{
"msg_contents": "Hi all,\n\nI just read it's possible to get the MVCC last version numbers. Is it also\npossible to get the current transaction id? Would it be possible to check\nlater if that transaction has been commited? This would be nice for a distributed\napplication to enforce an \"exactly once\" semantics for transactions (even if\nthere are network related errors while the server sends ack for commiting a\ntransaction).\nAnd if it's possible, how long would that information be valid, i.e. when do\ntransaction id's get reused?\nIf it's not working I will have to implement my own transactions table.\n\nThanks in advance,\nMichael Paesold\n\n\n-- \nWerden Sie mit uns zum \"OnlineStar 2002\"! Jetzt GMX w�hlen -\nund tolle Preise absahnen! http://www.onlinestar.de\n\n",
"msg_date": "Mon, 23 Sep 2002 18:03:54 +0200 (MEST)",
"msg_from": "Michael Paesold <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Getting current transaction id"
},
{
"msg_contents": "On Mon, 23 Sep 2002 11:06:19 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>Tom Lane wrote:\n>> It would be nearly free to include the start time of the current\n>> transaction, because we already save that for use by now(). Is\n>> that good enough, or do we need start time of the current query?\n>\n>Current query, I am afraid. We could optimize it so single-query\n>transactions wouldn't need to call that again.\n\nThis has been discussed before and I know I'm going to get flamed for\nthis, but IMHO having now() (which is a synonym for CURRENT_TIMESTAMP)\nreturn the start time of the current transaction is a bug, or at least\nit is not conforming to the standard.\n\nSQL92 says in 6.8 <datetime value function>:\n\n General Rules\n\n 1) The <datetime value function>s CURRENT_DATE, CURRENT_TIME, and\n CURRENT_TIMESTAMP respectively return the current date, current\n time, and current timestamp [...]\n ^^^^^^^\n\n 3) If an SQL-statement generally contains more than one reference\n ^^^^^^^^^\n to one or more <datetime value function>s, then all such ref-\n erences are effectively evaluated simultaneously. The time of\n evaluation of the <datetime value function> during the execution\n ^^^^^^\n of the SQL-statement is implementation-dependent.\n\nSQL99 says in 6.19 <datetime value function>:\n\n 3) Let S be an <SQL procedure statement> that is not generally\n contained in a <triggered action>. All <datetime value\n function>s that are generally contained, without an intervening\n <routine invocation> whose subject routines do not include an\n SQL function, in <value expression>s that are contained either\n in S without an intervening <SQL procedure statement> or in an\n <SQL procedure statement> contained in the <triggered action>\n of a trigger activated as a consequence of executing S, are\n effectively evaluated simultaneously. 
The time of evaluation of\n a <datetime value function> during the execution of S and its\n activated triggers is implementation-dependent.\n\nI cannot say that I fully understand the second sentence (guess I have\nto read it for another 100 times), but \"during the execution of S\"\nseems to mean \"not before the start and not after the end of S\".\n\nWhat do you think?\n\nServus\n Manfred\n",
"msg_date": "Mon, 23 Sep 2002 19:01:01 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query"
},
{
"msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> This has been discussed before and I know I'm going to get flamed for\n> this, but IMHO having now() (which is a synonym for CURRENT_TIMESTAMP)\n> return the start time of the current transaction is a bug, or at least\n> it is not conforming to the standard.\n\nAs you say, it's been discussed before. We concluded that the spec\ndefines the behavior as implementation-dependent, and therefore we\ncan pretty much do what we want.\n\nIf you want exact current time, there's always timeofday().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 13:05:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query "
},
{
"msg_contents": "On Mon, 23 Sep 2002 13:05:42 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Manfred Koizar <mkoi-pg@aon.at> writes:\n>> This has been discussed before and I know I'm going to get flamed for\n>> this, but IMHO having now() (which is a synonym for CURRENT_TIMESTAMP)\n>> return the start time of the current transaction is a bug, or at least\n>> it is not conforming to the standard.\n>\n>As you say, it's been discussed before.\n\nYes, and I hate to be annoying.\n\n>We concluded that the spec defines the behavior as\n>implementation-dependent,\n\nAFAICT the spec requires the returned value to meet two conditions.\n\nC1: If a statement contains more than one <datetime value function>,\nthey all have to return (maybe different formats of) the same value.\n\nC2: The returned value has to represent a point in time *during* the\nexecution of the SQL-statement.\n\nThe only thing an implementor is free to choose is which point in time\n\"during the execution of the SQL-statement\" is to be returned, i.e. a\ntimestamp in the interval between the start of the statement and the\nfirst time when the value is needed.\n\nThe current implementation only conforms to C1.\n\n>and therefore we can pretty much do what we want.\n\nStart time of the statement, ... of the transaction, ... of the\nsession, ... of the postmaster, ... of the century?\n\nI understand that with subselects, functions, triggers, rules etc. it\nis not easy to implement the specification. If we can't do it now, we\nshould at least add a todo and make clear in the documentation that\nCURRENT_DATE/TIME/TIMESTAMP is not SQL92/99 compliant.\n\nServus\n Manfred\n",
"msg_date": "Mon, 23 Sep 2002 21:02:00 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "\nManfred,\n\n> C2: The returned value has to represent a point in time *during* the\n> execution of the SQL-statement.\n> \n> The only thing an implementor is free to choose is which point in time\n> \"during the execution of the SQL-statement\" is to be returned, i.e. a\n> timestamp in the interval between the start of the statement and the\n> first time when the value is needed.\n> \n> The current implementation only conforms to C1.\n\nI, for one, would judge that the start time of the statement is \"during the \nexecution\"; it would only NOT be \"during the execution\" if it was a value \n*before* the start time of the statement. It's a semantic argument.\n\nThe spec is, IMHO, rather vague on how this would relate to transactions. I \ndo not find it at all inconsitent that Bruce, Thomas, and co. interpreted a \ntransaction to be an extension of an individual SQL statement for this \npurpose (at least, that's what I guess they did).\n\nThus, if you accept the postulates that:\n1) \"During\" a SQL statement includes the start time of the statement, and\n2) A Transaction is the equivalent of a single SQL statement for many \npurposes, \nThen the current behavior is a logical conclusion.\n\nFurther, we could not change that behaviour without breaking many people's \napplications.\n\nIdeally, since we get this question a lot, that a compile-time or \nexecution-time switch to change the behavior of current_timestamp \ncontextually would be nice. We just need someone who;s interested enough in \nwriting one.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 23 Sep 2002 13:36:59 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Josh Berkus wrote:\n> I, for one, would judge that the start time of the statement is \"during the \n> execution\"; it would only NOT be \"during the execution\" if it was a value \n> *before* the start time of the statement. It's a semantic argument.\n> \n> The spec is, IMHO, rather vague on how this would relate to transactions. I \n> do not find it at all inconsitent that Bruce, Thomas, and co. interpreted a \n> transaction to be an extension of an individual SQL statement for this \n> purpose (at least, that's what I guess they did).\n> \n> Thus, if you accept the postulates that:\n> 1) \"During\" a SQL statement includes the start time of the statement, and\n> 2) A Transaction is the equivalent of a single SQL statement for many \n> purposes, \n> Then the current behavior is a logical conclusion.\n> \n> Further, we could not change that behaviour without breaking many people's \n> applications.\n\nI don't see how we can defend returning the start of the transaction as\nthe current_timestamp. In a multi-statement transaction, that doesn't\nseem very current to me. I know there are some advantages to returning\nthe same value for all queries in a transaction, but is that value worth\nreturning such stale time information?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 16:41:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "\nBruce,\n\n> I don't see how we can defend returning the start of the transaction as\n> the current_timestamp. In a multi-statement transaction, that doesn't\n> seem very current to me. I know there are some advantages to returning\n> the same value for all queries in a transaction, but is that value worth\n> returning such stale time information?\n\nThen what *was* the reasoning behind the current behavior?\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \tjosh@agliodbs.com\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Mon, 23 Sep 2002 13:49:27 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Josh Berkus wrote:\n> \n> Bruce,\n> \n> > I don't see how we can defend returning the start of the transaction as\n> > the current_timestamp. In a multi-statement transaction, that doesn't\n> > seem very current to me. I know there are some advantages to returning\n> > the same value for all queries in a transaction, but is that value worth\n> > returning such stale time information?\n> \n> Then what *was* the reasoning behind the current behavior?\n\nI thought the spec required it, but now that I see it doesn't, I don't\nknow why it was done that way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 16:53:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't see how we can defend returning the start of the transaction as\n> the current_timestamp.\n\nHere's an example:\n\nCREATE RULE foo AS ON INSERT TO mytable DO\n( INSERT INTO log1 VALUES (... , now(), ...);\n INSERT INTO log2 VALUES (... , now(), ...) );\n\nI think it's important that these commands store the same timestamp in\nboth log tables (not to mention that any now() being stored into mytable\nitself generate that same timestamp).\n\nIf you scale that up just a little bit, you can devise scenarios where\nsuccessive client-issued commands (within a single transaction) want to\nstore the same timestamp. After all, it's only a minor implementation\ndetail that you chose to fire these logging operations via a rule and\nnot by client-side logic.\n\nIn short, there are plenty of situations where it's critical for\napplication correctness that a series of commands all be able to operate\nwith the same value of now(). I don't think that it's wise for Postgres\nto try to decide where within a transaction it's safe to advance now().\nThat will inevitably break some applications, and it's not obvious what\nthe benefit is.\n\nIn short: if you want exact current time, there's timeofday(). If you\nwant start of transaction time, we've got that. If you want start of\ncurrent statement time, I have two questions: why, and exactly how do\nyou want to define current statement, considering functions, rules,\ntriggers, and all that other stuff that makes it interesting?\n\nISTM that if a client or function wants to record intratransaction\ntimes, it can call timeofday() at the appropriate points for itself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 16:55:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "On Mon, Sep 23, 2002 at 10:48:30AM -0400, Bruce Momjian wrote:\n> > > > \n> > > > Would it be a good idea to add the time that the current query began\n> > > > execution at to pg_stat_activity?\n> > > \n> > > What do people think about this? It seems like a good idea to me.\n> > \n> > OpenACS has a package called \"Developer Support\" that shows you (among\n> > other things) how long a query took to be executed. Very good to finding \n> > out slow-running queries that need to be optimized.\n> \n> 7.3 will have GUC 'log_duration' which will show query duration.\n\nForgive my ignorance here, but what is GUC? And how would I access the\nquery duration?\n\n-Roberto\n\n-- \n+----| Roberto Mello - http://www.brasileiro.net/ |------+\n+ Computer Science Graduate Student, Utah State University +\n+ USU Free Software & GNU/Linux Club - http://fslc.usu.edu/ +\nQ:\tWhat is purple and commutes?\nA:\tA boolean grape.\n",
"msg_date": "Mon, 23 Sep 2002 16:01:16 -0600",
"msg_from": "Roberto Mello <rmello@cc.usu.edu>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Monitoring a Query"
},
{
"msg_contents": "\nI see what you are saying now --- that even single user statements can\ntrigger multiple statements, so you would have to say transaction start\ntime is time the user query starts. I can see how that seems a little\narbitrary. However, don't we have separate paths for user queries and\nqueries sent as part of a rule?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't see how we can defend returning the start of the transaction as\n> > the current_timestamp.\n> \n> Here's an example:\n> \n> CREATE RULE foo AS ON INSERT TO mytable DO\n> ( INSERT INTO log1 VALUES (... , now(), ...);\n> INSERT INTO log2 VALUES (... , now(), ...) );\n> \n> I think it's important that these commands store the same timestamp in\n> both log tables (not to mention that any now() being stored into mytable\n> itself generate that same timestamp).\n> \n> If you scale that up just a little bit, you can devise scenarios where\n> successive client-issued commands (within a single transaction) want to\n> store the same timestamp. After all, it's only a minor implementation\n> detail that you chose to fire these logging operations via a rule and\n> not by client-side logic.\n> \n> In short, there are plenty of situations where it's critical for\n> application correctness that a series of commands all be able to operate\n> with the same value of now(). I don't think that it's wise for Postgres\n> to try to decide where within a transaction it's safe to advance now().\n> That will inevitably break some applications, and it's not obvious what\n> the benefit is.\n> \n> In short: if you want exact current time, there's timeofday(). If you\n> want start of transaction time, we've got that. 
If you want start of\n> current statement time, I have two questions: why, and exactly how do\n> you want to define current statement, considering functions, rules,\n> triggers, and all that other stuff that makes it interesting?\n> \n> ISTM that if a client or function wants to record intratransaction\n> times, it can call timeofday() at the appropriate points for itself.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 20:11:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Roberto Mello wrote:\n> On Mon, Sep 23, 2002 at 10:48:30AM -0400, Bruce Momjian wrote:\n> > > > > \n> > > > > Would it be a good idea to add the time that the current query began\n> > > > > execution at to pg_stat_activity?\n> > > > \n> > > > What do people think about this? It seems like a good idea to me.\n> > > \n> > > OpenACS has a package called \"Developer Support\" that shows you (among\n> > > other things) how long a query took to be executed. Very good to finding \n> > > out slow-running queries that need to be optimized.\n> > \n> > 7.3 will have GUC 'log_duration' which will show query duration.\n> \n> Forgive my ignorance here, but what is GUC? And how would I access the\n> query duration?\n\nGUC is postgresql.conf and SET commands. They are variables that can be\nset.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 20:27:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I see what you are saying now --- that even single user statements can\n> trigger multiple statements, so you would have to say transaction start\n> time is time the user query starts. I can see how that seems a little\n> arbitrary. However, don't we have separate paths for user queries and\n> queries sent as part of a rule?\n\nWe could use \"time of arrival of the latest client command string\",\nif we wanted to do something like this. My point is that that very\narbitrarily assumes that those are the significant points within a\ntransaction, and that the client has no need to send multiple commands\nthat want to insert the same timestamp into different tables. This is\nan unwarranted assumption about the client's control structure, IMHO.\n\nA possible compromise is to dissociate now() and current_timestamp,\nallowing the former to be start of transaction and the latter to be\nstart of client command.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 20:32:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I see what you are saying now --- that even single user statements can\n> > trigger multiple statements, so you would have to say transaction start\n> > time is time the user query starts. I can see how that seems a little\n> > arbitrary. However, don't we have separate paths for user queries and\n> > queries sent as part of a rule?\n> \n> We could use \"time of arrival of the latest client command string\",\n> if we wanted to do something like this. My point is that that very\n> arbitrarily assumes that those are the significant points within a\n> transaction, and that the client has no need to send multiple commands\n> that want to insert the same timestamp into different tables. This is\n> an unwarranted assumption about the client's control structure, IMHO.\n> \n> A possible compromise is to dissociate now() and current_timestamp,\n> allowing the former to be start of transaction and the latter to be\n> start of client command.\n\nI was thinking 'transaction_timestamp' for the transaction start time, and\ncurrent_timestamp for the statement start time. I would equate now()\nwith current_timestamp.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 20:37:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Mon, Sep 23, 2002 at 09:02:00PM +0200, Manfred Koizar wrote:\n> On Mon, 23 Sep 2002 13:05:42 -0400, Tom Lane <tgl@sss.pgh.pa.us>\n> >We concluded that the spec defines the behavior as\n> >implementation-dependent,\n> \n> AFAICT the spec requires the returned value to meet two conditions.\n> \n> C1: If a statement contains more than one <datetime value function>,\n> they all have to return (maybe different formats of) the same value.\n> \n> C2: The returned value has to represent a point in time *during* the\n> execution of the SQL-statement.\n> \n> The only thing an implementor is free to choose is which point in time\n> \"during the execution of the SQL-statement\" is to be returned, i.e. a\n> timestamp in the interval between the start of the statement and the\n> first time when the value is needed.\n\nWell, what I would suggest is that when you wrap several statements into a\nsingle transaction with begin/commit, the whole lot could be considered a\nsingle statement (since they form an atomic transaction so in a sense they\nare all executed simultaneously). And hence Postgresql is perfectly\ncompliant.\n\nMy second point would be: what is the point of a timestamp that keeps\nchanging during a transaction? If you want that, there are other functions\nthat serve that purpose.\n\n> I understand that with subselects, functions, triggers, rules etc. it\n> is not easy to implement the specification. If we can't do it now, we\n> should at least add a todo and make clear in the documentation that\n> CURRENT_DATE/TIME/TIMESTAMP is not SQL92/99 compliant.\n\nThe current definition is, I would say, the most useful definition. Can you\ngive an example where your definition would be more useful?\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n",
"msg_date": "Tue, 24 Sep 2002 11:19:12 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "\nTom, Bruce,\n\n> > A possible compromise is to dissociate now() and current_timestamp,\n> > allowing the former to be start of transaction and the latter to be\n> > start of client command.\n> \n> I was thinking 'transaction_timestamp' for the transaction start time, and\n> current_timestamp for the statement start time. I would equate now()\n> with current_timestamp.\n\nMay I point out that this will break compatibility for those used to the \ncurrent behavior?\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 23 Sep 2002 18:53:36 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Josh Berkus wrote:\n> \n> Tom, Bruce,\n> \n> > > A possible compromise is to dissociate now() and current_timestamp,\n> > > allowing the former to be start of transaction and the latter to be\n> > > start of client command.\n> > \n> > I was thinking 'transaction_timestamp' for the transaction start time, and\n> > current_timestamp for the statement start time. I would equate now()\n> > with current_timestamp.\n> \n> May I point out that this will break compatibility for those used to the \n> current behavior?\n\nI am not saying we have to make that change. My point is that our\ncurrent behavior may not be the most intuitive, and that most people may\nprefer a change. Any such change would be documented in the release\nnotes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 22:01:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Bruce Momjian dijo: \n\n> Roberto Mello wrote:\n\n> > Forgive my ignorance here, but what is GUC? And how would I access the\n> > query duration?\n> \n> GUC is postgresql.conf and SET commands. They are variables that can be\n> set.\n\nJust for the record, GUC is an acronym for \"Grand Unified\nConfiguration\".\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)\n\n",
"msg_date": "Mon, 23 Sep 2002 22:11:48 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query"
},
{
"msg_contents": "Bruce Momjian writes:\n> My point is that our current behavior may not be the most intuitive, and\n> that most people may prefer a change.\n\nI would prefer a change.\n-- \nJohn Hasler\njohn@dhh.gt.org\nDancing Horse Hill\nElmwood, Wisconsin\n",
"msg_date": "23 Sep 2002 21:20:52 -0500",
"msg_from": "John Hasler <john@dhh.gt.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "John Hasler wrote:\n> Bruce Momjian writes:\n> > My point is that our current behavior may not be the most intuitive, and\n> > that most people may prefer a change.\n> \n> I would prefer a change.\n\nYes, I guess that is my point, that we want to make transaction _and_\nstatement timestamp values available, but most people are going to use\ncurrent_timestamp, and most people are going to assume it is statement\ntime, not transaction time.\n\nCan I add TODO items for this:\n\n\to Make CURRENT_TIMESTAMP/now() return statement start time\n\to Add TRANSACTION_TIMESTAMP to return transaction start time\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 23:17:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Bruce Momjian dijo: \n> \n> > Roberto Mello wrote:\n> \n> > > Forgive my ignorance here, but what is GUC? And how would I access the\n> > > query duration?\n> > \n> > GUC is postgresql.conf and SET commands. They are variables that can be\n> > set.\n> \n> Just for the record, GUC is an acronym for \"Grand Unified\n> Configuration\".\n\nThanks. I couldn't remember that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 23:18:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Monitoring a Query"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was thinking 'transaction_timestamp' for the transaction start time, and\n> current_timestamp for the statement start time. I would equate now()\n> with current_timestamp.\n\nSo you want to both (a) invent even more nonstandard syntax than we\nalready have, and (b) break as many traditional-Postgres applications\nas you possibly can?\n\n'transaction_timestamp' has no reason to live. It's not in the spec.\nAnd AFAIK the behavior of now() has been well-defined since the\nbeginning of Postgres. If you want to change 'current_timestamp' to\nconform to a rather debatable reading of the spec, then fine --- but\nkeep your hands off of now().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 23:35:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was thinking 'transaction_timestamp' for the transaction start time, and\n> > current_timestamp for the statement start time. I would equate now()\n> > with current_timestamp.\n> \n> So you want to both (a) invent even more nonstandard syntax than we\n> already have, and (b) break as many traditional-Postgres applications\n> as you possibly can?\n\nNo, but I would like to see you stop makeing condescending replies to\nemails. How is that!\n\n> 'transaction_timestamp' has no reason to live. It's not in the spec.\n> And AFAIK the behavior of now() has been well-defined since the\n> beginning of Postgres. If you want to change 'current_timestamp' to\n> conform to a rather debatable reading of the spec, then fine --- but\n> keep your hands off of now().\n\nOh, really. When you get down off your chair we can vote on it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 23:37:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can I add TODO items for this:\n> \to Make CURRENT_TIMESTAMP/now() return statement start time\n> \to Add TRANSACTION_TIMESTAMP to return transaction start time\n\nI object to both of those as phrased. If you have already unilaterally\ndetermined the design of this feature change, then go ahead and put that\nin. But I'd suggest\n\n\to Revise current-time functions to allow access to statement\n\t start time\n\nwhich doesn't presuppose the vote about how to do it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 23:44:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can I add TODO items for this:\n> > \to Make CURRENT_TIMESTAMP/now() return statement start time\n> > \to Add TRANSACTION_TIMESTAMP to return transaction start time\n> \n> I object to both of those as phrased. If you have already unilaterally\n> determined the design of this feature change, then go ahead and put that\n> in. But I'd suggest\n> \n> \to Revise current-time functions to allow access to statement\n> \t start time\n> \n> which doesn't presuppose the vote about how to do it.\n\nOK, I am still just throwing out ideas. I am not sure we even have\nenough people who want statement_timestamp to put it in TODO. I do think\nwe have a standards issue.\n\nMy personal opinion is that most people think current_timestamp and\nnow() are statement start time, not transaction start time. In the past\nwe have told them the standard requires that but now I think we are not\neven sure if that is correct.\n\nSo, I have these concerns:\n\n\tour CURRENT_TIMESTAMP may not be standards compliant\n\teven if it is, it is probably not returning the value most people want\n\tmost people don't know it is returning the transaction start time\n\nSo, we can just throw the TODO item you mentioned above with a question\nmark, or we can try to figure out what to return for CURRENT_TIMESTAMP,\nnow(), and perhaps create a TRANSACTION_TIMESTAMP.\n\nSo, do people want to discuss it or should we just throw it in TODO with\na question mark?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 23:52:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Tue, 24 Sep 2002 11:19:12 +1000, Martijn van Oosterhout\n<kleptog@svana.org> wrote:\n>Well, what I would suggest is that when you wrap several statements into a\n>single transaction with begin/commit, the whole lot could be considered a\n>single statement (since they form an atomic transaction so in a sense they\n>are all executed simultaneously).\n\nThe people who wrote the specification knew about transactions. If\nthey had wanted what you describe above, they would have written:\n\n 3) If a transaction generally contains more than one reference\n to one or more <datetime value function>s, then all such ref-\n erences are effectively evaluated simultaneously. The time of\n evaluation of the <datetime value function> during the execution\n of the transaction is implementation-dependent.\n\nBut they wrote \"SQL-statement\", not \"transaction\".\n\n>And hence Postgresql is perfectly compliant.\n\nI'm not so sure.\n\n>The current definition is, I would say, the most useful definition. Can you\n>give an example where your definition would be more useful?\n\nI did not write the standard, I'm only reading it. I have no problem\nwith an implementation that deviates from the standard \"because we\nknow better\". But we should users warn about this fact and not tell\nthem it is compliant.\n\nServus\n Manfred\n",
"msg_date": "Tue, 24 Sep 2002 10:33:51 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Mon, 23 Sep 2002 13:36:59 -0700, Josh Berkus <josh@agliodbs.com>\nwrote:\n>I, for one, would judge that the start time of the statement is \"during the \n>execution\"; it would only NOT be \"during the execution\" if it was a value \n>*before* the start time of the statement. It's a semantic argument.\n\nJosh, you're right, I meant closed interval.\n\n>Further, we could not change that behaviour without breaking many people's \n>applications.\n>\n>Ideally, since we get this question a lot, that a compile-time or \n>execution-time switch to change the behavior of current_timestamp \n>contextually would be nice.\n\nYes, GUC!\n\n>We just need someone who;s interested enough in \n>writing one.\n\nFirst we need someone who decyphers SQL99's wording.\n\nServus\n Manfred\n",
"msg_date": "Tue, 24 Sep 2002 11:16:20 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Mon, 23 Sep 2002 16:55:48 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>Here's an example:\n>\n>CREATE RULE foo AS ON INSERT TO mytable DO\n>( INSERT INTO log1 VALUES (... , now(), ...);\n> INSERT INTO log2 VALUES (... , now(), ...) );\n>\n>I think it's important that these commands store the same timestamp in\n>both log tables (not to mention that any now() being stored into mytable\n>itself generate that same timestamp).\n\nI agree. SQL99 mentions this requirement for triggers and I think we\ncan apply it to rules as well.\n\nHere is another example:\n\nBEGIN;\nINSERT INTO foo VALUES (..., CURRENT_TIMESTAMP, ...);\n-- wait a few seconds\nINSERT INTO foo VALUES (..., CURRENT_TIMESTAMP, ...);\nCOMMIT;\n\nPlease don't ask me, why I would want that, but the standard demands\nthe timestamps to be different.\n\n>After all, it's only a minor implementation\n>detail that you chose to fire these logging operations via a rule and\n>not by client-side logic.\n\nNo, it's fundamentally different whether you do something in one\nSQL-statment or per a sequence of statements.\n\nServus\n Manfred\n",
"msg_date": "Tue, 24 Sep 2002 11:37:30 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "On Mon, 23 Sep 2002 23:35:13 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>If you want to change 'current_timestamp' to\n>conform to a rather debatable reading of the spec, [...]\n\nWell the spec may be debatable, but could you please explain why my\nreading of the spec is debatable. The spec says \"during the execution\nof the SQL-statement\". You know English is not my first language, but\nas far as I have learned \"during\" does not mean \"at any time before\".\n\nServus\n Manfred\n",
"msg_date": "Tue, 24 Sep 2002 11:44:42 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> On Mon, 23 Sep 2002 13:36:59 -0700, Josh Berkus <josh@agliodbs.com>\n> wrote:\n>> Ideally, since we get this question a lot, that a compile-time or \n>> execution-time switch to change the behavior of current_timestamp \n>> contextually would be nice.\n\n> Yes, GUC!\n\nI think a GUC variable is overkill, in fact potentially dangerous\n(what if it's been changed without your app noticing)? I'm fine with\nchanging current_timestamp to be start-of-current-interactive-command,\nthough I'd not want to try to chop it more finely than that, for the\nreasons already discussed. But I strongly feel that we should leave\nthe historical behavior of now() alone. There is no spec-based argument\nfor changing now(), since it isn't in the spec, and its behavior has\nbeen set *and documented* in Postgres since Berkeley days.\n\nIf we leave now() alone then there's no need to create another\nnon-spec-compliant syntax like 'transaction_timestamp', either.\n(I really don't want to see us do that, because without parens\nit would mean making a new, not-in-the-spec fully-reserved word.)\n\nBTW, as long as we are dorking with the current-time family, does\nanyone want to vote for changing timeofday() to return a timestamptz\ninstead of a text string? There's no good argument except slavish\nbackward compatibility for having it return text, and we seem to be\nquite willing to ignore backwards compatibility in this thread ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Sep 2002 09:26:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": ">>>>> \"Martijn\" == Martijn van Oosterhout <kleptog@svana.org> writes:\n\n Martijn> Well, what I would suggest is that when you wrap several\n Martijn> statements into a single transaction with begin/commit,\n Martijn> the whole lot could be considered a single statement\n Martijn> (since they form an atomic transaction so in a sense they\n Martijn> are all executed simultaneously). And hence Postgresql is\n Martijn> perfectly compliant.\n\nFWIW, and not that I am an Oracle fan :-), Oracle seems to interpret\nthis the same way when using a \"select sysdate from dual\" inside a\ntransaction.\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nroland@astrofoto.org Forest Hills, NY 11375\n",
"msg_date": "24 Sep 2002 10:55:41 -0400",
"msg_from": "Roland Roberts <roland@astrofoto.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Tom,\n\n> If we leave now() alone then there's no need to create another\n> non-spec-compliant syntax like 'transaction_timestamp', either.\n> (I really don't want to see us do that, because without parens\n> it would mean making a new, not-in-the-spec fully-reserved word.)\n\nSo, if I've got this straight:\n\n-- current_timestamp will return the timestamp for the beginning of the\nSQL statement.\n\n-- now() will return the timestamp for the beginning of the\ntransaction.\n\n-- timeofday() will return the timestamp of the exact time the function\nis called.\n\n... thus changing only current_timestamp.\n\nThis looks fine to me, as a search-and-replace on current_timestamp is\neasy. However, we need to do a better job of warning people about the\nchange than we did with interval() to \"interval\"(). \n\nActually, can I make the proposal that *any* change that breaks\nbackward compatibility be mentioned in both the new version\nannouncement and on the download page? This would prevent a lot of\ngrief. If I'm kept informed of these changes, I'll be happy to write\nup a user-friendly announcement/instructions on how to cope with the\nchange.\n\n> BTW, as long as we are dorking with the current-time family, does\n> anyone want to vote for changing timeofday() to return a timestamptz\n> instead of a text string? There's no good argument except slavish\n> backward compatibility for having it return text, and we seem to be\n> quite willing to ignore backwards compatibility in this thread ...\n\nNo, I don't see any reason to do this. It's not like timeofday() is a\nparticularly logical name, anyway. Why not introduce a new function,\nrightnow(), that returns timestamptz?\n\nBetter yet, how about we introduce a parameter to now()? 
Example:\n\nnow() or now('transaction') returns the transaction timestamp.\nnow('statement') returns the statement timestamp\nnow('immediate') returns the timestamp at the exact time the function\nis called.\n\nThis would seem to me much more consistent than having 3 different\ntime-calls, whose names have nothing to do with the difference between\nthem. And it has the advantage of not breaking backward compatibility.\n\nWe could introduce the new version of now() in 7.4, encourage everyone\nto use it instead of other timestamp calls, and then in 7.5 change the\nbehavior of current_timestamp for SQL92 compliance.\n\n-Josh Berkus\n\n",
"msg_date": "Tue, 24 Sep 2002 08:05:59 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
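The scoping in Josh's hypothetical now('transaction' / 'statement' / 'immediate') proposal can be modeled outside the database. A minimal Python sketch (the Clock class and its scope names are invented for illustration; this is not a shipped PostgreSQL API):

```python
import time

class Clock:
    """Toy model of the three timestamp scopes debated in this thread."""

    def __init__(self):
        self._tx = None    # frozen at BEGIN, like now() / transaction time
        self._stmt = None  # refreshed per statement, like the proposed CURRENT_TIMESTAMP

    def begin(self):
        # BEGIN: freeze the transaction timestamp; the first statement
        # starts at the same instant.
        self._tx = time.time()
        self._stmt = self._tx

    def new_statement(self):
        # Each newly arriving statement gets its own timestamp.
        self._stmt = time.time()

    def now(self, scope="transaction"):
        if scope == "transaction":
            return self._tx
        if scope == "statement":
            return self._stmt
        if scope == "immediate":   # like timeofday(): true current time
            return time.time()
        raise ValueError(scope)

clock = Clock()
clock.begin()
t1 = clock.now("transaction")
time.sleep(0.05)                  # client goes off to lunch
clock.new_statement()
t2 = clock.now("transaction")     # unchanged: frozen for the whole transaction
s = clock.now("statement")        # moved: arrival time of the second statement
```

The driver code at the bottom is the same shape as the BEGIN / SELECT / wait / SELECT experiments run against other databases later in the thread.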
{
"msg_contents": "On Tue, Sep 24, 2002 at 10:33:51AM +0200, Manfred Koizar wrote:\n> \n> The people who wrote the specification knew about transactions. If\n> they had wanted what you describe above, they would have written:\n> \n> 3) If a transaction generally contains more than one reference\n> to one or more <datetime value function>s, then all such ref-\n> erences are effectively evaluated simultaneously. The time of\n> evaluation of the <datetime value function> during the execution\n> of the transaction is implementation-dependent.\n> \n> But they wrote \"SQL-statement\", not \"transaction\".\n> \n> >And hence Postgresql is perfectly compliant.\n> \n> I'm not so sure.\n> \n> >The current definition is, I would say, the most useful definition. Can you\n> >give an example where your definition would be more useful?\n> \n> I did not write the standard, I'm only reading it. I have no problem\n> with an implementation that deviates from the standard \"because we\n> know better\". But we should users warn about this fact and not tell\n> them it is compliant.\n\nAt first, I also found the idea of now() freezing during a transaction\nodd. But now I seems the right thing to do - I can't really come up with\na use-case for current_timestamp to vary. \n\nFor the relational algebra and transactional logic purists out there,\nhaving current_timetamp be a fixed transaction time reinforces the\n'atomicity' of a transaction - it's _supposed_ to happen all at once,\nas far as the rest of the system is concerned. Many parts of the the\nstandard deviate from the ideals, however, probably due to the desire\nof those with existing software to make it 'standards compliant' by\nbending the standard, instead of fixing the software. 
There are places\nin SQL92, especially, where if you know the exact feature set of some of\nthe big DBs from that era, you can imagine the conversation that lead\nto inserting specific ambiguities into the document.\n\nAs you've probably noticed, SQL92 (and '99, from what I've look at in it)\nare _not_ examples of the clearest, most pristine english in the world.\nI sometimes wonder if the committee was actually an early attempt at\nmachine generated natural language, then I realize if that were true,\nit would be clearer and more self-consistent. ;-)\n\nAll this is a very longwinded way for me to say leave now() as transaction\ntime, and get Peter to interpret this passage, to see what should happen\nwith current_timestamp. He seems to be one of the best at disentagling\nthe standards verbiage.\n\nRoss\n\n\n\n",
"msg_date": "Tue, 24 Sep 2002 10:07:51 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Tue, Sep 24, 2002 at 10:55:41AM -0400, Roland Roberts wrote:\n> >>>>> \"Martijn\" == Martijn van Oosterhout <kleptog@svana.org> writes:\n> \n> Martijn> Well, what I would suggest is that when you wrap several\n> Martijn> statements into a single transaction with begin/commit,\n> Martijn> the whole lot could be considered a single statement\n> Martijn> (since they form an atomic transaction so in a sense they\n> Martijn> are all executed simultaneously). And hence Postgresql is\n> Martijn> perfectly compliant.\n> \n> FWIW, and not that I am an Oracle fan :-), Oracle seems to interpret\n> this the same way when using a \"select sysdate from dual\" inside a\n> transaction.\n\nOh, interesting datapoint. Let me get this clear - on oracle, the\nequivalent of:\n\nBEGIN;\nSELECT current_timestamp;\n<go off to lunch, come back>\nSELECT current_timestamp;\nEND;\n\nwill give two identical timestamps?\n\nRoss\n",
"msg_date": "Tue, 24 Sep 2002 10:10:03 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Tue, Sep 24, 2002 at 08:05:59AM -0700, Josh Berkus wrote:\n> \n> This looks fine to me, as a search-and-replace on current_timestamp is\n> easy. However, we need to do a better job of warning people about the\n> change than we did with interval() to \"interval\"(). \n> \n> Actually, can I make the proposal that *any* change that breaks\n> backward compatibility be mentioned in both the new version\n> announcement and on the download page? This would prevent a lot of\n> grief. If I'm kept informed of these changes, I'll be happy to write\n> up a user-friendly announcement/instructions on how to cope with the\n> change.\n\nI'd suggest we (for values of we that probably resolve to Bruce\nor a Bruce triggered Josh ;-) start a new doc, right now, for\n7.4_USER_VISIBLE_CHANGES, or some other, catchy title. In it, document,\nwith example SQL snippets, if need be, the change from previous behavior,\n_when the patch is committed_. In fact, y'all could be hardnosed about\nnot accepting a user visible syntax changing patch without it touching\nthis file. Such a document would be invaluable for database migration.\n\nOn another note, this discussion is happening on GENERAL and SQL, but\nis getting pretty technical - should someone more it to HACKERS to get\ninput from developers who don't hang out here?\n\nRoss\n",
"msg_date": "Tue, 24 Sep 2002 10:19:49 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Josh Berkus writes:\n> now() or now('transaction') returns the transaction timestamp.\n> now('statement') returns the statement timestamp now('immediate') returns\n> the timestamp at the exact time the function is called.\n\nI like that.\n\nIMHO \"the exact time the function is called\" is what most people would\nexpect to get from now(), but it's too late for that.\n-- \nJohn Hasler\njohn@dhh.gt.org\nDancing Horse Hill\nElmwood, Wisconsin\n",
"msg_date": "24 Sep 2002 10:27:01 -0500",
"msg_from": "John Hasler <john@dhh.gt.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> So, if I've got this straight:\n> [ snip ]\n> ... thus changing only current_timestamp.\n\nYeah, that's more or less what I was thinking. The argument for\nchanging current_timestamp seems to be really just spec compliance;\nthat doesn't apply to now() or timeofday().\n\n> Better yet, how about we introduce a parameter to now()? Example:\n> now() or now('transaction') returns the transaction timestamp.\n> now('statement') returns the statement timestamp\n> now('immediate') returns the timestamp at the exact time the function\n> is called.\n\nI like this.\n\n> We could introduce the new version of now() in 7.4, encourage everyone\n> to use it instead of other timestamp calls, and then in 7.5 change the\n> behavior of current_timestamp for SQL92 compliance.\n\nI'd be inclined to just do it; we have not been very good about\nfollowing through on multi-version sequences of changes. And the\nfolks who want a standard-compliant current_timestamp aren't going\nto want to migrate to now('statement') instead ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Sep 2002 12:00:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": ">>>>> \"Ross\" == Ross J Reedstrom <reedstrm@rice.edu> writes:\n\n Ross> Oh, interesting datapoint. Let me get this clear - on\n Ross> oracle, the equivalent of:\n\nWell, I've never gone off to lunch in the middle, but in Oracle 7, I\nhad transactions which definitely took as much as a few minutes to\ncomplete where the timestamp on every row committed was the same.\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nroland@astrofoto.org Forest Hills, NY 11375\n",
"msg_date": "24 Sep 2002 17:48:21 -0400",
"msg_from": "Roland Roberts <roland@astrofoto.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Roland Roberts wrote:\n> >>>>> \"Ross\" == Ross J Reedstrom <reedstrm@rice.edu> writes:\n> \n> Ross> Oh, interesting datapoint. Let me get this clear - on\n> Ross> oracle, the equivalent of:\n> \n> Well, I've never gone off to lunch in the middle, but in Oracle 7, I\n> had transactions which definitely took as much as a few minutes to\n> complete where the timestamp on every row committed was the same.\n\nCan you run a test:\n\n\tBEGIN;\n\tSELECT CURRENT_TIMESTAMP;\n\twait 5 seconds\n\tSELECT CURRENT_TIMESTAMP;\n\nAre the two times the same?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 24 Sep 2002 17:56:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Tue, 24 Sep 2002 17:56:51 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>Can you run a test:\n>\n>\tBEGIN;\n>\tSELECT CURRENT_TIMESTAMP;\n>\twait 5 seconds\n>\tSELECT CURRENT_TIMESTAMP;\n>\n>Are the two times the same?\n\nMS SQL 7:\n\tbegin transaction\n\tinsert into tst values (CURRENT_TIMESTAMP)\n\t-- wait\n\tinsert into tst values (CURRENT_TIMESTAMP)\n\tcommit\n\tselect * from tst\n\n\tt \n\t--------------------------- \n\t2002-09-24 09:49:58.777\n\t2002-09-24 09:50:14.100\n\nInterbase 6:\n\tSQL> select current_timestamp from rdb$database;\n\n\t=========================\n\t2002-09-24 22:30:13.0000\n\n\tSQL> select current_timestamp from rdb$database;\n\n\t=========================\n\t2002-09-24 22:30:18.0000\n\n\tSQL> commit;\n\nServus\n Manfred\n",
"msg_date": "Wed, 25 Sep 2002 08:54:56 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Michael Paesold <mpaesold@gmx.at> writes:\n> I just read it's possible to get the MVCC last version numbers. Is it also\n> possible to get the current transaction id?\n\nWell, there's the brute force way: insert a tuple in some table and look\nat its xmin. Offhand I don't think we provide a SQL function to read\ncurrent transaction id, though it'd surely be a trivial addition.\n\n> Would it be possible to check\n> later if that transaction has been commited? This would be nice for a distributed\n> application to enforce an \"exactly once\" semantics for transactions (even if\n> there are network related errors while the server sends ack for commiting a\n> transaction).\n\nAgain, it's not an exported operation, though you could add a SQL function\nthat called TransactionIdDidCommit().\n\n> And if it's possible, how long would that information be valid, i.e. when do\n> transaction id's get reused?\n\nThat would be the tricky part. The ID would be reused after 4 billion\ntransactions, which is long enough that you probably don't care ... but\nthe segment of the transaction log that has the associated commit bit\nwill be recycled as soon as the server has no internal use for it\nanymore, which could be as early as the next database-wide VACUUM.\nIf you tried to call TransactionIdDidCommit() after that, you'd get the\ninfamous \"can't open pg_clog/nnnn\" error.\n\n> If it's not working I will have to implement my own transactions table.\n\nThat's what I'd recommend. Transaction IDs are internal to the database\nand are not designed for users to rely on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 13:12:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Getting current transaction id "
},
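Tom's suggestion of an application-side transactions table can be sketched with any SQL engine. Below is an illustrative Python/SQLite model (the table names and the client-generated UUID scheme are invented for the example): the marker row is inserted in the same transaction as the real work, so both commit or roll back together, and a retry after a lost commit acknowledgement is detected rather than applied twice.

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("CREATE TABLE applied_txns (txn_id TEXT PRIMARY KEY)")
conn.commit()

def apply_once(conn, txn_id, delta):
    """Apply the update only if txn_id has not been recorded yet.

    The marker row and the work commit atomically, so if the client
    never saw the commit acknowledgement it can simply retry: either
    the earlier attempt committed (and the PRIMARY KEY rejects the
    marker) or it did not (and the retry applies cleanly).
    """
    try:
        with conn:  # one transaction: marker + work commit together
            conn.execute("INSERT INTO applied_txns VALUES (?)", (txn_id,))
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE id = 1",
                (delta,),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # already applied; the earlier commit succeeded

tid = str(uuid.uuid4())
first = apply_once(conn, tid, 25)   # original attempt
retry = apply_once(conn, tid, 25)   # network error -> client retries
balance = conn.execute(
    "SELECT balance FROM accounts WHERE id = 1"
).fetchone()[0]
```

This keeps the exactly-once bookkeeping entirely in user tables, which survives VACUUM and transaction-ID wraparound, unlike peeking at internal XIDs.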
{
"msg_contents": "Tom Lane wrote:\n\n\n> Michael Paesold <mpaesold@gmx.at> writes:\n[snip]\n> > If it's not working I will have to implement my own transactions table.\n> \n> That's what I'd recommend. Transaction IDs are internal to the database\n> and are not designed for users to rely on.\n> \n> regards, tom lane\n\nWell, after reading your explanation I agree with you that it is better\nto have my own transaction table. I appreciate your detailed response.\n\nThanks very much!\n\nBest Regards,\nMichael Paesold\n\n\n",
"msg_date": "Wed, 25 Sep 2002 22:21:24 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Re: Getting current transaction id "
},
{
"msg_contents": "\nSQL> create table rbr_foo (a date);\n\nTable created.\n\nSQL> begin\n 2 insert into rbr_foo select sysdate from dual;\n[...wait about 10 seconds...]\n 3 insert into rbr_foo select sysdate from dual;\n 4 end;\n 5 /\n\nPL/SQL procedure successfully completed.\n\nSQL> select * from rbr_foo;\n\nA\n---------------------\nSEP 27, 2002 12:57:27\nSEP 27, 2002 12:57:27\n\nNote that, as near as I can tell, Oracle 8 does NOT have timestamp or\ncurrent_timestamp. Online docs say both are present in Oracle 9i.\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nroland@astrofoto.org Forest Hills, NY 11375\n",
"msg_date": "27 Sep 2002 13:29:03 -0400",
"msg_from": "Roland Roberts <roland@astrofoto.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "\nOK, we have two db's returning statement start time, and Oracle 8 not\nhaving CURRENT_TIMESTAMP.\n\nHave we agreed to make CURRENT_TIMESTAMP statement start, and now()\ntransaction start? Is this an open item or TODO item?\n\n---------------------------------------------------------------------------\n\nManfred Koizar wrote:\n> On Tue, 24 Sep 2002 17:56:51 -0400 (EDT), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >Can you run a test:\n> >\n> >\tBEGIN;\n> >\tSELECT CURRENT_TIMESTAMP;\n> >\twait 5 seconds\n> >\tSELECT CURRENT_TIMESTAMP;\n> >\n> >Are the two times the same?\n> \n> MS SQL 7:\n> \tbegin transaction\n> \tinsert into tst values (CURRENT_TIMESTAMP)\n> \t-- wait\n> \tinsert into tst values (CURRENT_TIMESTAMP)\n> \tcommit\n> \tselect * from tst\n> \n> \tt \n> \t--------------------------- \n> \t2002-09-24 09:49:58.777\n> \t2002-09-24 09:50:14.100\n> \n> Interbase 6:\n> \tSQL> select current_timestamp from rdb$database;\n> \n> \t=========================\n> \t2002-09-24 22:30:13.0000\n> \n> \tSQL> select current_timestamp from rdb$database;\n> \n> \t=========================\n> \t2002-09-24 22:30:18.0000\n> \n> \tSQL> commit;\n> \n> Servus\n> Manfred\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 28 Sep 2002 23:28:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Sat, Sep 28, 2002 at 11:28:03PM -0400, Bruce Momjian wrote:\n> \n> OK, we have two db's returning statement start time, and Oracle 8 not\n> having CURRENT_TIMESTAMP.\n> \n> Have we agreed to make CURRENT_TIMESTAMP statement start, and now()\n> transaction start? Is this an open item or TODO item?\n\nWell, I'd rather it didn't change at all. IMHO it's a feature, not a bug. In\nany case, if it does get changed we'll have to go through the documentation\nand work out whether we mean current_timestamp or now(). I think most people\nactually want now().\n\nFortunatly where I work we only use now() so it won't really matter too\nmuch. Is there a compelling reason to change?\n\n> ---------------------------------------------------------------------------\n> \n> Manfred Koizar wrote:\n> > On Tue, 24 Sep 2002 17:56:51 -0400 (EDT), Bruce Momjian\n> > <pgman@candle.pha.pa.us> wrote:\n> > >Can you run a test:\n> > >\n> > >\tBEGIN;\n> > >\tSELECT CURRENT_TIMESTAMP;\n> > >\twait 5 seconds\n> > >\tSELECT CURRENT_TIMESTAMP;\n> > >\n> > >Are the two times the same?\n> > \n> > MS SQL 7:\n> > \tbegin transaction\n> > \tinsert into tst values (CURRENT_TIMESTAMP)\n> > \t-- wait\n> > \tinsert into tst values (CURRENT_TIMESTAMP)\n> > \tcommit\n> > \tselect * from tst\n> > \n> > \tt \n> > \t--------------------------- \n> > \t2002-09-24 09:49:58.777\n> > \t2002-09-24 09:50:14.100\n> > \n> > Interbase 6:\n> > \tSQL> select current_timestamp from rdb$database;\n> > \n> > \t=========================\n> > \t2002-09-24 22:30:13.0000\n> > \n> > \tSQL> select current_timestamp from rdb$database;\n> > \n> > \t=========================\n> > \t2002-09-24 22:30:18.0000\n> > \n> > \tSQL> commit;\n> > \n> > Servus\n> > Manfred\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n",
"msg_date": "Sun, 29 Sep 2002 13:47:06 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Martijn van Oosterhout wrote:\n> On Sat, Sep 28, 2002 at 11:28:03PM -0400, Bruce Momjian wrote:\n> > \n> > OK, we have two db's returning statement start time, and Oracle 8 not\n> > having CURRENT_TIMESTAMP.\n> > \n> > Have we agreed to make CURRENT_TIMESTAMP statement start, and now()\n> > transaction start? Is this an open item or TODO item?\n> \n> Well, I'd rather it didn't change at all. IMHO it's a feature, not a bug. In\n> any case, if it does get changed we'll have to go through the documentation\n> and work out whether we mean current_timestamp or now(). I think most people\n> actually want now().\n\nWell, I think we have to offer statement start time somewhere, and it\nseems the standard probably requires that. Two other databases do it\nthat way. Oracle doesn't have CURRENT_TIMESTAMP in 8.X. Can anyone\ntest on 9.X?\n\n> Fortunatly where I work we only use now() so it won't really matter too\n> much. Is there a compelling reason to change?\n\nYes, it will split now() and CURRENT_TIMESTAMP. I personally would be\nhappy with STATEMENT_TIMESTAMP, but because the standard requires it we\nmay just have to fix CURRENT_TIMESTAMP.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 28 Sep 2002 23:51:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Sat, Sep 28, 2002 at 11:51:32PM -0400, Bruce Momjian wrote:\n> Martijn van Oosterhout wrote:\n> > Well, I'd rather it didn't change at all. IMHO it's a feature, not a bug. In\n> > any case, if it does get changed we'll have to go through the documentation\n> > and work out whether we mean current_timestamp or now(). I think most people\n> > actually want now().\n> \n> Well, I think we have to offer statement start time somewhere, and it\n> seems the standard probably requires that. Two other databases do it\n> that way. Oracle doesn't have CURRENT_TIMESTAMP in 8.X. Can anyone\n> test on 9.X?\n\nHmm, well having a statement start time could be conceivably useful.\n\n> > Fortunatly where I work we only use now() so it won't really matter too\n> > much. Is there a compelling reason to change?\n> \n> Yes, it will split now() and CURRENT_TIMESTAMP. I personally would be\n> happy with STATEMENT_TIMESTAMP, but because the standard requires it we\n> may just have to fix CURRENT_TIMESTAMP.\n\nWell, my vote would be for STATEMENT_TIMESTAMP. Is there really no other\ndatabase that does it the way we do? Perhaps it could be matched with a\nTRANSACTION_TIMESTAMP and we can sort out CURRENT_TIMESTAMP some other way.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n",
"msg_date": "Sun, 29 Sep 2002 14:10:07 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@svana.org> writes:\n> On Sat, Sep 28, 2002 at 11:51:32PM -0400, Bruce Momjian wrote:\n>> Yes, it will split now() and CURRENT_TIMESTAMP. I personally would be\n>> happy with STATEMENT_TIMESTAMP, but because the standard requires it we\n>> may just have to fix CURRENT_TIMESTAMP.\n\n> Well, my vote would be for STATEMENT_TIMESTAMP.\n\nOne problem with inventing STATEMENT_TIMESTAMP is that (if spelled that\nway, without parens) it would have to become a fully-reserved keyword,\nthus possibly breaking some applications that use that name now.\n\nBut the real point, I think, is that the folks pushing for this think\nthat the standard requires CURRENT_TIMESTAMP to be statement timestamp.\nInventing some other keyword isn't going to satisfy them.\n\nI don't personally find the \"it's required by the spec\" argument\ncompelling, because the spec specifically says that the exact behavior\nis implementation-dependent --- so anyone who assumes CURRENT_TIMESTAMP\nwill behave as start-of-statement timestamp is going to have portability\nproblems anyway. Oracle didn't seem to find the argument compelling\neither; at last report they have no statement-timestamp function.\n\nI'd be happier with the whole thing if anyone had exhibited a convincing\nuse-case for statement timestamp. So far I've not seen any actual\nexamples of situations that are not better served by either transaction\ntimestamp or true current time. And the spec is perfectly clear that\nCURRENT_TIMESTAMP does not mean true current time...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Sep 2002 00:35:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "\nTom,\n\n> I'd be happier with the whole thing if anyone had exhibited a convincing\n> use-case for statement timestamp. So far I've not seen any actual\n> examples of situations that are not better served by either transaction\n> timestamp or true current time. And the spec is perfectly clear that\n> CURRENT_TIMESTAMP does not mean true current time...\n\nAre we still planning on putting the three different versions of now() on the \nTODO? I.e.,\nnow('transaction'),\nnow('statement'), and\nnow('immediate')\nWith now() = now('transaction')?\n\nI still think it's a good idea, provided that we have some easy means to \ndetermine now('statement').\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Sun, 29 Sep 2002 12:43:45 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Josh Berkus wrote:\n> \n> Tom,\n> \n> > I'd be happier with the whole thing if anyone had exhibited a convincing\n> > use-case for statement timestamp. So far I've not seen any actual\n> > examples of situations that are not better served by either transaction\n> > timestamp or true current time. And the spec is perfectly clear that\n> > CURRENT_TIMESTAMP does not mean true current time...\n> \n> Are we still planning on putting the three different versions of now() on the \n> TODO? I.e.,\n> now('transaction'),\n> now('statement'), and\n> now('immediate')\n> With now() = now('transaction')?\n> \n> I still think it's a good idea, provided that we have some easy means to \n> determine now('statement').\n\nI did a little more research on CURRENT_TIMESTAMP. I read the Oracle\ndocs, and while they mention it, they don't say if the date is xact,\nstatement, or timeofday. They do mention it was only added in their\nnewest product, 9.X, so it isn't surpising no one is using it.\n\nI also researched the SQL99 standards and found a much more specific\ndefinition:\n\n 3) Let S be an <SQL procedure statement> that is not generally\n contained in a <triggered action>. All <datetime value\n function>s that are generally contained, without an intervening\n <routine invocation> whose subject routines do not include an\n SQL function, in <value expression>s that are contained either\n in S without an intervening <SQL procedure statement> or in an\n <SQL procedure statement> contained in the <triggered action>\n of a trigger activated as a consequence of executing S, are\n effectively evaluated simultaneously. The time of evaluation of\n a <datetime value function> during the execution of S and its\n activated triggers is implementation-dependent.\n\nThey basically seem to be saying that CURRENT_TIMESTAMP has to be the\nsame for all triggers as it is for the submitted SQL statement. When\nthey say \"the time of evaluation ... 
is implementation-dependent\" they\nmean that is can be the beginning of the statement, or the end of the\nstatement. In fact, you can make a strong argument that it should be\nthe statement end time that is the proper time, but for implementation\nreasons, it is certainly easier to make it start.\n\nNow, they are _not_ saying the statement can't have the same time as\nother statements in the transaction, but I don't see why they would\nexplicitly have to state that. They say statement, so I think we need\nto follow that if we want to be standard-compliant. We already have two\nother databases who are doing this timing at statement level.\n\nIf we change CURRENT_TIMESTAMP to statement time, I don't think we need\nnow(\"\"), but if we don't change it, I think we do --- somehow we should\nallow users to access statement time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 29 Sep 2002 16:38:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Bruce,\n\n> If we change CURRENT_TIMESTAMP to statement time, I don't think we need\n> now(\"\"), but if we don't change it, I think we do --- somehow we should\n> allow users to access statement time.\n\nI'd argue that we need the 3 kinds of now() regardless, just to limit user \nconfusion. If we set things up as:\n\nnow() = transaction time\ncurrent_timestamp = statement time\ntimeofday() = exact time\n\nThat does give users access to all 3 timestamps, but using a competely \nnon-intuitive nomenclature. It's likely that the three types of now() would \njust be pointers to other time functions, but would provide nomenative \nclarity.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Sun, 29 Sep 2002 13:47:49 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Josh Berkus wrote:\n> Bruce,\n> \n> > If we change CURRENT_TIMESTAMP to statement time, I don't think we need\n> > now(\"\"), but if we don't change it, I think we do --- somehow we should\n> > allow users to access statement time.\n> \n> I'd argue that we need the 3 kinds of now() regardless, just to limit user \n> confusion. If we set things up as:\n> \n> now() = transaction time\n> current_timestamp = statement time\n> timeofday() = exact time\n> \n> That does give users access to all 3 timestamps, but using a competely \n> non-intuitive nomenclature. It's likely that the three types of now() would \n> just be pointers to other time functions, but would provide nomenative \n> clarity.\n\nI agree, having now() as a central place for time information is a good\nidea. Maybe we need to vote on these issues.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 29 Sep 2002 17:27:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Sun, 29 Sep 2002, Bruce Momjian wrote:\n\nApologies in advance if there is a more appropriate list. \n\nWe are currently developing a database to host some complicated, XMl\nlayered data. We have chosen postgres because of its ability to store\nmultidimensional arrays. We feel that using these will allow us to\nsimplify the database structure considerably by storing some data in\nmultidimensional arrays. \n\nHowever, we currently have some dissenters who believe that using the\nmultidimensional arrays will make queries slower and unneccesarily\ncomplicated. Its hard for us to evaluate in advance because none of us\nhave much experience with postgres (we are web based and have relied on\nMySQL for most projects up to this point). \n\nI have several questions related to the scenario above. \n\n1) are SQL queries slower when extracting data from multidimensional\narrays\n2) are table joins more difficult or unneccesarily complicated\n3) can you do selects on only a portion of a multidimensional array. That\nis, if you were storing multilanguage titles in a two dimensional array, \n\n[en], \"english title\"\n[fr], \"french title\"\n\ncould you select where title[0] = 'en'\n\nI know these may sound like terribily stupid questions. but we need some\nquick guidance before proceeding with a schema that relies on these\nadvanced data features of postgres\n\ntia\n\nmike\n\n\n___\n This communication is intended for the use of the recipient to whom it\n is addressed, and may contain confidential, personal, and or privileged\n information. Please contact us immediately if you are not the intended\n recipient of this communication, and do not copy, distribute, or take\n action relying on it. Any communications received in error, or\n subsequent reply, should be deleted or destroyed.\n---\n",
"msg_date": "Sun, 29 Sep 2002 18:12:55 -0600 (MDT)",
"msg_from": "Mike Sosteric <mikes@athabascau.ca>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays"
},
{
"msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> Are we still planning on putting the three different versions of now() on the\n> TODO? I.e.,\n> now('transaction'),\n> now('statement'), and\n> now('immediate')\n> With now() = now('transaction')?\n\nI have no objection to doing that. What seems to be contentious is\nwhether we should change the current behavior of CURRENT_TIMESTAMP.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Sep 2002 23:53:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Now, they are _not_ saying the statement can't have the same time as\n> other statements in the transaction, but I don't see why they would\n> explicitly have to state that.\n\nAllow me to turn that around: given that they clearly do NOT state that,\nhow can you argue that \"the spec requires it\"? AFAICS the spec does not\nrequire it. In most places they are considerably more explicit than\nthis about stating what is required.\n\n> We already have two other databases who are doing this timing at\n> statement level.\n\nThe behavior of CURRENT_TIMESTAMP is clearly stated by the spec to be\nimplementation-dependent. We are under no compulsion to follow any\nspecific other implementation. If we were going to follow some other\nlead, I'd look to Oracle first...\n\n> If we change CURRENT_TIMESTAMP to statement time, I don't think we need\n> now(\"\"), but if we don't change it, I think we do --- somehow we should\n> allow users to access statement time.\n\nI have no problem with providing a function to access statement time,\nand now('something') seems a reasonable spelling of that function.\nBut I think the argument that we should change our historical behavior\nof CURRENT_TIMESTAMP is very weak.\n\nOne reason why I have a problem with the notion that the spec requires\nCURRENT_TIMESTAMP to mean \"time of arrival of the current interactive\ncommand\" (which is the only specific definition I've seen mentioned\nhere) is that the spec does not truly have a notion of interactive\ncommand to begin with. AFAICT the spec's model of command execution\nis ecpg-like: you have commands embedded in a calling language with\nall sorts of opportunities for pre-planning, pre-execution, etc.\nThe notion of command arrival time is extremely fuzzy in this model.\nIt could very well be the time you compiled the ecpg application, or\nthe time you started the application running.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Sep 2002 00:36:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "On Sun, Sep 29, 2002 at 18:12:55 -0600,\n Mike Sosteric <mikes@athabascau.ca> wrote:\n> On Sun, 29 Sep 2002, Bruce Momjian wrote:\n> \n> 3) can you do selects on only a portion of a multidimensional array. That\n> is, if you were storing multilanguage titles in a two dimensional array, \n> \n> [en], \"english title\"\n> [fr], \"french title\"\n> \n> could you select where title[0] = 'en'\n\nIt is unusual to want to store arrays in a database. Normally you want to\nuse additional tables instead. For example multilanguage titles is something\nI would expect to be in a table that had a column referencing back to\nanother table defining the object a title was for, a column with the\ntitle and a column with the language.\n",
"msg_date": "Mon, 30 Sep 2002 07:29:26 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays"
},
{
"msg_contents": "On Mon, 30 Sep 2002, Bruno Wolff III wrote:\n\n> > 3) can you do selects on only a portion of a multidimensional array. That\n> > is, if you were storing multilanguage titles in a two dimensional array, \n> > \n> > [en], \"english title\"\n> > [fr], \"french title\"\n> > \n> > could you select where title[0] = 'en'\n> \n> It is unusual to want to store arrays in a database. Normally you want to\n> use additional tables instead. For example multilanguage titles is something\n> I would expect to be in a table that had a column referencing back to\n> another table defining the object a title was for, a column with the\n> title and a column with the language.\n> \n\nThe chances are very very good that in 99% of the cases we'd only ever\nhave a single title. multiple titles would be rare. and, to make it worse,\nthere are several instances of this where you need a table but its seems\noverkill for the odd 1% time when you actually need teh extra row.\n\nof course, the there'd be a language lookup table.\n\nwhat about the speed and query issue?\nm\n\n\n___\n This communication is intended for the use of the recipient to whom it\n is addressed, and may contain confidential, personal, and or privileged\n information. Please contact us immediately if you are not the intended\n recipient of this communication, and do not copy, distribute, or take\n action relying on it. Any communications received in error, or\n subsequent reply, should be deleted or destroyed.\n---\n",
"msg_date": "Mon, 30 Sep 2002 06:38:56 -0600 (MDT)",
"msg_from": "Mike Sosteric <mikes@athabascau.ca>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays"
},
{
"msg_contents": "On Mon, 30 Sep 2002, Bruno Wolff III wrote:\n\n>\n> It is unusual to want to store arrays in a database. Normally you want to\n> use additional tables instead. For example multilanguage titles is something\n> I would expect to be in a table that had a column referencing back to\n> another table defining the object a title was for, a column with the\n> title and a column with the language.\n\nI think arrays are one of the cool features of postgres\n(along with gist indexes).\n\nHere are some common uses:\n\n- Tree representation (the genealogical from child to ancestors approach)\n- Storing of polynomial formulae of arbitary degree\n\ncheckout the intarray package in contrib for further info.\n\nI think pgsql arrays provide a natural solution to certain problems\nwhere it fits.\n\n\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n",
"msg_date": "Mon, 30 Sep 2002 16:18:54 +0300 (EEST)",
"msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays"
},
{
"msg_contents": "On Mon, Sep 30, 2002 at 06:38:56 -0600,\n Mike Sosteric <mikes@athabascau.ca> wrote:\n> On Mon, 30 Sep 2002, Bruno Wolff III wrote:\n> \n> The chances are very very good that in 99% of the cases we'd only ever\n> have a single title. multiple titles would be rare. and, to make it worse,\n> there are several instances of this where you need a table but its seems\n> overkill for the odd 1% time when you actually need teh extra row.\n> \n> of course, the there'd be a language lookup table.\n> \n> what about the speed and query issue?\n\nThe book or movie or whatever table should have an index on something\n(say bookid). Then make an index on the title table on bookid. This\nmakes getting the titles for a specific book fairly efficient.\n\nI think using a simpler design (i.e. tables in preference to arrays)\nwill make doing the project easier. This may override any speed up\nyou get using arrays.\n",
"msg_date": "Mon, 30 Sep 2002 08:57:33 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays"
},
{
"msg_contents": "Mike Sosteric <mikes@athabascau.ca> writes:\n> could you select where title[0] = 'en'\n\nYou certainly could ... but bear in mind that there's no convenient way\nto make such a query be indexed, at present. So any values that you\nactually want to use as search keys had better be in their own fields.\n\nNow, if you are just using this as an extra search condition that picks\none row out of a small number that are identified by another WHERE\nclause, then it's good enough to index for the other clause, and so the\nlack of an index for title[0] isn't an issue. In this case, with only\na small number of possible values for title[0], it seems that an index\nwouldn't be helpful anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Sep 2002 10:42:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays "
},
{
"msg_contents": "Mike,\n\n> We are currently developing a database to host some complicated, XMl\n> layered data. We have chosen postgres because of its ability to store\n> multidimensional arrays. We feel that using these will allow us to\n> simplify the database structure considerably by storing some data in\n> multidimensional arrays. \n\nHmmm ... I'm curious; what kind of data do you feel could be\n*simplified* by multi-dimensional arrays? \n\n> However, we currently have some dissenters who believe that using the\n> multidimensional arrays will make queries slower and unneccesarily\n> complicated. \n\nThey're correct, especially about the latter.\n\n> 1) are SQL queries slower when extracting data from multidimensional\n> arrays\n\nYes, but this is fixable; see the Intarray package in /contrib.\n\n> 2) are table joins more difficult or unneccesarily complicated\n\nYes.\n\n> 3) can you do selects on only a portion of a multidimensional array.\n\nYes.\n\n> That\n> is, if you were storing multilanguage titles in a two dimensional\n> array, \n> \n> [en], \"english title\"\n> [fr], \"french title\"\n> \n> could you select where title[0] = 'en'\n\nYes.\n\n> I know these may sound like terribily stupid questions. but we need\n> some\n> quick guidance before proceeding with a schema that relies on these\n> advanced data features of postgres\n\nThe problem you will be facing is that Arrays are one of the\nfundamentally *Non-Relational* features that Postgresql supports for a\nlimited set of specialized purposes (mostly buffer tables, procedures,\nand porting from MySQL). As such, incorporating arrays into any kind\nof complex schema will drive you to drink ... and is 95% likely more\neasily done through tables and sub-tables, in any case. 
\n\nLet's take your example of \"title\", and say we wanted to use it in a\njoin:\n\nSELECT movie.name, movie.show_date, movie.title_lang, title.translation\nFROM movies JOIN title_langs ON (\n\tmovie.title_lang[1] = title_langs.lang OR movie.title_lang[2] =\ntitle_langs.lang OR movie.title_lang[3] = title_langs.lang ... )\n\n... as you can see, the join is extremely painful. Let alone\nconstructing a query like \"Select all movies with titles only in\nEnglish and French and one other language.\" (try it, really)\n\nThen there's the not insignificant annoyance of getting data into and\nout of multi-dimensional arrays, which must constantly be parsed into\ntext strings. And the fact that you will have to keep track, in your\nmiddleware code, of what the ordinal numbers of arrays mean, since\narray elements are fundamentally ordered. (BTW, Postgres arrays begin\nat 1, not 0)\n\nNow, I know at least one person who is using arrays to store scientific\ndata. However, that data arrives in his lab in the form of matrices,\nand is not used for joins or query criteria beyond a simple \"where\"\nclause.\n\nAs such, I'd recommend one of two approaches for you:\n\n1) Post some of your schema ideas here, and let us show you how they\nare better done relationally. The relational data model has 30 years\nof thought behind it -- it can solve a lot of problems.\n\n2) Shift over to an XML database or a full-blown OODB (like Cache').\n\nGood luck.\n\n-Josh Berkus\n\n\n\n",
"msg_date": "Mon, 30 Sep 2002 08:54:31 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "On 30 Sep 2002 at 8:54, Josh Berkus wrote:\n\n> As such, I'd reccommend one of two approaches for you:\n> \n> 1) Post some of your schema ideas here, and let us show you how they\n> are better done relationally. The relational data model has 30 years\n> of thought behind it -- it can solve a lot of problems.\n\nMike,\n\nJust in case you or others think Josh is some crazed lunatic[1] who \ndoesn't know what he's talking about, I support his views on this \ntopic. Avoid arrays. Normalize your data.\n\n[1] - Actually, I don't think I know anything about Josh, except that \nhe's right about normalizing your data.\n-- \nDan Langille\nI'm looking for a computer job:\nhttp://www.freebsddiary.org/dan_langille.php\n\n",
"msg_date": "Mon, 30 Sep 2002 11:59:12 -0400",
"msg_from": "\"Dan Langille\" <dan@langille.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "Dan Langille wrote:\n> On 30 Sep 2002 at 8:54, Josh Berkus wrote:\n> \n> > As such, I'd reccommend one of two approaches for you:\n> > \n> > 1) Post some of your schema ideas here, and let us show you how they\n> > are better done relationally. The relational data model has 30 years\n> > of thought behind it -- it can solve a lot of problems.\n> \n> Mike,\n> \n> Just in case you or others think Josh is some crazed lunatic[1] who \n> doesn't know what he's talking about, I support his views on this \n> topic. Avoid arrays. Normalize your data.\n> \n> [1] - Actually, I don't think I know anything about Josh, except that \n> he's right about normalizing your data.\n\nYes, arrays have a very small window of usefulness, but the window does\nexist, so we haven't removed them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 30 Sep 2002 12:09:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "On 30 Sep 2002 at 12:09, Bruce Momjian wrote:\n\n> Dan Langille wrote:\n> > On 30 Sep 2002 at 8:54, Josh Berkus wrote:\n> > \n> > > As such, I'd reccommend one of two approaches for you:\n> > > \n> > > 1) Post some of your schema ideas here, and let us show you how they\n> > > are better done relationally. The relational data model has 30 years\n> > > of thought behind it -- it can solve a lot of problems.\n> > \n> > Mike,\n> > \n> > Just in case you or others think Josh is some crazed lunatic[1] who \n> > doesn't know what he's talking about, I support his views on this \n> > topic. Avoid arrays. Normalize your data.\n> > \n> > [1] - Actually, I don't think I know anything about Josh, except that \n> > he's right about normalizing your data.\n> \n> Yes, arrays have a very small window of usefulness, but the window does\n> exist, so we haven't removed them.\n\nI do not advocate removing them. I do advocate data normalization. \nLet's say it's a matter of Do The Right Thing(tm) unless you know \nwhat you're doing.\n-- \nDan Langille\nI'm looking for a computer job:\nhttp://www.freebsddiary.org/dan_langille.php\n\n",
"msg_date": "Mon, 30 Sep 2002 12:10:29 -0400",
"msg_from": "\"Dan Langille\" <dan@langille.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Now, they are _not_ saying the statement can't have the same time as\n> > other statements in the transaction, but I don't see why they would\n> > explicitly have to state that.\n> \n> Allow me to turn that around: given that they clearly do NOT state that,\n> how can you argue that \"the spec requires it\"? AFAICS the spec does not\n> require it. In most places they are considerably more explicit than\n> this about stating what is required.\n\nI just looked at the SQL99 spec again:\n\n 3) Let S be an <SQL procedure statement> that is not generally\n contained in a <triggered action>. All <datetime value\n function>s that are generally contained, without an intervening\n <routine invocation> whose subject routines do not include an\n SQL function, in <value expression>s that are contained either\n in S without an intervening <SQL procedure statement> or in an\n <SQL procedure statement> contained in the <triggered action>\n of a trigger activated as a consequence of executing S, are\n effectively evaluated simultaneously. The time of evaluation of\n a <datetime value function> during the execution of S and its\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n activated triggers is implementation-dependent.\n\nNotice the part I highlighted. The time returned is\nimplementation-dependent \"during the execution of S\". Now, if we do:\n\n\tBEGIN;\n\tSELECT CURRENT_TIMESTAMP;\n\tSELECT CURRENT_TIMESTAMP;\n\nthe time currently returned for the second query is _not_ during the\nduration of S (S being an SQL procedure statement) so I don't see how we\ncan be viewed as spec-compliant.\n\n> > We already have two other databases who are doing this timing at\n> > statement level.\n> \n> The behavior of CURRENT_TIMESTAMP is clearly stated by the spec to be\n> implementation-dependent. We are under no compulsion to follow any\n> specific other implementation. 
If we were going to follow some other\n> lead, I'd look to Oracle first...\n\nOnly \"implementation-dependent\" during the execution of the statement. \nWe can't just return the session start time or 1970-01-01 for every\ninvocation of CURRENT_TIMESTAMP.\n\n> > If we change CURRENT_TIMESTAMP to statement time, I don't think we need\n> > now(\"\"), but if we don't change it, I think we do --- somehow we should\n> > allow users to access statement time.\n> \n> I have no problem with providing a function to access statement time,\n> and now('something') seems a reasonable spelling of that function.\n> But I think the argument that we should change our historical behavior\n> of CURRENT_TIMESTAMP is very weak.\n\nHard to see how it is \"very weak\". What do you base that on? \nEverything I have seen looks pretty strong that we are wrong in our\ncurrent implementation.\n\n> One reason why I have a problem with the notion that the spec requires\n> CURRENT_TIMESTAMP to mean \"time of arrival of the current interactive\n> command\" (which is the only specific definition I've seen mentioned\n> here) is that the spec does not truly have a notion of interactive\n> command to begin with. AFAICT the spec's model of command execution\n> is ecpg-like: you have commands embedded in a calling language with\n> all sorts of opportunities for pre-planning, pre-execution, etc.\n> The notion of command arrival time is extremely fuzzy in this model.\n> It could very well be the time you compiled the ecpg application, or\n> the time you started the application running.\n\nThe spec says \"during the execution of S\" so that is what I think we\nhave to follow.\n\nHopefully we will get an Oracle 9 tester soon.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 30 Sep 2002 12:20:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Dan,\n\n> Just in case you or others think Josh is some crazed lunatic[1] who \n> doesn't know what he's talking about, I support his views on this \n> topic. Avoid arrays. Normalize your data.\n\nAnd just because I'm a crazed lunatic, that doesn't mean that I don't\nknow what I'm talking about.\n\nUm. I mean, \"Even if I were a crazed lunatic, that wouldn't mean that\nI don't know what I'm talking about.\"\n\n<grin>\n\n-Josh \"Relational Mania\" Berkus\n",
"msg_date": "Mon, 30 Sep 2002 09:30:33 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "On Mon, 30 Sep 2002, Josh Berkus wrote:\n\n\nI have a very good sense of the strengths of relational databases. But\nthey are also limited when it comes to object orientaed data (like XML\nrecords). I though arrays would be a way to simply the complexity you get\nwhen you try and map objects to relations. \n\nso a couple more questions then\n\nIs Cache open source?\nare the XML databases that are evolved and sophisticated enough to use in\nproduction environments. \n\nm\n\n> of thought behind it -- it can solve a lot of problems.\n> \n> 2) Shift over to an XML database or a full-blown OODB (like Cache').\n> \n> Good luck.\n> \n> -Josh Berkus\n> \n> \n> \n> \n\nMike Sosteric <mikes@athabascau.ca> Managing Editor, EJS <http://www.sociology.org/>\nDepartment of Global and Social Analysis Executive Director, ICAAP <http://www.icaap.org/>\nAthabasca University Cell: 1 780 909 1418\nSimon Fraser University Adjunct Professor \n\t\t\t\t\t Masters of Publishing Program \n--\nThis troubled planet is a place of the most violent contrasts. \nThose that receive the rewards are totally separated from those who\nshoulder the burdens. It is not a wise leadership - Spock, \"The Cloud Minders.\"\n\n\n___\n This communication is intended for the use of the recipient to whom it\n is addressed, and may contain confidential, personal, and or privileged\n information. Please contact us immediately if you are not the intended\n recipient of this communication, and do not copy, distribute, or take\n action relying on it. Any communications received in error, or\n subsequent reply, should be deleted or destroyed.\n---\n",
"msg_date": "Mon, 30 Sep 2002 11:04:48 -0600 (MDT)",
"msg_from": "Mike Sosteric <mikes@athabascau.ca>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "\nMike,\n\n> I have a very good sense of the strengths of relational databases. But\n> they are also limited when it comes to object orientaed data (like XML\n> records). I though arrays would be a way to simply the complexity you get\n> when you try and map objects to relations. \n\nIn my experience, most XML records are, in fact, simple tree structures that \nare actually easy to represent in SQL. But I don't know about yours.\n\nCertainly the translation of XML --> SQL Tree Structure is no more complex \nthan XML --> Array, that I can see.\n\n> Is Cache open source?\n\nNo. It's a proprietary, and probably very expensive, database. There are no \nopen source OODBs that I know of, partly because of the current lack of \ninternational standards for OODBs. \n\n> are the XML databases that are evolved and sophisticated enough to use in\n> production environments. \n\nI don't know. The last time I evaluated XML databases was a year ago, when \nthere was nothing production-quality in existence. But I don't know what \nthe situation is now.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 30 Sep 2002 10:18:48 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "On Mon, 30 Sep 2002, Josh Berkus wrote:\n\nthanks for this. we will stick with the relational model. \n\nm\n\n> \n> Mike,\n> \n> > I have a very good sense of the strengths of relational databases. But\n> > they are also limited when it comes to object orientaed data (like XML\n> > records). I though arrays would be a way to simply the complexity you get\n> > when you try and map objects to relations. \n> \n> In my experience, most XML records are, in fact, simple tree structures that \n> are actually easy to represent in SQL. But I don't know about yours.\n> \n> Certainly the translation of XML --> SQL Tree Structure is no more complex \n> than XML --> Array, that I can see.\n> \n> > Is Cache open source?\n> \n> No. It's a proprietary, and probably very expensive, database. There are no \n> open source OODBs that I know of, partly because of the current lack of \n> international standards for OODBs. \n> \n> > are the XML databases that are evolved and sophisticated enough to use in\n> > production environments. \n> \n> I don't know. The last time I evaluated XML databases was a year ago, when \n> there was nothing production-quality in existence. But I don't know what \n> the situation is now.\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\nMike Sosteric <mikes@athabascau.ca> Managing Editor, EJS <http://www.sociology.org/>\nDepartment of Global and Social Analysis Executive Director, ICAAP <http://www.icaap.org/>\nAthabasca University Cell: 1 780 909 1418\nSimon Fraser University Adjunct Professor \n\t\t\t\t\t Masters of Publishing Program \n--\nThis troubled planet is a place of the most violent contrasts. 
\nThose that receive the rewards are totally separated from those who\nshoulder the burdens. It is not a wise leadership - Spock, \"The Cloud Minders.\"\n\n\n___\n This communication is intended for the use of the recipient to whom it\n is addressed, and may contain confidential, personal, and or privileged\n information. Please contact us immediately if you are not the intended\n recipient of this communication, and do not copy, distribute, or take\n action relying on it. Any communications received in error, or\n subsequent reply, should be deleted or destroyed.\n---\n",
"msg_date": "Mon, 30 Sep 2002 11:24:19 -0600 (MDT)",
"msg_from": "Mike Sosteric <mikes@athabascau.ca>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "\nMike,\n\n> thanks for this. we will stick with the relational model. \n\nHey, don't make your decision entirely based on my advice. Do some \nresearch! I'm just responding \"off the cuff\" to your questions.\n\nIf you do take the relational approach, post some sample problems here and \npeople can help you with how to represent XML data relationally.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 30 Sep 2002 10:29:34 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Notice the part I highlighted. The time returned is\n> implementation-dependent \"during the execution of S\". Now, if we do:\n\n> \tBEGIN;\n> \tSELECT CURRENT_TIMESTAMP;\n> \tSELECT CURRENT_TIMESTAMP;\n\n> the time currently returned for the second query is _not_ during the\n> duration of S (S being an SQL procedure statement)\n\nNot so fast. What is an \"SQL procedure statement\"?\n\nOur interactive commands do not map real well to the spec's definitions.\nConsider for example SQL92 section 4.17:\n\n 4.17 Procedures\n\n A <procedure> consists of a <procedure name>, a sequence of <pa-\n rameter declaration>s, and a single <SQL procedure statement>.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n A <procedure> in a <module> is invoked by a compilation unit as-\n sociated with the <module> by means of a host language \"call\"\n statement that specifies the <procedure name> of the <procedure>\n and supplies a sequence of parameter values corresponding in number\n and in <data type> to the <parameter declaration>s of the <proce-\n dure>. A call of a <procedure> causes the <SQL procedure statement>\n that it contains to be executed.\n\nThe only thing you can easily map this onto in Postgres is stored\nfunctions; your reading would then say that each Postgres function call\nrequires its own evaluation of current_timestamp, which I think we are\nall agreed would be a disastrous interpretation.\n\nIt would be pretty easy to make the case that an ECPG module represents\na \"procedure\" in the spec's meaning, in which case it is *necessary* for\nspec compliance that the ECPG module be able to execute all its commands\nwith the same value of current_timestamp. 
This would look like a series\nof interactive commands to the backend.\n\nSo I do not think that the spec provides clear support for your position.\nThe only thing that is really clear is that there is a minimum unit\nof execution in which current_timestamp is not supposed to change.\nIt does not clearly define any maximum unit; and it is even less clear\nthat our interactive commands should be equated to \"SQL procedure\nstatement\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Sep 2002 13:59:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "On Mon, 30 Sep 2002, Josh Berkus wrote:\n\nDon't worry. \n\nOur biggest problem is that each XML data entry, say \n\n<title=en>This is the title</title>\n\nhas an language attribute. if there are, say 67 seperate items, each with\nmultiple languages, then the comlexity of the table structure skyrockets\nbecause you have to allow for multiple titles, multiple names, multiple\neverything. \n\nthe resulting relational model is icky to say the least. The question, is\nhow to simplify that. I had thought arrays would help because you can\nstore the multiple language strings in a single table along with other\nrecords..\n\nany ideas?\n\nm\n\n> \n> Mike,\n> \n> > thanks for this. we will stick with the relational model. \n> \n> Hey, don't make your decision entirely based on my advice. Do some \n> research! I'm just responding \"off the cuff\" to your questions.\n> \n> If you do take the relational approach, post some sample problems here and \n> people can help you with how to represent XML data relationally.\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n\nMike Sosteric <mikes@athabascau.ca> Managing Editor, EJS <http://www.sociology.org/>\nDepartment of Global and Social Analysis Executive Director, ICAAP <http://www.icaap.org/>\nAthabasca University Cell: 1 780 909 1418\nSimon Fraser University Adjunct Professor \n\t\t\t\t\t Masters of Publishing Program \n--\nThis troubled planet is a place of the most violent contrasts. \nThose that receive the rewards are totally separated from those who\nshoulder the burdens. It is not a wise leadership - Spock, \"The Cloud Minders.\"\n\n\n___\n This communication is intended for the use of the recipient to whom it\n is addressed, and may contain confidential, personal, and or privileged\n information. Please contact us immediately if you are not the intended\n recipient of this communication, and do not copy, distribute, or take\n action relying on it. 
Any communications received in error, or\n subsequent reply, should be deleted or destroyed.\n---\n",
"msg_date": "Mon, 30 Sep 2002 12:11:36 -0600 (MDT)",
"msg_from": "Mike Sosteric <mikes@athabascau.ca>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "\nMike,\n\n> has an language attribute. if there are, say 67 seperate items, each with\n> multiple languages, then the comlexity of the table structure skyrockets\n> because you have to allow for multiple titles, multiple names, multiple\n> everything. \n\nThis looks soluable several ways. \n\nQuestion #1: If each record has 67 fields, and each field may appear in \nseveral languages, is it possible for some fields to be in more languages \nthan others? I.e. if \"title-en\" and \"title-de\" exist, does it follow that \n\"content-en\" and \"content-de\" exist as well? Or not?\n\nQuestion #2: Does your XML schema allow locall defined attributes? That is, \ndo some records have entire attributes (\"fields\" ) that other records do not?\n\nSuggestion #1: Joe Celko's \"SQL for Smarties, 2nd Ed.\" is an excellent book \nfor giving you ideas on how to adapt SQL structures to odd purposes.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 30 Sep 2002 11:20:09 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "On Mon, 30 Sep 2002, Josh Berkus wrote:\n\n> \n> Question #1: If each record has 67 fields, and each field may appear in \n> several languages, is it possible for some fields to be in more languages \n> than others? I.e. if \"title-en\" and \"title-de\" exist, does it follow that \n> \"content-en\" and \"content-de\" exist as well? Or not?\n\nyes. \n\n> \n> Question #2: Does your XML schema allow locall defined attributes? That is, \n> do some records have entire attributes (\"fields\" ) that other records do not?\n\nyes. \n\n> \n> Suggestion #1: Joe Celko's \"SQL for Smarties, 2nd Ed.\" is an excellent book \n> for giving you ideas on how to adapt SQL structures to odd purposes.\n\nI have ordered the book from amazon.ca\n\nm\n\n\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\nMike Sosteric <mikes@athabascau.ca> Managing Editor, EJS <http://www.sociology.org/>\nDepartment of Global and Social Analysis Executive Director, ICAAP <http://www.icaap.org/>\nAthabasca University Cell: 1 780 909 1418\nSimon Fraser University Adjunct Professor \n\t\t\t\t\t Masters of Publishing Program \n--\nThis troubled planet is a place of the most violent contrasts. \nThose that receive the rewards are totally separated from those who\nshoulder the burdens. It is not a wise leadership - Spock, \"The Cloud Minders.\"\n",
"msg_date": "Mon, 30 Sep 2002 12:24:13 -0600 (MDT)",
"msg_from": "Mike Sosteric <mikes@athabascau.ca>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "How can you tell the difference between now('statement') and\nnow('immediate')?\nTo me they are the same thing. Why not simply now() for transaction, and\nnow('CLOCK') or better yet system_clock() or clock() for current time.\n\nJLL\n\nJosh Berkus wrote:\n> \n> Tom,\n> \n> > I'd be happier with the whole thing if anyone had exhibited a convincing\n> > use-case for statement timestamp. So far I've not seen any actual\n> > examples of situations that are not better served by either transaction\n> > timestamp or true current time. And the spec is perfectly clear that\n> > CURRENT_TIMESTAMP does not mean true current time...\n> \n> Are we still planning on putting the three different versions of now() on the\n> TODO? I.e.,\n> now('transaction'),\n> now('statement'), and\n> now('immediate')\n> With now() = now('transaction')?\n> \n> I still think it's a good idea, provided that we have some easy means to\n> determine now('statement').\n> \n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Mon, 30 Sep 2002 14:37:45 -0400",
"msg_from": "Jean-Luc Lachance <jllachan@nsd.ca>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "OK, forget system_clock() or clock(); timeofday() will do.\n\n\nJean-Luc Lachance wrote:\n> \n> How can you make a difference between now('statement'), and\n> now('immediate').\n> To me they are the same thing. Why not simply now() for transaction, and\n> now('CLOCK') or better yet system_clock() or clock() for curent time.\n> \n> JLL\n",
"msg_date": "Mon, 30 Sep 2002 14:47:15 -0400",
"msg_from": "Jean-Luc Lachance <jllachan@nsd.ca>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Notice the part I highlighted. The time returned is\n> > implementation-dependent \"during the execution of S\". Now, if we do:\n> \n> > \tBEGIN;\n> > \tSELECT CURRENT_TIMESTAMP;\n> > \tSELECT CURRENT_TIMESTAMP;\n> \n> > the time currently returned for the second query is _not_ during the\n> > duration of S (S being an SQL procedure statement)\n> \n> Not so fast. What is an \"SQL procedure statement\"?\n> \n> Our interactive commands do not map real well to the spec's definitions.\n> Consider for example SQL92 section 4.17:\n> \n> 4.17 Procedures\n> \n> A <procedure> consists of a <procedure name>, a sequence of <pa-\n> rameter declaration>s, and a single <SQL procedure statement>.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> A <procedure> in a <module> is invoked by a compilation unit as-\n> sociated with the <module> by means of a host language \"call\"\n> statement that specifies the <procedure name> of the <procedure>\n> and supplies a sequence of parameter values corresponding in number\n> and in <data type> to the <parameter declaration>s of the <proce-\n> dure>. A call of a <procedure> causes the <SQL procedure statement>\n> that it contains to be executed.\n> \n> The only thing you can easily map this onto in Postgres is stored\n> functions; your reading would then say that each Postgres function call\n> requires its own evaluation of current_timestamp, which I think we are\n> all agreed would be a disastrous interpretation.\n> \n> It would be pretty easy to make the case that an ECPG module represents\n> a \"procedure\" in the spec's meaning, in which case it is *necessary* for\n> spec compliance that the ECPG module be able to execute all its commands\n> with the same value of current_timestamp. 
This would look like a series\n> of interactive commands to the backend.\n> \n> So I do not think that the spec provides clear support for your position.\n> The only thing that is really clear is that there is a minimum unit\n> of execution in which current_timestamp is not supposed to change.\n> It does not clearly define any maximum unit; and it is even less clear\n> that our interactive commands should be equated to \"SQL procedure\n> statement\".\n\n\nOK, you don't like \"SQL procedure statement\". Let's look at SQL92:\n\n 3) If an SQL-statement generally contains more than one reference\n to one or more <datetime value function>s, then all such ref-\n erences are effectively evaluated simultaneously. The time of\n evaluation of the <datetime value function> during the execution\n ^^^^^^^^^^^^^^^^^^^^\n of the SQL-statement is implementation-dependent.\n ^^^^^^^^^^^^^^^^^^^^\n\nso, again, we have wording that it has to be \"during\" the SQL statement.\n\nAlso, we have MSSQL, Interbase, and now Oracle modifying\nCURRENT_TIMESTAMP during the transaction. (The Oracle report just came\nin a few hours ago.)\n\nPerhaps we need a vote on this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 30 Sep 2002 14:49:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Also, we have MSSQL, Interbase, and now Oracle modifying\n> CURRENT_TIMESTAMP during the transaction. (The Oracle report just came\n> in a few hours ago.)\n\nWeren't you dissatisfied with the specificity of that Oracle report?\n\n> Perhaps we need a vote on this.\n\nPerhaps, but let's wait till the facts are in.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Sep 2002 17:26:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "\nI was wondering why there is such a rage against arrays.\n\nI posted 2 very common problems where arrays provide\nthe only natural (and efficient) fit (and got no responses).\nSo it seems to me that:\n\n- Arrays implementation (along with the intarray package) in postgresql\n is well performing and stable.\n- Some problems shout out for array usage.\n- The Array interface is defined in the java.sql package.\n (I don't know if SQL arrays are in some standard, but it seems that\n Java sees it that way, at least).\n- The Array interface is implemented in the official postgresql java\npackage.\n- In some problems, replacing arrays according to the traditional relational\nparadigm would end up in such a performance degradation that\nsome applications would be unusable.\n- Oleg and Teodor did a great job in intarray, making array usage\n easy and efficient.\n\nThanx!\n\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n",
"msg_date": "Tue, 1 Oct 2002 10:49:41 +0300 (EEST)",
"msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays "
},
{
"msg_contents": ">>>>> \"Josh\" == Josh Berkus <josh@agliodbs.com> writes:\n\n Josh> Now, I know at least one person who is using arrays to store\n Josh> scientific data. However, that data arrives in his lab in\n Josh> the form of matrices, and is not used for joins or query\n Josh> criteria beyond a simple \"where\" clause.\n\nIndeed, my first attempt to use arrays was to maintain some basic\nstatistics about a set of data. The array elements were to be\ndistribution moments and would only be used in \"where\" clauses. The\nproblem was that I wanted to be able to update the statistics using\ntriggers whenever the main data was updated. The inability to access\na specific array element in PL/pgSQL code made this so painful I ended\nup just extending a table with more columns.\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nroland@astrofoto.org Forest Hills, NY 11375\n",
"msg_date": "01 Oct 2002 13:43:02 -0400",
"msg_from": "Roland Roberts <roland@astrofoto.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] arrays"
},
{
"msg_contents": "Achilleus,\n\n> I was wondering why is such a rage against arrays.\n> \n> I posted 2 very common problems where arrays provide\n> the only natural (and efficient) fit. (and got no responses)\n> So it seems to me that:\n\nAll of your points are correct. \n\nUs \"old database hands\" have a knee-jerk reaction against arrays for\nlong-term data storage because, much of the time, developers use arrays\nbecause they are lazy or don't understand the relational model instead\nof because they are the best thing to use. This is particularly true\nof people who come to database development from, say, web design.\n\nIn this thread particularly, Mike was suggesting using arrays for a\nfield used in JOINs, which would be a royal mess. Which was why you\nheard so many arguments against using arrays.\n\nOr, to put it another way: \n\n1. Array data types are perfect for storing data that arrives in the\nform of arrays or matrices, such as scientific data, or interface\nprograms that store arrays of object properties. \n\n2. For other purposes, arrays are a very poor substitute for proper\nsub-table storage of related data according to the relational model. \n\n3. The distinguishing factor is \"atomicity\": ask yourself: \"is this\narray a discrete and indivisible unit, or is it a collection of related\nbut mutable elements?\" If the former, use an array. If the latter,\nuse a sub-table.\n\nClearer now?\n\n-Josh Berkus\n\n\n",
"msg_date": "Tue, 01 Oct 2002 10:52:53 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays "
},
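The sub-table storage Josh recommends in point 2 fits the multilingual-title problem from earlier in the thread. A hedged sketch follows: SQLite stands in for the real server so the example is self-contained, and the table and column names (`item`, `item_text`, `field`, `lang`) are hypothetical, not taken from the thread:

```python
import sqlite3

# Languages become rows, not columns: one row per (item, field, language).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE item (
        item_id INTEGER PRIMARY KEY
    );
    CREATE TABLE item_text (
        item_id  INTEGER REFERENCES item(item_id),
        field    TEXT NOT NULL,      -- e.g. 'title', 'content'
        lang     TEXT NOT NULL,      -- e.g. 'en', 'de'
        value    TEXT NOT NULL,
        PRIMARY KEY (item_id, field, lang)
    );
""")
conn.execute("INSERT INTO item VALUES (1)")
conn.executemany(
    "INSERT INTO item_text VALUES (?, ?, ?, ?)",
    [(1, "title", "en", "This is the title"),
     (1, "title", "de", "Dies ist der Titel")],
)
# Fetch the English title without knowing up front which languages exist.
row = conn.execute(
    "SELECT value FROM item_text WHERE item_id = 1 "
    "AND field = 'title' AND lang = 'en'"
).fetchone()
```

The same shape carries over to PostgreSQL unchanged: adding a 68th field or a new language is just another row, with no DDL change and no per-language columns.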
{
"msg_contents": "\n[ Thread moved to hackers.]\n\nOK, I have enough information from the various other databases to make a\nproposal. It seems the other databases, particularly Oracle, record\nCURRENT_TIMESTAMP as the time of statement start. However, it isn't the\ntime of statement start from the user's perspective, but rather from the\ndatabase's perspective, i.e. if you call a function that has two\nstatements in it, each statement could have a different\nCURRENT_TIMESTAMP.\n\nI don't think that is standards-compliant, and I don't think any of our\nusers want that. What they probably want is to have a fixed\nCURRENT_TIMESTAMP from the time the query is submitted until it is\ncompleted. We can call that the \"statement arrival time\" version of\nCURRENT_TIMESTAMP. I don't know if any of the other databases support\nthis concept, but it seems the most useful, and is closer to the\nstandards and to other databases than we are now.\n\nSo, we have a couple of decisions to make:\n\n\tShould CURRENT_TIMESTAMP be changed to \"statement arrival time\"?\n\tShould now() be changed the same way?\n\tIf not, should now() and CURRENT_TIMESTAMP return the same type of\n\tvalue?\n\nOne idea is to change CURRENT_TIMESTAMP to \"statement arrival time\", and\nleave now() as transaction start time. \n\nAlso, should we add now(\"val\") where val can be \"transaction\",\n\"statement\", or \"clock\"?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 3 Oct 2002 16:18:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Thu, Oct 03, 2002 at 04:18:08PM -0400, Bruce Momjian wrote:\n> \n> So, we have a couple of decisions to make:\n> \n> \tShould CURRENT_TIMESTAMP be changed to \"statement arrival time\"?\n> \tShould now() be changed the same way?\n> \tIf not, should now() and CURRENT_TIMESTAMP return the same type of\n> \tvalue?\n> \n> One idea is to change CURRENT_TIMESTAMP to \"statement arrival time\", and\n> leave now() as transaction start time. \n\nA disadvantage to this, as I see it, is that users may have depended on\nthe traditional Postgres behaviour of time \"freezing\" in transaction. \nYou always had to select timeofday() for moving time. I can see an\nadvantage in making what Postgres does somewhat more like what other\npeople do (as flat-out silly as some of that seems to be). Still, it\nlooks to me like the present CURRENT_TIMESTAMP implementation is at\nleast as much like the spec as anyone else's implementation, and more\nlike the spec than many of them. So I'm still not clear on what\nproblem the change is going to fix, especially since it breaks with\ntraditional behaviour.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 3 Oct 2002 18:03:19 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Andrew Sullivan wrote:\n> On Thu, Oct 03, 2002 at 04:18:08PM -0400, Bruce Momjian wrote:\n> > \n> > So, we have a couple of decisions to make:\n> > \n> > \tShould CURRENT_TIMESTAMP be changed to \"statement arrival time\"?\n> > \tShould now() be changed the same way?\n> > \tIf not, should now() and CURRENT_TIMESTAMP return the same type of\n> > \tvalue?\n> > \n> > One idea is to change CURRENT_TIMESTAMP to \"statement arrival time\", and\n> > leave now() as transaction start time. \n> \n> A disadvantage to this, as I see it, is that users may have depended on\n> the traditional Postgres behavior of time \"freezing\" in transaction. \n> You always had to select timeofday() for moving time. I can see an\n> advantage in making what Postgres does somewhat more like what other\n> people do (as flat-out silly as some of that seems to be). Still, it\n> looks to me like the present CURRENT_TIMESTAMP implementation is at\n> least as much like the spec as anyone else's implementation, and more\n> like the spec than many of them. So I'm still not clear on what\n> problem the change is going to fix, especially since it breaks with\n> traditional behavior.\n\nUh, why change? Well, we have a \"tradition\" issue here, and changing it\nwill require something in the release notes. The big reason to change\nis that most people using CURRENT_TIMESTAMP are not anticipating that it\nis transaction start time, and are asking/complaining. We had one only\nthis week. If it were obvious to users when they used it, we could just\nsay it is our way of doing it, but in most cases it is catching people\nby surprise. Given that other DB's have CURRENT_TIMESTAMP changing\neven more frequently than we think is reasonable, it would make sense to\nchange it so it more closely matches what people expect, both new SQL\nusers and users moving from other DBs.\n\nSo, in summary, reasons for the change:\n\n\tmore intuitive\n\tmore standard-compliant\n\tmore closely matches other db's\n\nReasons not to change:\n\n\tPostgreSQL traditional behavior\n\nDoes that help?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 3 Oct 2002 18:15:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So, in summary, reasons for the change:\n> \tmore intuitive\n> \tmore standard-compliant\n> \tmore closely matches other db's\n\nI'd give you the first and third of those. As Andrew noted, the\nargument that \"it's more standard-compliant\" is not very solid.\n\n> Reasons not to change:\n> \tPostgreSQL traditional behavior\n\nYou've phrased that in a way that makes it sound like the decision\nis a no-brainer. How about\n\n\tBreaks existing Postgres applications in non-obvious ways\n\nwhich I think is a more realistic description of the downside.\n\nAlso, it seems a lot of people who have thought about this carefully\nthink that the start-of-transaction behavior is just plain more useful.\nThe fact that it surprises novices is not a reason why people who know\nthe behavior shouldn't want it to work like it does. (The behavior of\nnextval/currval for sequences surprises novices, too, but I haven't\nheard anyone claim we should change it because of that.)\n\nSo I think a fairer summary is\n\nPro:\n\n\tmore intuitive (but still not what an unversed person would\n\t\t\texpect, namely true current time)\n\targuably more standard-compliant\n\tmore closely matches other db's (but still not very closely)\n\nCon:\n\n\tbreaks existing Postgres applications in non-obvious ways\n\targuably less useful than our traditional behavior\n\nI've got no problem with the idea of adding a way to get at\nstatement-arrival time. (I like the idea of a parameterized version of\nnow() to provide a consistent interface to all three functionalities.)\nBut I'm less than enthused about changing the existing functions to give\npride of place to statement-arrival time. In the end, I think that\ntransaction-start time is the most commonly useful and safest variant,\nand so I feel it ought to have pride of place as the easiest one to get\nat.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Oct 2002 19:09:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP "
},
{
"msg_contents": "On Thu, Oct 03, 2002 at 07:09:33PM -0400, Tom Lane wrote:\n\n> statement-arrival time. (I like the idea of a parameterized version of\n> now() to provide a consistent interface to all three functionalities.)\n\nI like this, too. I think it'd probably be useful. But. . .\n\n> pride of place to statement-arrival time. In the end, I think that\n> transaction-start time is the most commonly useful and safest variant,\n\n. . .I also think this is true. If I'm doing a bunch of database\noperations in one transaction, there is a remarkably good argument\nthat they happened \"at the same time\". After all, the marked passage\nof time is probably just an unfortunate side effect of my database's\ninability to process things instantaneously.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 3 Oct 2002 19:58:39 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > So, in summary, reasons for the change:\n> > \tmore intuitive\n> > \tmore standard-compliant\n> > \tmore closely matches other db's\n> \n> I'd give you the first and third of those. As Andrew noted, the\n> argument that \"it's more standard-compliant\" is not very solid.\n\nThe standard doesn't say anything about transaction in this regard. I\nactually think Oracle is closer to the standard than we are right now.\n\n> > Reasons not to change:\n> > \tPostgreSQL traditional behavior\n> \n> You've phrased that in a way that makes it sound like the decision\n> is a no-brainer. How about\n> \n> \tBreaks existing Postgres applications in non-obvious ways\n> \n> which I think is a more realistic description of the downside.\n\nI had used Andrew's words:\n\n\tthe traditional Postgres behaviour of time \"freezing\" in transaction. \n\nYes, \"breaking\" is a clearer description.\n\n> Also, it seems a lot of people who have thought about this carefully\n> think that the start-of-transaction behavior is just plain more useful.\n> The fact that it surprises novices is not a reason why people who know\n> the behavior shouldn't want it to work like it does. (The behavior of\n> nextval/currval for sequences surprises novices, too, but I haven't\n> heard anyone claim we should change it because of that.)\n\nNo one has suggested a more intuitive solution for sequences, or we\nwould have discussed it.\n\n> So I think a fairer summary is\n> \n> Pro:\n> \n> \tmore intuitive (but still not what an unversed person would\n> \t\t\texpect, namely true current time)\n> \targuably more standard-compliant\n\nWhat does \"arguably\" mean? That seems more like a throw-away objection.\n\n> \tmore closely matches other db's (but still not very closely)\n\nCloser!\n\nNo need to qualify what I said. 
It is \"more\" of all these things, not\n\"exactly\", of course.\n\n> Con:\n> \n> \tbreaks existing Postgres applications in non-obvious ways\n> \targuably less useful than our traditional behavior\n> \n> I've got no problem with the idea of adding a way to get at\n> statement-arrival time. (I like the idea of a parameterized version of\n> now() to provide a consistent interface to all three functionalities.)\n> But I'm less than enthused about changing the existing functions to give\n> pride of place to statement-arrival time. In the end, I think that\n> transaction-start time is the most commonly useful and safest variant,\n> and so I feel it ought to have pride of place as the easiest one to get\n> at.\n\nWell, let's see what others say. If no one is excited about the change,\nwe can just document its current behavior. Oh, I see it is already\ndocumented in func.sgml:\n\n It is quite important to realize that\n <function>CURRENT_TIMESTAMP</function> and related functions all return\n the time as of the start of the current transaction; their values do not\n increment while a transaction is running. But\n <function>timeofday()</function> returns the actual current time.\n\nSeems that isn't helping enough to reduce the number of people who are\nsurprised by our behavior. I don't think anyone would be surprised by\nstatement time.\n\nWhat do others think?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n\n",
"msg_date": "Thu, 3 Oct 2002 20:41:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "On Fri, 2002-10-04 at 01:41, Bruce Momjian wrote:\n> Well, let's see what others say. If no one is excited about the change,\n> we can just document its current behavior. Oh, I see it is already\n> documented in func.sgml:\n> \n> It is quite important to realize that\n> <function>CURRENT_TIMESTAMP</function> and related functions all return\n> the time as of the start of the current transaction; their values do not\n> increment while a transaction is running. But\n> <function>timeofday()</function> returns the actual current time.\n> \n> Seems that isn't helping enough to reduce the number of people who are\n> surprised by our behavior. I don't think anyone would be surprised by\n> statement time.\n> \n> What do others think?\n\nI would prefer that CURRENT_TIME[STAMP] always produce the same time\nwithin a transaction. If it is changed, it will certainly break one of\nmy applications, which explicitly depends on the current behaviour. If\nyou change it, please provide an alternative way of doing the same\nthing.\n\nI can see that the current behaviour might give surprising results in a\nlong running transaction. Surprise could be reduced by giving the time\nof first use within the transaction rather than the start of the\ntransaction.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For the word of God is quick, and powerful, and \n sharper than any twoedged sword, piercing even to the \n dividing asunder of soul and spirit, and of the joints\n and marrow, and is a discerner of the thoughts and \n intents of the heart.\" Hebrews 4:12 \n\n",
"msg_date": "04 Oct 2002 06:03:13 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> I can see that the current behaviour might give surprising results in a\n> long running transaction. Surprise could be reduced by giving the time\n> of first use within the transaction rather than the start of the\n> transaction.\n\n[ cogitates ... ] Hmm, we could do that, and it probably would break\nfew if any existing apps. But would it really reduce the surprise\nfactor? The complaints we've heard so far all seemed to come from\npeople who expected multiple current_timestamp calls to show advancing\ntimes within a transaction.\n\nOliver's idea might be worth doing just on performance grounds: instead\nof a gettimeofday() call at the start of every transaction, we'd only\nhave to reset a flag variable. When and if current_timestamp is done\ninside the transaction, then call the kernel to ask what time it is.\nWe win on every transaction that does not contain a current_timestamp\ncall, which is probably a good bet for most apps. But I don't think\nthis does much to resolve the behavioral complaints.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Oct 2002 01:20:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP "
},
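Oliver's lazy-evaluation idea, as Tom restates it, is just a control-flow pattern: reset a cheap flag at transaction start, call the clock only if and when the timestamp is first requested, then hold that value fixed for the rest of the transaction. A minimal Python sketch of that pattern; none of these names are PostgreSQL internals:

```python
import time


class Transaction:
    """Illustration of lazy first-use timestamp evaluation."""

    def __init__(self):
        # The cheap "flag reset" at transaction start: no clock call yet.
        self._stamp = None

    def current_timestamp(self):
        if self._stamp is None:           # first use inside this transaction
            self._stamp = time.time()     # the only clock (gettimeofday) call
        return self._stamp                # frozen for the rest of the txn


txn = Transaction()
t1 = txn.current_timestamp()
t2 = txn.current_timestamp()  # same value: time does not advance in the txn
```

Transactions that never ask for the timestamp never pay for a clock call, which is the performance win Tom notes; the behavioral complaints are untouched, since repeated calls still return one frozen value.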
{
"msg_contents": "On Sun, 29 Sep 2002, Mike Sosteric wrote:\n\n> On Sun, 29 Sep 2002, Bruce Momjian wrote:\n> \n> Apologies in advance if there is a more appropriate list. \n> \n> We are currently developing a database to host some complicated, XMl\n> layered data. We have chosen postgres because of its ability to store\n> multidimensional arrays. We feel that using these will allow us to\n> simplify the database structure considerably by storing some data in\n> multidimensional arrays. \n\nthe long and the short of it is that arrays are useful to store data, but \nshould not be used where you need to look up the data in them in a where \nclause.\n\n",
"msg_date": "Fri, 4 Oct 2002 10:08:54 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] arrays"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Marc G. Fournier [mailto:scrappy@hub.org] \n> Sent: 20 September 2002 14:55\n> To: Justin Clift\n> Cc: PostgreSQL Hackers Mailing List\n> Subject: Re: [HACKERS] Where to post a new PostgreSQL utility?\n> \n> \n> \n> gborg\n\nJust because I'm curious, is *all* new stuff going to Gborg, and is the\nexisting /contrib going to be migrated there?\n\nRegards, Dave.\n",
"msg_date": "Fri, 20 Sep 2002 15:03:00 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Where to post a new PostgreSQL utility?"
},
{
"msg_contents": "Dave Page writes:\n > Just because I'm curious, is *all* new stuff going to Gborg, and is the\n > existing /contrib going to migrated there?\n\nI'm curious too...\n\nIf that is to happen then the profile of gborg would need to be\nmassively increased. Currently the only real link on the 'net to gborg\n(by searching through\nhttp://www.google.com/search?q=link:gborg.postgresql.org) is from the\nUser Lounge (link \"Related projects\") on the PostgreSQL site.\n\nI'd have thought a link on the main left-hand list would be more apt.\n\nAnd gborg needs a search facility if people are going to be able to\nfind anything...\n\nLee.\n",
"msg_date": "Fri, 20 Sep 2002 15:18:49 +0100",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Where to post a new PostgreSQL utility?"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Lee Kindness [mailto:lkindness@csl.co.uk] \n> Sent: 20 September 2002 15:19\n> To: Dave Page\n> Cc: Marc G. Fournier; Justin Clift; Lee Kindness; PostgreSQL \n> Hackers Mailing List\n> Subject: Re: [HACKERS] Where to post a new PostgreSQL utility?\n> \n> \n> Dave Page writes:\n> > Just because I'm curious, is *all* new stuff going to \n> Gborg, and is the > existing /contrib going to migrated there?\n> \n> I'm curious too...\n> \n> If that is to happen then the profile of gborg would need to \n> be massively increased. Currenly the only real link on the \n> 'net to gborg (by searching through\n> http://www.google.com/search?q=link:gborg.postgresql.org) is \n> from the User Lounge (link \"Related projects\") on the PostgreSQL site.\n> \n> I'd have thought a link on the main left-hand list would be more apt.\n\nThat's being worked on in a roundabout kind of way...\n\n> And gbord needs a search facility if people are going to be \n> able to find anything...\n\nYes, I think you're right. I will suggest it to the relevant people.\n\nRegards, Dave.\n",
"msg_date": "Fri, 20 Sep 2002 15:27:47 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Where to post a new PostgreSQL utility?"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile testing for large databases, I am trying to load 12.5M rows of data from \na text file and it takes a lot longer than mysql, even with copy.\n\nMysql takes 221 sec. v/s 1121 sec. for postgres. For postgresql, that is around \n11.5K rows per second. Each tuple has 23 fields with a fixed length of around 100 \nbytes.\n\nI wrote a program which does inserts in batches but none of them reaches the \nperformance of copy. I tried 1K/5K/10K/100K rows in a transaction but it can \nnot cross 2.5K rows/sec.\n\nThe machine is 800MHz, P-III/512MB/IDE disk. Postmaster is started with 30K \nbuffers i.e. around 235MB buffers. Kernel caching parameters are defaults.\n\nBesides, there is the issue of space. Mysql takes 1.4GB space for 1.2GB text data \nand postgresql takes 3.2GB of space. Even with the 40 bytes per row overhead \nmentioned in the FAQ, that should come to around 1.7GB, accounting for a 40% increase \nin size. Vacuum was run on the database.\n\nAny further help? Especially if batch inserts could be sped up, that would be \ngreat..\n\nBye\n Shridhar\n\n--\nAlone, adj.:\tIn bad company.\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n",
"msg_date": "Fri, 20 Sep 2002 21:22:08 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Improving speed of copy"
},
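For readers following along, the COPY command being benchmarked takes roughly this form on 7.1/7.2; the table name and file path here are made up for illustration:

```sql
-- bulk load from a server-side text file; COPY parses the whole file
-- in a single command and a single transaction
COPY bigtable FROM '/tmp/data.txt' USING DELIMITERS '\t';
```

That single-command, single-transaction behavior is why individual INSERT statements have trouble matching COPY's throughput.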
{
"msg_contents": "Are you using copy within a transaction?\n\nI don't know how to explain the size difference tho. I have never seen an\noverhead difference that large. What type of MySQL tables were you using\nand what version?\n\nHave you tried this with Oracle or similar commercial database?\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Shridhar\nDaithankar\nSent: Friday, September 20, 2002 9:52 AM\nTo: Pgsql-hackers@postgresql.org\nSubject: [HACKERS] Improving speed of copy\n\n\nHi all,\n\nWhile testing for large databases, I am trying to load 12.5M rows of data\nfrom\na text file and it takes lot longer than mysql even with copy.\n\nMysql takes 221 sec. v/s 1121 sec. for postgres. For postgresql, that is\naround\n11.5K rows per second. Each tuple has 23 fields with fixed length of around\n100\nbytes\n\nI wrote a programs which does inserts in batches but none of thme reaches\nperformance of copy. I tried 1K/5K/10K/100K rows in a transaction but it can\nnot cross 2.5K rows/sec.\n\nThe machine is 800MHz, P-III/512MB/IDE disk. Postmaster is started with 30K\nbuffers i.e. around 235MB buffers. Kernel caching paramaters are defaults.\n\nBesides there is issue of space. Mysql takes 1.4GB space for 1.2GB text data\nand postgresql takes 3.2GB of space. Even with 40 bytes per row overhead\nmentioned in FAQ, that should come to around 1.7GB, counting for 40%\nincrease\nin size. Vacuum was run on database.\n\nAny further help? Especially if batch inserts could be speed up, that would\nbe\ngreat..\n\nBye\n Shridhar\n\n--\nAlone, adj.:\tIn bad company.\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n",
"msg_date": "Fri, 20 Sep 2002 10:14:30 -0600",
"msg_from": "\"Jonah H. Harris\" <jharris@nightstarcorporation.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "On 20 Sep 2002 at 21:22, Shridhar Daithankar wrote:\n\n> Mysql takes 221 sec. v/s 1121 sec. for postgres. For postgresql, that is around \n> 11.5K rows per second. Each tuple has 23 fields with fixed length of around 100 \n> bytes\n> \n> I wrote a program which does inserts in batches but none of them reaches \n> performance of copy. I tried 1K/5K/10K/100K rows in a transaction but it can \n> not cross 2.5K rows/sec.\n\n1121 sec. was the time with the postgres default of 64 buffers. With 30K buffers it has \ndegraded to 1393 sec.\n\nOne more issue is the time taken for composite index creation. It's 4341 sec as \nopposed to 436 sec for mysql. These are three non-unique character fields where \nthe combination itself is not unique as well. Would an R-tree index \nbe a better choice?\n\nIn a select test where approx. 15 rows were reported with a query on an index field, \nmysql took 14 sec. and postgresql took 17.5 sec. Not bad, but other issues \neclipse the result..\n\nTIA once again..\n\nBye\n Shridhar\n\n--\nrevolutionary, adj.:\tRepackaged.\n\n",
"msg_date": "Fri, 20 Sep 2002 21:48:30 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "On 20 Sep 2002 at 10:14, Jonah H. Harris wrote:\n\n> Are you using copy within a transaction?\n\nNo. Will that help? I can try. But with the utility I wrote, I could insert say 10K \nrecords in a transaction. Copy seems to be doing it all in one transaction. I \ndon't get any value for select count(*) in another psql session till copy \nfinishes..\n \n> I don't know how to explain the size difference tho. I have never seen an\n> overhead difference that large. What type of MySQL tables were you using\n> and what version?\n\nDunno.. Not my machine.. I am just trying to tune postgres on a friend's \nmachine.. not even postgres/root there.. So I can not answer these questions fast \nbut will get back on them..\n \n> Have you tried this with Oracle or similar commercial database?\n\nNo. This requirement is specific to open source databases..\n\nMaybe in another test on a 4 way/4GB RAM machine, I might see another \nresult. Mysql was creating an index on a 10GB table for the last 25 hours and last I \nknew it wasn't finished.. Must be something with the parameters..\n\nWill keep you guys posted..\n\nBye\n Shridhar\n\n--\nbrain, n:\tThe apparatus with which we think that we think.\t\t-- Ambrose Bierce, \n\"The Devil's Dictionary\"\n\n",
"msg_date": "Fri, 20 Sep 2002 21:52:34 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "Also, did you disable fsync?\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Jonah H. Harris\nSent: Friday, September 20, 2002 10:15 AM\nTo: shridhar_daithankar@persistent.co.in; Pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Improving speed of copy\n\n\nAre you using copy within a transaction?\n\nI don't know how to explain the size difference tho. I have never seen an\noverhead difference that large. What type of MySQL tables were you using\nand what version?\n\nHave you tried this with Oracle or similar commercial database?\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Shridhar\nDaithankar\nSent: Friday, September 20, 2002 9:52 AM\nTo: Pgsql-hackers@postgresql.org\nSubject: [HACKERS] Improving speed of copy\n\n\nHi all,\n\nWhile testing for large databases, I am trying to load 12.5M rows of data\nfrom\na text file and it takes lot longer than mysql even with copy.\n\nMysql takes 221 sec. v/s 1121 sec. for postgres. For postgresql, that is\naround\n11.5K rows per second. Each tuple has 23 fields with fixed length of around\n100\nbytes\n\nI wrote a programs which does inserts in batches but none of thme reaches\nperformance of copy. I tried 1K/5K/10K/100K rows in a transaction but it can\nnot cross 2.5K rows/sec.\n\nThe machine is 800MHz, P-III/512MB/IDE disk. Postmaster is started with 30K\nbuffers i.e. around 235MB buffers. Kernel caching paramaters are defaults.\n\nBesides there is issue of space. Mysql takes 1.4GB space for 1.2GB text data\nand postgresql takes 3.2GB of space. Even with 40 bytes per row overhead\nmentioned in FAQ, that should come to around 1.7GB, counting for 40%\nincrease\nin size. Vacuum was run on database.\n\nAny further help? 
Especially if batch inserts could be sped up, that would\nbe\ngreat..\n\nBye\n Shridhar\n\n--\nAlone, adj.:\tIn bad company.\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n",
"msg_date": "Fri, 20 Sep 2002 10:26:46 -0600",
"msg_from": "\"Jonah H. Harris\" <jharris@nightstarcorporation.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "On 20 Sep 2002 at 10:26, Jonah H. Harris wrote:\n\n> Also, did you disable fsync?\n\nAarrrgggh.. If that turns out to be the culprit, I will kill him..;-)\n\nThe problem is I can't see postgresql.conf nor access his command history, and \nhe has left for the day..\n\nI will count that in the checklist, but this is postgresql 7.1.3 on RHL7.2.. IIRC it \nshould have WAL, in which case -F should not matter much..\n\nOn second thought, would it be worth trying 7.2.2, compiled? Will there be any \nperformance difference? I can see on another machine that Mandrake8.2 has come \nwith 7.2-12..\n\nI think this may be the factor as well..\n\nBye\n Shridhar\n\n--\nA hypothetical paradox:\tWhat would happen in a battle between an Enterprise \nsecurity team,\twho always get killed soon after appearing, and a squad of \nImperial\tStormtroopers, who can't hit the broad side of a planet?\t\t-- Tom \nGalloway\n\n",
"msg_date": "Fri, 20 Sep 2002 22:00:43 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Improving speed of copy"
},
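For reference, the fsync / -F question in this exchange maps to a single postgresql.conf setting; disabling it skips the disk sync at commit, which speeds up bulk loads at the cost of crash safety, so it is really only suitable for benchmarks and initial loads:

```
# postgresql.conf -- benchmark/bulk-load use only; unsafe on power loss
fsync = false
```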
{
"msg_contents": "On Fri, 2002-09-20 at 08:52, Shridhar Daithankar wrote:\n\n> Besides there is issue of space. Mysql takes 1.4GB space for 1.2GB text data \n> and postgresql takes 3.2GB of space. Even with 40 bytes per row overhead \n> mentioned in FAQ, that should come to around 1.7GB, counting for 40% increase \n> in size. Vacuum was run on database.\n> \n\nHow did you calculate the size of database? If you used \"du\" make sure\nyou do it in the data/base directory as to not include the WAL files. \n\n\n-- \nBest Regards,\n \nMike Benoit\nNetNation Communication Inc.\nSystems Engineer\nTel: 604-684-6892 or 888-983-6600\n ---------------------------------------\n \n Disclaimer: Opinions expressed here are my own and not \n necessarily those of my employer\n\n",
"msg_date": "20 Sep 2002 10:27:13 -0700",
"msg_from": "Mike Benoit <mikeb@netnation.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "On Fri, 20 Sep 2002, Shridhar Daithankar wrote:\n\n> In select test where approx. 15 rows where reported with query on index field, \n> mysql took 14 sec. and psotgresql took 17.5 sec. Not bad but other issues \n> eclipse the result..\n\nI don't know about anyone else but I find this aspect strange. That's 1 second\n(approx.) per row retrieved. That is pretty dire for an index scan. The\ndata/index must be very non unique.\n\n\n-- \nNigel J. Andrews\n\n",
"msg_date": "Fri, 20 Sep 2002 18:41:24 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "Nigel J. Andrews wrote:\n> On Fri, 20 Sep 2002, Shridhar Daithankar wrote:\n> \n>>In select test where approx. 15 rows where reported with query on index field, \n>>mysql took 14 sec. and psotgresql took 17.5 sec. Not bad but other issues \n>>eclipse the result..\n> \n> I don't know about anyone else but I find this aspect strange. That's 1 second\n> (approx.) per row retrieved. That is pretty dire for an index scan. The\n> data/index must be very non unique.\n> \n\nYeah, I'd agree that is strange. Can we see EXPLAIN ANALYZE for that query.\n\nAlso, in one of your ealier posts you mentioned a slowdown after raising \nshared buffers from the default 64 to 30000. You might have driven the machine \ninto swapping. Maybe try something more like 10000 - 15000.\n\nHTH,\n\nJoe\n\n",
"msg_date": "Fri, 20 Sep 2002 11:28:15 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "On 20 Sep 2002 at 10:27, Mike Benoit wrote:\n\n> On Fri, 2002-09-20 at 08:52, Shridhar Daithankar wrote:\n> \n> > Besides there is issue of space. Mysql takes 1.4GB space for 1.2GB text data \n> > and postgresql takes 3.2GB of space. Even with 40 bytes per row overhead \n> > mentioned in FAQ, that should come to around 1.7GB, counting for 40% increase \n> > in size. Vacuum was run on database.\n> > \n> \n> How did you calculate the size of database? If you used \"du\" make sure\n> you do it in the data/base directory as to not include the WAL files. \n\nOK, latest experiments: I turned the number of buffers down to 15K and fsync is disabled.. \nLoad time is now 1250 sec.\n\nI noticed lots of notices in the log saying \"XLogWrite: new log files created\".. I \nam pushing wal_buffers to 1000 and wal_files to 40 to test again.. I hope it \ngives me the required boost..\n\nAnd BTW, about disk space usage, it's 2.6G in base, with pg_xlog taking 65M. Still \nnot good..\n\nWill keep you guys updated..\n\nBye\n Shridhar\n\n--\nIt is necessary to have purpose.\t\t-- Alice #1, \"I, Mudd\", stardate 4513.3\n\n",
"msg_date": "Sat, 21 Sep 2002 14:14:26 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "On 20 Sep 2002 at 18:41, Nigel J. Andrews wrote:\n\n> On Fri, 20 Sep 2002, Shridhar Daithankar wrote:\n> \n> > In a select test where approx. 15 rows were reported with a query on an index field, \n> > mysql took 14 sec. and postgresql took 17.5 sec. Not bad but other issues \n> > eclipse the result..\n> \n> I don't know about anyone else but I find this aspect strange. That's 1 second\n> (approx.) per row retrieved. That is pretty dire for an index scan. The\n> data/index must be very non unique.\n\nSorry for the late reply.. The scale of the numbers was off.. Actually my friend forgot \nto add units to those numbers.. The actual numbers are 140ms for mysql and \n175ms for postgresql.. Further, since the results are obtained via 'time psql', the higher \noverhead of postgres connection establishment is factored in..\n\nNeck and neck, I would say..\n\nBye\n Shridhar\n\n--\nSteele's Law:\tThere exist tasks which cannot be done by more than ten men\tor \nfewer than one hundred.\n\n",
"msg_date": "Mon, 23 Sep 2002 12:04:59 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "Hi,\n\nI am wondering about bad INSERT performance compared with the speed of\nCOPY. (I use 7.2.2 on RedHat 7.2)\n\nI have a table with about 30 fields, some constraints, some indexes, some\nforeign key constraints. I use COPY to import old data. Copying about\n10562 rows takes about 19 seconds.\n\nFor testing I have written a simple function in PL/pgSQL that inserts dummy\nrecords into the same table (just a FOR loop and an INSERT INTO ...).\n\nTo insert another 10562 rows takes about 12 minutes now!!!\n\nWhat is the problem with INSERT in postgresql? I usually don't compare mysql\nand postgresql because mysql is just playing stuff, but I think that\nthe insert performance of mysql (even with innodb tables) is about 10 times\nbetter than the insert performance of postgresql.\n\nWhat is the reason and what can be done about it?\n\nBest Regards,\nMichael\n\nP.S: Perhaps you want to know about my postgresql.conf\n\n#\n# Shared Memory Size\n#\nshared_buffers = 12288 # 2*max_connections, min 16\nmax_fsm_relations = 100 # min 10, fsm is free space map\nmax_fsm_pages = 20000 # min 1000, fsm is free space map\nmax_locks_per_transaction = 64 # min 10\nwal_buffers = 8 # min 4\n\n#\n# Non-shared Memory Sizes\n#\nsort_mem = 4096 # min 32 (in Kb)\nvacuum_mem = 16384 # min 1024\n\n#\n# Write-ahead log (WAL)\n#\nwal_files = 8 # range 0-64, default 0\nwal_sync_method = fdatasync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or open_datasync\nfsync = true\n\n\n",
"msg_date": "Wed, 25 Sep 2002 22:10:30 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Insert Performance"
},
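The test function is described but never shown in the thread; a minimal PL/pgSQL sketch of such a loop, in 7.2-era quoted-body syntax (the table and column names are invented for illustration), could look like:

```sql
-- hypothetical benchmark helper: insert n dummy rows into invoice
CREATE FUNCTION bench_insert(integer) RETURNS integer AS '
DECLARE
    n ALIAS FOR $1;
    i integer;
BEGIN
    FOR i IN 1..n LOOP
        INSERT INTO invoice (amount, note) VALUES (i, ''dummy row'');
    END LOOP;
    RETURN n;
END;
' LANGUAGE 'plpgsql';
```

Calling `SELECT bench_insert(10000);` runs the whole loop inside one function call, and therefore inside one transaction.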
{
"msg_contents": "\"Michael Paesold\" <mpaesold@gmx.at> writes:\n> To insert another 10562 rows takes about 12 minutes now!!!\n\nSee\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/populate.html\nparticularly the point about not committing each INSERT as a separate\ntransaction.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 16:49:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
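The key point in the linked page: without an explicit BEGIN, every INSERT commits (and flushes the WAL) on its own. Batching, sketched with a hypothetical table, is just:

```sql
BEGIN;
INSERT INTO invoice (amount) VALUES (1);
INSERT INTO invoice (amount) VALUES (2);
-- ... many more rows ...
COMMIT;   -- one commit, one WAL flush for the whole batch
```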
{
"msg_contents": "Tom Lane wrote:\n\n> \"Michael Paesold\" <mpaesold@gmx.at> writes:\n> > To insert another 10562 rows takes about 12 minutes now!!!\n>\n> See\n> http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/populate.html\n> particularly the point about not committing each INSERT as a separate\n> transaction.\n>\n> regards, tom lane\n\nAs I said I wrote a function to insert the rows (PL/pgSQL). All values were\ninserted inside a single function call; I always though that a function call\nwould be executed inside a transaction block. Experience says it does.\n\nAbout the other points in the docs:\n\n> Use COPY FROM:\nWell, I am currently comparing INSERT to COPY ... ;)\n\n> Remove Indexes:\nDoesn't COPY also have to update indexes?\n\n> ANALYZE Afterwards:\nI have done a VACUUM FULL; VACUUM ANALYZE; just before running the test.\n\nSo is it just the planner/optimizer/etc. costs? Would a PREPARE in 7.3 help?\n\nBest Regards,\nMichael Paesold\n\n",
"msg_date": "Wed, 25 Sep 2002 23:25:54 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
{
"msg_contents": "\"Michael Paesold\" <mpaesold@gmx.at> writes:\n> To insert another 10562 rows takes about 12 minutes now!!!\n\n> As I said I wrote a function to insert the rows (PL/pgSQL). All values were\n> inserted inside a single function call; I always though that a function call\n> would be executed inside a transaction block. Experience says it does.\n\nWell, there's something fishy about your results. Using CVS tip I see\nabout a 4-to-1 difference between COPYing 10000 rows and INSERT'ing\n10000 rows (as one transaction). That's annoyingly high, but it's still\nway lower than what you're reporting ...\n\nI used the contents of table tenk1 in the regression database for test\ndata, and dumped it out with \"pg_dump -a\" with and without -d. I then\njust timed feeding the scripts to psql ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 18:06:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
{
"msg_contents": "Tom Lane wrote:\n\n\n> \"Michael Paesold\" <mpaesold@gmx.at> writes:\n> > To insert another 10562 rows takes about 12 minutes now!!!\n>\n> > As I said I wrote a function to insert the rows (PL/pgSQL). All values\nwere\n> > inserted inside a single function call; I always though that a function\ncall\n> > would be executed inside a transaction block. Experience says it does.\n>\n> Well, there's something fishy about your results. Using CVS tip I see\n> about a 4-to-1 difference between COPYing 10000 rows and INSERT'ing\n> 10000 rows (as one transaction). That's annoyingly high, but it's still\n> way lower than what you're reporting ...\n>\n> I used the contents of table tenk1 in the regression database for test\n> data, and dumped it out with \"pg_dump -a\" with and without -d. I then\n> just timed feeding the scripts to psql ...\n>\n> regards, tom lane\n\nI have further played around with the test here. I now realized that insert\nperformance is much better right after a vacuum full; vacuum analyze;\n\nI have this function bench_invoice(integer) that will insert $1 records into\ninvoice table;\nselect bench_invoice(10000) took about 10 minutes average. Now I executed\nthis with psql:\n\nvacuum full; vacuum analyze;\nselect bench_invoice(1000); select bench_invoice(1000); ... (10 times)\n\nIt seems performance is degrading with every insert!\nHere is the result (time in seconds in bench_invoice(), commit between\nselects just under a second)\n\n13, 24, 36, 47, 58, 70, 84, 94, 105, 117, ... (seconds per 1000 rows\ninserted)\n\nIsn't that odd?\nI have tried again. vacuum analyze alone (without full) is enough to lower\ntimes again. They will start again with 13 seconds.\n\nI did not delete from the table by now; the table now has about 50000 rows.\nThe disk is not swapping, there are no other users using postgres,\npostmaster takes about 100% cpu time during the whole operation. 
There are\nno special messages in the error log.\n\nCan you explain?\nShould I enable some debug logging? Disable some optimizer? Do something\nelse?\nThis is a development server, I have no problem with playing around.\n\nBest Regards,\nMichael Paesold\n\n\n\n",
"msg_date": "Thu, 26 Sep 2002 01:44:22 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
{
"msg_contents": "Update:\n\n> vacuum full; vacuum analyze;\n> select bench_invoice(1000); select bench_invoice(1000); ... (10 times)\n>\n> It seems performance is degrading with every insert!\n> Here is the result (time in seconds in bench_invoice(), commit between\n> selects just under a second)\n>\n> 13, 24, 36, 47, 58, 70, 84, 94, 105, 117, ... (seconds per 1000 rows\n> inserted)\n>\n> Isn't that odd?\n> I have tried again. vacuum analyze alone (without full) is enough to lower\n> times again. They will start again with 13 seconds.\n\nTested further what exactly will reset insert times to lowest possible:\n\nvacuum full; helps\nvacuum analyze; helps\nanalyze <tablename>; of table that I insert to doesn't help!\nanalyze <tablename>; of any table reference in foreign key constraints\ndoesn't help!\n\nOnly vacuum will reset the insert times to the lowest possible!\nWhat does the vacuum code do?? :-]\n\nRegards,\nMichael Paesold\n\n",
"msg_date": "Thu, 26 Sep 2002 01:58:17 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
{
"msg_contents": "> Only vacuum will reset the insert times to the lowest possible!\n> What does the vacuum code do?? :-]\n\nPlease see the manual and the extensive discussions on this point in the\narchives. This behaviour is well known -- though undesirable. It is an\neffect of the multi-version concurrency control system.\n\nGavin\n\n",
"msg_date": "Thu, 26 Sep 2002 10:08:07 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
{
"msg_contents": "\"Michael Paesold\" <mpaesold@gmx.at> writes:\n> Only vacuum will reset the insert times to the lowest possible!\n> What does the vacuum code do?? :-]\n\nIt removes dead tuples. Dead tuples can only arise from update or\ndelete operations ... so you have not been telling us the whole\ntruth. An insert-only test would not have this sort of behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 23:15:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
{
"msg_contents": "Tom Lane wrote:\n\n> \"Michael Paesold\" <mpaesold@gmx.at> writes:\n> > Only vacuum will reset the insert times to the lowest possible!\n> > What does the vacuum code do?? :-]\n> \n> It removes dead tuples. Dead tuples can only arise from update or\n> delete operations ... so you have not been telling us the whole\n> truth. An insert-only test would not have this sort of behavior.\n> \n> regards, tom lane\n\nSleeping is good. When I woke up this morning I had an idea of\nwhat is causing these problems; and you are right. I had used a\nself-written sequence system for the invoice_ids -- I can't use a\nsequence because sequence values can skip.\n\nSo inserting an invoice would also do an update on a single row\nof the cs_sequence table, which caused the problems.\n\nNow, with a normal sequence, it works like a charm.\n17 sec. for 10000 rows and 2-3 sec. for commit.\n\nBut why is performance degrading so much? After 10000 updates\non a row, the row seems to be unusable without vacuum! I hope\nthe currently discussed autovacuum daemon will help in such a\nsituation.\n\nSo I think I will have to look for another solution. It would be\nnice if one could lock a sequence! That would solve all my\ntroubles...\n\n<dreaming>\nBEGIN;\nLOCK SEQUENCE invoice_id_seq;\n-- now only this connection can get nextval(), all others will block\nINSERT INTO invoice VALUES (nextval('invoice_id_seq'), ...);\nINSERT INTO invoice VALUES (nextval('invoice_id_seq'), ...);\n...\nCOMMIT;\n-- now this only helps if sequences could be rolled back -- wake up!\n</dreaming>\n\nWhat could you recommend? Locking the table and selecting\nmax(invoice_id) wouldn't really be much faster, with max(invoice_id)\nnot using an index...\n\nBest Regards,\nMichael Paesold\n\n\n\n",
"msg_date": "Thu, 26 Sep 2002 12:28:37 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
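A common substitute for the "lock the sequence" idea is a single-row counter table updated in the same transaction as the insert -- essentially the cs_sequence scheme already described in this message. Sketched with hypothetical table and column names:

```sql
BEGIN;
-- the UPDATE takes a row lock, so concurrent allocators serialize here;
-- if the transaction rolls back, the increment rolls back too (no gaps)
UPDATE cs_sequence SET last_id = last_id + 1 WHERE name = 'invoice';
INSERT INTO invoice (invoice_id, amount)
    SELECT last_id, 42.0 FROM cs_sequence WHERE name = 'invoice';
COMMIT;
```

The cost is exactly the dead-tuple buildup discussed in this thread: every allocation leaves a dead version of the counter row, so the counter table needs frequent vacuuming.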
{
"msg_contents": "On 26 Sep 2002 at 12:28, Michael Paesold wrote:\n> But why is performance degrading so much? After 10000 updates\n> on a row, the row seems to be unusable without vacuum! I hope\n> the currently discussed autovacuum daemon will help in such a\n> situation.\n\nLet me know if it works. Use CVS BTW.. I am eager to hear any bug reports.. \nDidn't have a chance to test it the way I would have liked. Maybe this \nweekend..\n\n\nBye\n Shridhar\n\n--\nQOTD:\tThe forest may be quiet, but that doesn't mean\tthe snakes have gone away.\n\n",
"msg_date": "Thu, 26 Sep 2002 16:00:59 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Re: Insert Performance "
},
{
"msg_contents": "\"Michael Paesold\" <mpaesold@gmx.at> writes:\n> So inserting an invoice would also do an update on a single row\n> of the cs_sequence table, which cause the problems.\n\n> Now, with a normal sequence, it works like a charm.\n> 17 sec. for 10000 rows and 2-3 sec. for commit.\n\n> But why is performance so much degrading? After 10000 updates\n> on a row, the row seems to be unusable without vacuum!\n\nProbably, because the table contains 10000 dead tuples and one live one.\nThe system is scanning all 10001 tuples looking for the one to UPDATE.\n\nIn 7.3 it might help a little to create an index on the table. But\nreally this is one of the reasons that SEQUENCEs were invented ---\nyou have no alternative but to do frequent vacuums, if you repeatedly\nupdate the same row of a table. You might consider issuing a selective\n\"VACUUM cs_sequence\" command every so often (ideally every few hundred\nupdates).\n\n> I hope the currently discussed autovacuum daemon will help in such a\n> situation.\n\nProbably, if we can teach it to recognize that such frequent vacuums are\nneeded. In the meantime, cron is your friend ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Sep 2002 10:05:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Insert Performance "
},
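Concretely, the suggestion is a targeted vacuum of just the hot table, cheap enough to run very frequently (the index name below is invented):

```sql
VACUUM cs_sequence;   -- reclaims dead counter-row versions; run often
-- on 7.3, an index may let lookups skip the pile of dead tuples:
CREATE INDEX cs_sequence_name_idx ON cs_sequence (name);
```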
{
"msg_contents": ">Have you tried this with Oracle or similar commercial database?\n\n\nI have timed COPY/LOAD times for Postgresql/Mysql/Oracle/Db2 -\n\nthe rough comparison is:\n\nDb2 and Mysql fastest (Db2 slightly faster)\nOracle approx twice as slow as Db2\nPostgresql about 3.5-4 times slower than Db2\n\nHowever, Postgresql can sometimes create indexes faster than Mysql .... \nso that the total time of COPY + CREATE INDEX can be smaller for \nPostgresql than for Mysql.\n\nOracle and Db2 seemed similar to Postgresql with respect to CREATE INDEX.\n\n\nregards\n\nMark\n\n\n",
"msg_date": "Wed, 02 Oct 2002 19:37:04 +1200",
"msg_from": "Mark Kirkwood <markir@paradise.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Improving speed of copy"
},
{
"msg_contents": "On Fri, 20 Sep 2002, Shridhar Daithankar wrote:\n\n> On 20 Sep 2002 at 21:22, Shridhar Daithankar wrote:\n>\n> > Mysql takes 221 sec. v/s 1121 sec. for postgres. For postgresql,\n> > that is around 11.5K rows per second. Each tuple has 23 fields with\n> > fixed length of around 100 bytes\n\nYes, postgres is much slower than MySQL for doing bulk loading of data.\nThere's not much, short of hacking on the code, that can be done about\nthis.\n\n> One more issue is time taken for composite index creation. It's 4341\n> sec as opposed to 436 sec for mysql. These are three non-unique\n> character fields where the combination itself is not unique as well.\n\nSetting sort_mem appropriately makes a big difference here. I generally\nbump it up to 2-8 MB for everyone, and when I'm building a big index, I\nset it to 32 MB or so just for that session.\n\nBut make sure you don't set it so high you drive your system into\nswapping, or it will kill your performance. Remember also, that in\n7.2.x, postgres will actually use almost three times the value you give\nsort_mem (i.e., sort_mem of 32 MB will actually allocate close to 96 MB\nof memory for the sort).\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 7 Oct 2002 00:06:11 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Improving speed of copy"
}
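The per-session bump Curt describes needs no postgresql.conf edit: SET affects only the current backend. A sketch (table and index names invented):

```sql
SET sort_mem = 32768;    -- value is in KB, so 32 MB -- this session only
CREATE INDEX t_composite_idx ON bigtable (f1, f2, f3);
RESET sort_mem;          -- return to the server default
```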
] |
[
{
"msg_contents": "http://developer.novell.com/connections/091902.html\n\nI'm somewhat surprised no one else has mentioned this, as it's on Slashdot...\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 20 Sep 2002 12:40:45 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Novell releasing PostgreSQL for NetWare."
}
] |
[
{
"msg_contents": "It seems the 'numeric' and 'int8' tests are failing in CVS HEAD. The\nculprit seems to be the recent to_char() change made by Karel, but I\nhaven't verified that. The diff follows.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n*** ./expected/int8.out\tFri Jan 26 17:50:26 2001\n--- ./results/int8.out\tFri Sep 20 12:37:25 2002\n***************\n*** 245,256 ****\n \n SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n to_char_14 | to_char \n! ------------+-------------------\n! | 456\n! | 4567890123456789\n! | 123\n! | 4567890123456789\n! | -4567890123456789\n (5 rows)\n \n SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 9 9 9') FROM INT8_TBL;\n--- 245,256 ----\n \n SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n to_char_14 | to_char \n! ------------+--------------------\n! | 456.\n! | 4567890123456789.\n! | 123.\n! | 4567890123456789.\n! | -4567890123456789.\n (5 rows)\n \n SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 9 9 9') FROM INT8_TBL;\n\n======================================================================\n\n*** ./expected/numeric.out\tFri Apr 7 15:17:42 2000\n--- ./results/numeric.out\tFri Sep 20 12:37:36 2002\n***************\n*** 785,792 ****\n | +7799461.4119\n | +16397.038491\n | +93901.57763026\n! | -83028485\n! | +74881\n | -24926804.04504742\n (10 rows)\n \n--- 785,792 ----\n | +7799461.4119\n | +16397.038491\n | +93901.57763026\n! | -83028485.\n! | +74881.\n | -24926804.04504742\n (10 rows)\n \n***************\n*** 800,807 ****\n | 7799461.4119\n | 16397.038491\n | 93901.57763026\n! | <83028485>\n! | 74881\n | <24926804.04504742>\n (10 rows)\n \n--- 800,807 ----\n | 7799461.4119\n | 16397.038491\n | 93901.57763026\n! | <83028485.>\n! 
| 74881.\n | <24926804.04504742>\n (10 rows)\n \n***************\n*** 860,867 ****\n | 0000000007799461.4119\n | 0000000000016397.038491\n | 0000000000093901.57763026\n! | -0000000083028485\n! | 0000000000074881\n | -0000000024926804.04504742\n (10 rows)\n \n--- 860,867 ----\n | 0000000007799461.4119\n | 0000000000016397.038491\n | 0000000000093901.57763026\n! | -0000000083028485.\n! | 0000000000074881.\n | -0000000024926804.04504742\n (10 rows)\n \n***************\n*** 950,957 ****\n | 7799461.4119\n | 16397.038491\n | 93901.57763026\n! | -83028485\n! | 74881\n | -24926804.04504742\n (10 rows)\n \n--- 950,957 ----\n | 7799461.4119\n | 16397.038491\n | 93901.57763026\n! | -83028485.\n! | 74881.\n | -24926804.04504742\n (10 rows)\n \n***************\n*** 980,987 ****\n | + 7 7 9 9 4 6 1 . 4 1 1 9 \n | + 1 6 3 9 7 . 0 3 8 4 9 1 \n | + 9 3 9 0 1 . 5 7 7 6 3 0 2 6 \n! | - 8 3 0 2 8 4 8 5 \n! | + 7 4 8 8 1 \n | - 2 4 9 2 6 8 0 4 . 0 4 5 0 4 7 4 2 \n (10 rows)\n \n--- 980,987 ----\n | + 7 7 9 9 4 6 1 . 4 1 1 9 \n | + 1 6 3 9 7 . 0 3 8 4 9 1 \n | + 9 3 9 0 1 . 5 7 7 6 3 0 2 6 \n! | - 8 3 0 2 8 4 8 5 . \n! | + 7 4 8 8 1 . \n | - 2 4 9 2 6 8 0 4 . 0 4 5 0 4 7 4 2 \n (10 rows)\n \n***************\n*** 1025,1032 ****\n | 7799461.4119\n | 16397.038491\n | 93901.57763026\n! | -83028485\n! | 74881\n | -24926804.04504742\n (10 rows)\n \n--- 1025,1032 ----\n | 7799461.4119\n | 16397.038491\n | 93901.57763026\n! | -83028485.\n! | 74881.\n | -24926804.04504742\n (10 rows)\n \n\n======================================================================\n\n",
"msg_date": "20 Sep 2002 12:42:27 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": true,
"msg_subject": "regression test failure in CVS HEAD"
},
{
"msg_contents": "\nTom has fixed it. Sorry I didn't test earlier.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> It seems the 'numeric' and 'int8' tests are failing in CVS HEAD. The\n> culprit seems to be the recent to_char() change made by Karel, but I\n> haven't verified that. The diff follows.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n> \n> *** ./expected/int8.out\tFri Jan 26 17:50:26 2001\n> --- ./results/int8.out\tFri Sep 20 12:37:25 2002\n> ***************\n> *** 245,256 ****\n> \n> SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n> to_char_14 | to_char \n> ! ------------+-------------------\n> ! | 456\n> ! | 4567890123456789\n> ! | 123\n> ! | 4567890123456789\n> ! | -4567890123456789\n> (5 rows)\n> \n> SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 9 9 9') FROM INT8_TBL;\n> --- 245,256 ----\n> \n> SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n> to_char_14 | to_char \n> ! ------------+--------------------\n> ! | 456.\n> ! | 4567890123456789.\n> ! | 123.\n> ! | 4567890123456789.\n> ! | -4567890123456789.\n> (5 rows)\n> \n> SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 9 9 9') FROM INT8_TBL;\n> \n> ======================================================================\n> \n> *** ./expected/numeric.out\tFri Apr 7 15:17:42 2000\n> --- ./results/numeric.out\tFri Sep 20 12:37:36 2002\n> ***************\n> *** 785,792 ****\n> | +7799461.4119\n> | +16397.038491\n> | +93901.57763026\n> ! | -83028485\n> ! | +74881\n> | -24926804.04504742\n> (10 rows)\n> \n> --- 785,792 ----\n> | +7799461.4119\n> | +16397.038491\n> | +93901.57763026\n> ! | -83028485.\n> ! | +74881.\n> | -24926804.04504742\n> (10 rows)\n> \n> ***************\n> *** 800,807 ****\n> | 7799461.4119\n> | 16397.038491\n> | 93901.57763026\n> ! | <83028485>\n> ! 
| 74881\n> | <24926804.04504742>\n> (10 rows)\n> \n> --- 800,807 ----\n> | 7799461.4119\n> | 16397.038491\n> | 93901.57763026\n> ! | <83028485.>\n> ! | 74881.\n> | <24926804.04504742>\n> (10 rows)\n> \n> ***************\n> *** 860,867 ****\n> | 0000000007799461.4119\n> | 0000000000016397.038491\n> | 0000000000093901.57763026\n> ! | -0000000083028485\n> ! | 0000000000074881\n> | -0000000024926804.04504742\n> (10 rows)\n> \n> --- 860,867 ----\n> | 0000000007799461.4119\n> | 0000000000016397.038491\n> | 0000000000093901.57763026\n> ! | -0000000083028485.\n> ! | 0000000000074881.\n> | -0000000024926804.04504742\n> (10 rows)\n> \n> ***************\n> *** 950,957 ****\n> | 7799461.4119\n> | 16397.038491\n> | 93901.57763026\n> ! | -83028485\n> ! | 74881\n> | -24926804.04504742\n> (10 rows)\n> \n> --- 950,957 ----\n> | 7799461.4119\n> | 16397.038491\n> | 93901.57763026\n> ! | -83028485.\n> ! | 74881.\n> | -24926804.04504742\n> (10 rows)\n> \n> ***************\n> *** 980,987 ****\n> | + 7 7 9 9 4 6 1 . 4 1 1 9 \n> | + 1 6 3 9 7 . 0 3 8 4 9 1 \n> | + 9 3 9 0 1 . 5 7 7 6 3 0 2 6 \n> ! | - 8 3 0 2 8 4 8 5 \n> ! | + 7 4 8 8 1 \n> | - 2 4 9 2 6 8 0 4 . 0 4 5 0 4 7 4 2 \n> (10 rows)\n> \n> --- 980,987 ----\n> | + 7 7 9 9 4 6 1 . 4 1 1 9 \n> | + 1 6 3 9 7 . 0 3 8 4 9 1 \n> | + 9 3 9 0 1 . 5 7 7 6 3 0 2 6 \n> ! | - 8 3 0 2 8 4 8 5 . \n> ! | + 7 4 8 8 1 . \n> | - 2 4 9 2 6 8 0 4 . 0 4 5 0 4 7 4 2 \n> (10 rows)\n> \n> ***************\n> *** 1025,1032 ****\n> | 7799461.4119\n> | 16397.038491\n> | 93901.57763026\n> ! | -83028485\n> ! | 74881\n> | -24926804.04504742\n> (10 rows)\n> \n> --- 1025,1032 ----\n> | 7799461.4119\n> | 16397.038491\n> | 93901.57763026\n> ! | -83028485.\n> ! 
| 74881.\n> | -24926804.04504742\n> (10 rows)\n> \n> \n> ======================================================================\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 13:12:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regression test failure in CVS HEAD"
},
{
"msg_contents": "On Fri, Sep 20, 2002 at 01:12:17PM -0400, Bruce Momjian wrote:\n> \n> Tom has fixed it. Sorry I didn't test earlier.\n\n Thanks.\n\n> Neil Conway wrote:\n> > It seems the 'numeric' and 'int8' tests are failing in CVS HEAD. The\n> > culprit seems to be the recent to_char() change made by Karel, but I\n> > haven't verified that. The diff follows.\n\n You're right. Sorry.\n\n> > SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n> > to_char_14 | to_char \n> > ! ------------+--------------------\n> > ! | 456.\n> > ! | 4567890123456789.\n> > ! | 123.\n> > ! | 4567890123456789.\n> > ! | -4567890123456789.\n\n\n The results like this are right.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 23 Sep 2002 08:39:34 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: regression test failure in CVS HEAD"
}
] |
[
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > Why you object to that, and insist it must be an environment variable\n> > instead (if that is indeed what you're doing), I'm not sure....\n> \n> Well, what I was hoping for, but no longer expect, is that features\n> (store xlog in another area) can be implemented and applied without\n> rejection by the new gatekeepers. It is a feature that we do not have\n> now, and could have implemented for 7.3.\n> \n> No need to rehash the points which were not understood in the\n> \"discussion\".\n> \n> I have no fundamental objection to extending and replacing\n> implementation features as positive contributions to development. I do\n> have trouble with folks rejecting features without understanding the\n> issues, and sorry, there was a strong thread of \"why would anyone want\n> to put storage on another device\" to the discussion.\n\nI believe the discussion was \"Why not use symlinks?\" I think we have\naddressed that issue with the GUC variable solution. Certainly we all\nrecognize the value of moving storage to another drive. It is mentioned\nin the SGML docs and other places.\n\nIn fact, I tried to open a dialog with you on this issue several times,\nbut when I got no reply, I had to remove PGXLOG. If we had continued\ndiscussion, we might have come up with the GUC compromise.\n\n> There has been a fundamental shift in the quality and civility of\n> discussions over issues over the last couple of years, and I was naively\n> hoping that we could work through that on this topic. Not happening, and\n> not likely too.\n\nMy impression is that things have been getting better in the past six\nmonths. There is more open discussion, and more voting, meaning one\ngroup isn't making all the decisions.\n\nI have worked to limit the sway of any \"new gatekeepers\". People are\nencouraged to vote, and we normally accept that outcome. I think\ngatekeepers should sway only in the force of their arguments. 
Do you\nfeel this was not followed on the PGXLOG case, or is the concept in\nerror?\n\nI certainly have been frustrated when my features were not accepted, but\nI have to accept the vote of the group.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 20 Sep 2002 14:07:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Fri, 20 Sep 2002, Thomas Lockhart wrote:\n\n> Well, what I was hoping for, but no longer expect, is that features\n> (store xlog in another area) can be implemented and applied without\n> rejection by the new gatekeepers.\n\nIt can be, and very simply. So long as you do it in the way which\nis not error-prone, rather than the way which is.\n\n> I have no fundamental objection to extending and replacing\n> implementation features as positive contributions to development. I do\n> have trouble with folks rejecting features without understanding the\n> issues, and sorry, there was a strong thread of \"why would anyone want\n> to put storage on another device\" to the discussion.\n\nI doubt it. There was perhaps a strong thread of \"windows users\nare loosers,\" but certainly Unix folks put storage on another device\nall the time, using symlinks. This was mentioned many, many times.\n\n> There has been a fundamental shift in the quality and civility of\n> discussions over issues over the last couple of years, and I was naively\n> hoping that we could work through that on this topic. Not happening, and\n> not likely too.\n\nWell, when you're going to bring in Windows in a pretty heavily\nopen-source-oriented group, no, it's not likely you're going to bring\neveryone together. (This is not a value judgement, it's just a, \"Hello,\nthis is the usenet (or something similar),\" observation.\n\nThat said, again, I don't think anybody was objecting to what you\nwanted to do. It was simply a bad implementation that I, and probably\nall the others, were objecting to. So please don't go on like we didn't\nlike the concept.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Sat, 21 Sep 2002 13:16:42 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "\nOn Fri, 20 Sep 2002, Bruce Momjian wrote:\n\n> In fact, I tried to open a dialog with you on this issue several times,\n> but when I got no reply, I had to remove PGXLOG. If we had continued\n> discussion, we might have come up with the GUC compromise.\n\nYa know, I'm sitting back and reading this, and other threads, and\nassimilating what is being bantered about, and start to think that its\ntime to cut back on the gatekeepers ...\n\nThomas implemented an option that he felt was useful, and that doesn't\nbreak anything inside of the code ... he provided 2 methods of being able\nto move the xlog's to another location (through command line and\nenvironment variable, both of which are standard methods for doing such in\nserver software) ... but, because a small number of ppl \"voted\" that it\nshould go away, it went away ...\n\nYou don't :vote: on stuff like this ... if you don't like it, you just\ndon't use it ... nobody is forcing you to do so. If you think there are\ngoing to be idiots out here that aren't going to use it right, then you\ndocument it appropriately, with *strong* wording against using it ...\n\n\n",
"msg_date": "Sun, 22 Sep 2002 23:31:13 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> \n> On Fri, 20 Sep 2002, Bruce Momjian wrote:\n> \n> > In fact, I tried to open a dialog with you on this issue several times,\n> > but when I got no reply, I had to remove PGXLOG. If we had continued\n> > discussion, we might have come up with the GUC compromise.\n> \n> Ya know, I'm sitting back and reading this, and other threads, and\n> assimilating what is being bantered about, and start to think that its\n> time to cut back on the gatekeepers ...\n> \n> Thomas implemented an option that he felt was useful, and that doesn't\n> break anything inside of the code ... he provided 2 methods of being able\n> to move the xlog's to another location (through command line and\n> environment variable, both of which are standard methods for doing such in\n> server software) ... but, because a small number of ppl \"voted\" that it\n> should go away, it went away ...\n> \n> You don't :vote: on stuff like this ... if you don't like it, you just\n> don't use it ... nobody is forcing you to do so. If you think there are\n> going to be idiots out here that aren't going to use it right, then you\n> document it appropriately, with *strong* wording against using it ...\n\nI understand your thought of reevaluating how we decide things.\n\nHowever, if you don't accept voting as a valid way to determine if a\npatch is acceptible, what method do you suggest? I don't think we want\nto go down the road of saying that you can't vote \"no\" on a feature\naddition. \n\nWe just rejected a patch today on LIMIT with UPDATE/DELETE via an\ninformal vote, and I think it was a valid rejection.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 22:35:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Sun, 22 Sep 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> >\n> > On Fri, 20 Sep 2002, Bruce Momjian wrote:\n> >\n> > > In fact, I tried to open a dialog with you on this issue several times,\n> > > but when I got no reply, I had to remove PGXLOG. If we had continued\n> > > discussion, we might have come up with the GUC compromise.\n> >\n> > Ya know, I'm sitting back and reading this, and other threads, and\n> > assimilating what is being bantered about, and start to think that its\n> > time to cut back on the gatekeepers ...\n> >\n> > Thomas implemented an option that he felt was useful, and that doesn't\n> > break anything inside of the code ... he provided 2 methods of being able\n> > to move the xlog's to another location (through command line and\n> > environment variable, both of which are standard methods for doing such in\n> > server software) ... but, because a small number of ppl \"voted\" that it\n> > should go away, it went away ...\n> >\n> > You don't :vote: on stuff like this ... if you don't like it, you just\n> > don't use it ... nobody is forcing you to do so. If you think there are\n> > going to be idiots out here that aren't going to use it right, then you\n> > document it appropriately, with *strong* wording against using it ...\n>\n> I understand your thought of reevaluating how we decide things.\n>\n> However, if you don't accept voting as a valid way to determine if a\n> patch is acceptible, what method do you suggest? I don't think we want\n> to go down the road of saying that you can't vote \"no\" on a feature\n> addition.\n>\n> We just rejected a patch today on LIMIT with UPDATE/DELETE via an\n> informal vote, and I think it was a valid rejection.\n\nIts not the concept of 'the vote', its what is being voted on that I have\na major problem with ... for instance, with the above LIMIT patch ... you\nare talking about functionality ... 
I haven't seen that thread yet, so am\nnot sure why it was rejected, but did the submitter agree with the\nreasons? Assuming he did, is this something he's going to re-submit later\nafter makign fixes?\n\nSee, that is one thing I have enjoyed over the years ... someone submit's\na patch and a few ppl jump on top of it, point out a few problems iwth it\nand the submitter re-submits with appropriate fixes ...\n\nActually, I just went to my -patches folder and read the thread ... first\noff, the 'informal vote' appears to have consisted of Tom Lane and Alvaro\nHerrera, which isn't a vote ... second of all, in that case, the\nimplementation of such, I believe, would go against SQL specs, no? Second\nof all, doesn't it just purely go against the point of a RDBMS if there\nare multiple rows in a table with nothing to identify them except for the\nctid/oid? *scratch head*\n\nMy point is, the use of an ENVIRONMENT variable for pointing ot a\ndirectory is nowhere near on the scale of implementing an SQL statement\n(or extension) that serves to take us steps backwards against the progress\nwe've made to improve our compliance ...\n\none has been removed due to personal preferences and nothign else ... the\nother rejected as it will break (unless I've misread things?) standard,\naccepted procedures ...\n\n\n\n",
"msg_date": "Mon, 23 Sep 2002 00:09:47 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Ya know, I'm sitting back and reading this, and other threads, and\n> assimilating what is being bantered about, and start to think that\n> its time to cut back on the gatekeepers ...\n\nOn the contrary, the quality of code accepted into a DBMS is really\nimportant. If you disagree with the definition of \"code quality\" that\nsome developers are employing, then we can discuss that -- but I think\nthat as the project matures, we should be more picky about the\nfeatures we implement, not less.\n\n> Thomas implemented an option that he felt was useful, and that\n> doesn't break anything inside of the code\n\nThe problem with this line of thinking is that \"it doesn't break\nstuff\" is not sufficient reason for adding a new feature. The burden\nof proof is on the person implementing the new feature.\n\n> ... he provided 2 methods of being able to move the xlog's to\n> another location\n\nYes, but why do we need 2 different ways to do exactly the same thing?\n\n> but, because a small number of ppl \"voted\" that it should go away,\n> it went away ...\n\nThey didn't just vote, they provided reasons why they thought the\nfeature was brain-damaged -- reasons which have not be persuasively\nrefuted, IMHO. If you'd like to see this feature in the code, might I\nsuggest that you spend less time complaining about \"gate keepers\"\n(hint: it's called code review), and more time explaining exactly why\nthe feature is worth having?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n",
"msg_date": "22 Sep 2002 23:14:22 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> > However, if you don't accept voting as a valid way to determine if a\n> > patch is acceptible, what method do you suggest? I don't think we want\n> > to go down the road of saying that you can't vote \"no\" on a feature\n> > addition.\n> >\n> > We just rejected a patch today on LIMIT with UPDATE/DELETE via an\n> > informal vote, and I think it was a valid rejection.\n> \n> Its not the concept of 'the vote', its what is being voted on that I have\n> a major problem with ... for instance, with the above LIMIT patch ... you\n> are talking about functionality ... I haven't seen that thread yet, so am\n> not sure why it was rejected, but did the submitter agree with the\n> reasons? Assuming he did, is this something he's going to re-submit later\n> after makign fixes?\n> \n> See, that is one thing I have enjoyed over the years ... someone submit's\n> a patch and a few ppl jump on top of it, point out a few problems iwth it\n> and the submitter re-submits with appropriate fixes ...\n> \n> Actually, I just went to my -patches folder and read the thread ... first\n> off, the 'informal vote' appears to have consisted of Tom Lane and Alvaro\n> Herrera, which isn't a vote ... second of all, in that case, the\n> implementation of such, I believe, would go against SQL specs, no? Second\n> of all, doesn't it just purely go against the point of a RDBMS if there\n> are multiple rows in a table with nothing to identify them except for the\n> ctid/oid? *scratch head*\n> \n> My point is, the use of an ENVIRONMENT variable for pointing ot a\n> directory is nowhere near on the scale of implementing an SQL statement\n> (or extension) that serves to take us steps backwards against the progress\n> we've made to improve our compliance ...\n\nThe issue isn't really compliance because LIMIT in SELECT isn't\ncompliant either, so adding it to UPDATE/DELETE is just as non-standard\nas in SELECT. 
The real question we vote on, I think, is, \"Should this\nfeature be given to our users? What value does it provide, and what\nconfusion does it cause? Does the standard suggest anything?\" \n\nI think that is the usual criteria. For LIMIT on UPDATE/DELETE, it\nprovides little value, and adds confusion, i.e. an extra clause in those two\ncommands that really doesn't add any functionality.\n\nNow, for the PG_XLOG environment variable/-X flag, it is almost the same\nresult, i.e. it doesn't add much value (use a symlink) and does add\nconfusion (oops, I forgot to set it).\n\nThe idea of having the pg_xlog location in GUC I think was a good\ncompromise, but too late to be discovered. If the patch author had\ncontinued discussion at the time, I think it would be in 7.3.\n\n> one has been removed due to personal preferences and nothign else ... the\n> other rejected as it will break (unless I've misread things?) standard,\n> accepted procedures ...\n\nPG_XLOG was remove for a few reasons:\n\n\tIt didn't add much functionality\n\tIt was ugly to add -X to all those commands\n\tIt was error-prone\n\nAgain, the same criteria. Are you saying the criteria I mentioned above\nis wrong?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 23:21:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> You don't :vote: on stuff like this ...\n\nWhy not, exactly?\n\nI wasn't aware that any of core had a non-vetoable right to apply\nany patch we liked regardless of the number and strength of the\nobjections. AFAIK, we resolve differences of opinion by discussion,\nfollowed by a vote if the discussion doesn't produce a consensus.\n\nIt was pretty clear that Thomas' original patch lost the vote, or\nwould have lost if we'd bothered to hold a formal vote. I don't\nsee anyone arguing against the notion of making XLOG location more\neasily configurable --- it was just the notion of making it depend\non environment variables that scared people.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Sep 2002 23:21:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile? "
},
{
"msg_contents": "On Sun, 22 Sep 2002, Tom Lane wrote:\n> \n> It was pretty clear that Thomas' original patch lost the vote, or\n> would have lost if we'd bothered to hold a formal vote.\n\nHasn't there just been a formal vote on this?\n\n> I don't\n> see anyone arguing against the notion of making XLOG location more\n> easily configurable --- it was just the notion of making it depend\n> on environment variables that scared people.\n\nAnd it's obvious it was centred on the use of an environment variable from the\nsubject line, it's still got PGXLOG in capitals in it.\n\n\n-- \nNigel J. Andrews\n\n",
"msg_date": "Mon, 23 Sep 2002 10:04:55 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "\"Nigel J. Andrews\" wrote:\n<snip>\n> \n> And it's obvious it was centred on the use of an environment variable from the\n> subject line, it's still got PGXLOG in capitals in it.\n\nActually, to be really precise, my original email asked for an\nenvironment variable. But only because I'd thought about it from the\npoint of view of us already having a PGDATA environment variable and\nhadn't considered alternatives nor seen Thomas's stuff.\n\nPersonally, I don't care if it's a -X, or an environment variable, or a\nGUC option. I'm just extremely positive that we should have an\nalternative to using symlinks for this (they don't work properly on NT).\n\nAfter following the discussion for a while I'm inclined to think that we\nshould indeed have the GUC version, and *maybe* have the environment\nvariable or the -X.\n\nThe only thing bad about the -X is it's ability to trash your data if\nyou forget it or get it wrong, and it's really easy to do in a decent\nscale environment with many servers. Marc has already suggested we\nmight as well have something about a particular pg_xlog directory that\nPostgreSQL can use to check it's validity upon startup, so that could\nsolve the data damaging issue.\n\nSo, this thread has migrated away from a PGXLOG environment variable to\ndiscuss PGXLOG in general (good or bad) and also has implementation\npoints too (about which people have been arguing).\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> --\n> Nigel J. Andrews\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 23 Sep 2002 19:27:39 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Sun, 22 Sep 2002, Marc G. Fournier wrote:\n\n> Thomas implemented an option that he felt was useful, and that doesn't\n> break anything inside of the code ... he provided 2 methods of being able\n> to move the xlog's to another location (through command line and\n> environment variable, both of which are standard methods for doing such in\n> server software) ... but, because a small number of ppl \"voted\" that it\n> should go away, it went away ...\n\nThe option as he implemented it did make the system more fragile.\nYou can't back up an environment variable, it's separated from other\nconfiguration information, and it's more easily changed without\nrealizing it. We should be building systems that are as resilient to\nhuman failure as possible, not opening up more possibilities of failure.\n\nWe already have a place for configuration information: the configuration\nfile. If I created a patch to move a variable out of the configuration\nfile and make it an environment variable instead, everybody would\n(rightly) think I was nuts, and the patch certainly would not be\naccepted. So why should the situation be different for new configuration\ninformation?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 23 Sep 2002 21:24:25 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > You don't :vote: on stuff like this ...\n> \n> Why not, exactly?\n> \n> I wasn't aware that any of core had a non-vetoable right to apply\n> any patch we liked regardless of the number and strength of the\n> objections. AFAIK, we resolve differences of opinion by discussion,\n> followed by a vote if the discussion doesn't produce a consensus.\n> \n> It was pretty clear that Thomas' original patch lost the vote, or\n> would have lost if we'd bothered to hold a formal vote. I don't\n> see anyone arguing against the notion of making XLOG location more\n> easily configurable --- it was just the notion of making it depend\n> on environment variables that scared people.\n\nAnd AFAICS it is scary only because screwing that up will simply corrupt\nyour database. Thus, a simple random number (okay, and a timestamp of\ninitdb) in two files, one in $PGDATA and one in $PGXLOG would be a\ntotally sufficient safety mechanism to prevent starting with the wrong\nXLOG directory.\n\nCan we get that instead of ripping out anything?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Tue, 24 Sep 2002 14:49:41 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Jan Wieck wrote:\n> Tom Lane wrote:\n> > \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > You don't :vote: on stuff like this ...\n> > \n> > Why not, exactly?\n> > \n> > I wasn't aware that any of core had a non-vetoable right to apply\n> > any patch we liked regardless of the number and strength of the\n> > objections. AFAIK, we resolve differences of opinion by discussion,\n> > followed by a vote if the discussion doesn't produce a consensus.\n> > \n> > It was pretty clear that Thomas' original patch lost the vote, or\n> > would have lost if we'd bothered to hold a formal vote. I don't\n> > see anyone arguing against the notion of making XLOG location more\n> > easily configurable --- it was just the notion of making it depend\n> > on environment variables that scared people.\n> \n> And AFAICS it is scary only because screwing that up will simply corrupt\n> your database. Thus, a simple random number (okay, and a timestamp of\n> initdb) in two files, one in $PGDATA and one in $PGXLOG would be a\n> totally sufficient safety mechanism to prevent starting with the wrong\n> XLOG directory.\n> \n> Can we get that instead of ripping out anything?\n\nWell, the problem is that Thomas stopped communicating, perhaps because\nsome were too aggressive in criticizing the patch. Once that happened,\nthere was no way to come up with a solution, and that's why it was\nremoved.\n\nAlso, we are in the process of removing args and moving them to GUC so I\ndon't see why we would make WAL an exception. It isn't changed that\noften.\n\nFYI, I am about to do the same removal for the SSL stuff too. Bear is\nno longer responding. It is on the open items list now. If I can't\nfind someone who can review the good/bad parts of our SSL changes, it\nmight all be yanked out.\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 
3 O P E N I T E M S\n\n\nCurrent at ftp://momjian.postgresql.org/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSchema handling - ready? interfaces? client apps?\nDrop column handling - ready for all clients, apps?\nFix BeOS, QNX4 ports\nFix AIX large file compile failure of 2002-09-11 (Andreas)\nGet bison upgrade on postgresql.org for ecpg only (Marc)\nFix vacuum btree bug (Tom)\nFix client apps for autocommit = off\nFix clusterdb to be schema-aware\nChange log_min_error_statement to be off by default (Gavin)\nFix return tuple counts/oid/tag for rules\nLoading 7.2 pg_dumps\n\tfunctions no longer public executable\n\tlanguages no longer public usable\nAdd schema dump option to pg_dump\nMake SET not start a transaction with autocommit off, document it\nAdd GRANT EXECUTE to all /contrib functions\nRevert or fix SSL change\n\nOn Going\n--------\nSecurity audit\n\nDocumentation Changes\n---------------------\nDocument need to add permissions to loaded functions and languages\nMove documation to gborg for moved projects\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 24 Sep 2002 14:56:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> And AFAICS it is scary only because screwing that up will simply corrupt\n> your database. Thus, a simple random number (okay, and a timestamp of\n> initdb) in two files, one in $PGDATA and one in $PGXLOG would be a\n> totally sufficient safety mechanism to prevent starting with the wrong\n> XLOG directory.\n\n> Can we get that instead of ripping out anything?\n\nSure, if someone wants to do that it'd go a long way towards addressing\nthe safety issues.\n\nBut given that, I think a GUC variable is the most appropriate control\nmechanism; as someone else pointed out, we've worked long and hard to\nmake GUC useful and feature-ful, so it seems silly to invent new\nconfiguration items that bypass GUC. The safety concerns were the main\nreason I liked a symlink or separate file, but if we attack the safety\nproblem directly then we might as well go for convenience in how you\nactually set the configuration value.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Sep 2002 14:58:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile? "
},
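Under the GUC approach Tom favors here, relocating the transaction log would be a one-line entry in postgresql.conf. A purely illustrative sketch (no such variable existed when this thread was written; the name and value are assumptions, not a real PostgreSQL setting of that era):

```
# postgresql.conf -- hypothetical sketch; the variable name is illustrative only
#pg_xlog_directory = ''                     # empty: use $PGDATA/pg_xlog as today
pg_xlog_directory = '/mnt/fastdisk/pg_xlog'
```

Combined with Jan's signature-file interlock below, a typo here would make the postmaster refuse to start rather than silently replay the wrong log.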
{
"msg_contents": "On Tue, 24 Sep 2002, Jan Wieck wrote:\n\n> And AFAICS it is scary only because screwing that up will simply corrupt\n> your database. Thus, a simple random number (okay, and a timestamp of\n> initdb) in two files, one in $PGDATA and one in $PGXLOG would be a\n> totally sufficient safety mechanism to prevent starting with the wrong\n> XLOG directory.\n\nBut still, why set up a situation where your database might not\nstart? Why not set it up so that if you get just *one* environment\nor command-line variable right, you can't set another inconsistently\nand screw up your start anyway? Why store configuration information\noutside of the database data directory in a form that's not easily\nbacked up, and not easily found by other utilities?\n\nIt's almost like people *don't* want to put this in the config file\nor something....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 25 Sep 2002 10:07:43 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Hi everyone,\n\nIn order to clarify things, how about we do a formal vote with specific\ndetails like this:\n\n*******\n\nAre you for...\n\n- pg_xlog directory changeable at all, not using symlinks?\n\nYes/No\n\n- a PGXLOG environment variable to do this?\n\nYes/No\n\n- a -X command line option to do this?\n\nYes/No\n\n- a GUC (postgresql.conf) option to do this?\n\nYes/No\n\n- altering the format of the pg_xlog directory so that it\ncan't be used with the wrong database instance?\n\nYes/No\n\n*******\n\nDoes this seem reasonable?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 25 Sep 2002 12:14:40 +1000",
"msg_from": "jra@dp.samba.org",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Wed, 25 Sep 2002, Curt Sampson wrote:\n\n> On Tue, 24 Sep 2002, Jan Wieck wrote:\n> \n> > And AFAICS it is scary only because screwing that up will simply corrupt\n> > your database. Thus, a simple random number (okay, and a timestamp of\n> > initdb) in two files, one in $PGDATA and one in $PGXLOG would be a\n> > totally sufficient safety mechanism to prevent starting with the wrong\n> > XLOG directory.\n> \n> But still, why set up a situation where your database might not\n> start? Why not set it up so that if you get just *one* environment\n> or command-line variable right, you can't set another inconsistently\n> and screw up your start anyway? Why store configuration information\n> outside of the database data directory in a form that's not easily\n> backed up, and not easily found by other utilities?\n> \n> It's almost like people *don't* want to put this in the config file\n> or something....\n\nCurt, did you see my post about this earlier? I'll repeat it now, just in \ncase anyone else missed it.\n\nProblem: \n- People need to move the pg_xlog directory around on heavily \nloaded systems to improve performance\n\nConstraints: \n- Windows can't reliably use links to do this. \n- If the pg_xlog directory is moved wrong or referenced incorrectly, data \ncorruption may occur. This makes using a switch or environmental var \ndangerous\n\nI consider using a GUC in the postgresql.conf file to be better than any \nother option listed so far, but it is still a dangerous place for it to \nbe. 
\n\nSo, the way I think that would work best would be:\n\nIf there's a directory called pg_xlog in the $PGDATA directory, then use \nthat.\n\nIf there's a file called pg_xlog in the $PGDATA directory, then it will \ncontain the path to the real pg_xlog directory.\n\nIf you want to move the pg_xlog directory, you call a custom script \ncalled \"mvpgxlog\" or something like it that:\n\n1: Checks to make sure the database is shut down\n2: Checks to make sure the destination path has enough free space for the \nxlogs\n3: If these are both true (and whatever logic we need here for safety) \nthen copy the current pg_xlog directory contents to the new pg_xlog (even \nif we are already using an alternative location, this should work), set \nproper permissions, rename / move the pg_xlog file / directory, then \nedit/create the $PGDATA/pg_xlog file to point to the new directory.\n\nThis method has several advantages, and no real disadvantages I can think \nof. The advantages are:\n\n- It makes it easy to move the pg_xlog directory.\n- It works equally well for Windows and Unix.\n- Gets rid of another GUC setting people can scram their database with.\n- It is easy to back up your pg_xlog setting.\n- If painted green it should not rust.\n\nHow's that sound for a general theory of operation?\n\n",
"msg_date": "Wed, 25 Sep 2002 08:52:13 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
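scott.marlowe's three-step procedure can be sketched as a small shell script. Everything here is an assumption drawn from the proposal itself: the script name mvpgxlog and the pointer-file convention are illustrative, not an existing PostgreSQL tool, and the demo runs against throwaway directories rather than a real cluster:

```shell
#!/bin/sh
# Hypothetical sketch of the proposed "mvpgxlog" helper, demonstrated on a
# throwaway directory tree (no real PostgreSQL installation is touched).
set -e
PGDATA=$(mktemp -d)
DEST=$(mktemp -d)/xlog

# Stand-in for a shut-down cluster with one xlog segment.
mkdir -p "$PGDATA/pg_xlog"
echo "segment-data" > "$PGDATA/pg_xlog/0000000000000000"

# 1: refuse to run while the database is up (pid file present).
[ ! -f "$PGDATA/postmaster.pid" ] || { echo "database is running" >&2; exit 1; }

# 2: make sure the destination filesystem can hold the current xlogs.
need=$(du -sk "$PGDATA/pg_xlog" | awk '{print $1}')
free=$(df -kP "$(dirname "$DEST")" | awk 'NR==2 {print $4}')
[ "$free" -ge "$need" ] || { echo "not enough free space" >&2; exit 1; }

# 3: copy the segments, fix permissions, then replace the old directory
#    with a plain file that names the real location.
mkdir -p "$DEST"
cp -p "$PGDATA"/pg_xlog/* "$DEST"/
chmod 700 "$DEST"
rm -r "$PGDATA/pg_xlog"
printf '%s\n' "$DEST" > "$PGDATA/pg_xlog"

echo "pg_xlog now lives at: $(cat "$PGDATA/pg_xlog")"
```

The property being argued for is that, much like a symlink, a plain `ls -l $PGDATA` immediately shows that pg_xlog has been moved, and the setting travels with the data directory when it is backed up.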
{
"msg_contents": "\nI don't see the gain of having a file called pg_xlog vs. using GUC.\n\n---------------------------------------------------------------------------\n\nscott.marlowe wrote:\n> On Wed, 25 Sep 2002, Curt Sampson wrote:\n> \n> > On Tue, 24 Sep 2002, Jan Wieck wrote:\n> > \n> > > And AFAICS it is scary only because screwing that up will simply corrupt\n> > > your database. Thus, a simple random number (okay, and a timestamp of\n> > > initdb) in two files, one in $PGDATA and one in $PGXLOG would be a\n> > > totally sufficient safety mechanism to prevent starting with the wrong\n> > > XLOG directory.\n> > \n> > But still, why set up a situation where your database might not\n> > start? Why not set it up so that if you get just *one* environment\n> > or command-line variable right, you can't set another inconsistently\n> > and screw up your start anyway? Why store configuration information\n> > outside of the database data directory in a form that's not easily\n> > backed up, and not easily found by other utilities?\n> > \n> > It's almost like people *don't* want to put this in the config file\n> > or something....\n> \n> Curt, did you see my post about this earlier? I'll repeat it now, just in \n> case anyone else missed it.\n> \n> Problem: \n> - People need to move the pg_xlog directory around on heavily \n> loaded systems to improve performance\n> \n> Constraints: \n> - Windows can't reliably use links to do this. \n> - If the pg_xlog directory is moved wrong or referenced incorrectly, data \n> corruption may occur. This makes using a switch or environmental var \n> dangerous\n> \n> I consider using a GUC in the postgresql.conf file to be better than any \n> other option listed so far, but it is still a dangerous place for it to \n> be. 
\n> \n> So, the way I think that would work best would be:\n> \n> If there's a directory called pg_xlog in the $PGDATA directory, then use \n> that.\n> \n> If there's a file called pg_xlog in the $PGDATA directory, then it will \n> contain the path to the real pg_xlog directory.\n> \n> If you want to move the pg_xlog directory, you called a custom script \n> called \"mvpgxlog\" or something like it that:\n> \n> 1: Checks to make sure the database is shut down\n> 2: Checks to make sure the destination path has enough free space for the \n> xlogs\n> 3: If these are both true (and whatever logic we need here for safety) \n> then copy the current pg_xlog directory contents to the new pg_xlog (even \n> if we are already using an alternative location, this should work), set \n> proper permissions, rename / move the pg_xlog file / directorry, then \n> edit/create the $PGDATA/pg_xlog file to point to the new directory.\n> \n> This method has several advantages, and no real disadvantages I can think \n> of. The advantages are:\n> \n> - It makes it easy to move the pg_xlog directory.\n> - It works equally well for Windows and Unix.\n> - Gets rid of another GUC setting people can scram their database with.\n> - It is easy to backup your pg_xlog setting.\n> - If painted green it should not rust.\n> \n> How's that sound for a general theory of operation?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 25 Sep 2002 11:00:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "I do.\n\nThe problem is that if you change the location of pg_xlog and do one thing \nwrong, poof, your database is now corrupt. Like Tom said earlier, imagine \na command like switch called \"please-dont-scram-my-database\" and if you \never forgot it then your data is gone.\n\nIs it better to move such a switch into the postgresql.conf file? Imagine \na GUC setting called \"butter-and-bread\" that when set would delete all \nyour data. That's what the equivalent here is, if you make a single \nmistake.\n\nHaving a FILE called pg_xlog isn't the fix here, it's the result of the \nfix, which is to take all the steps of moving the pg_xlog directory and \nput them into one script file the user doesn't need to understand to do it \nright. I.e. idiot proof the system as much as possible.\n\nWe could do it much simpler, if everyone was on Unix. We could just write \na script that would do everything the same but instead of using a file \ncalled pg_xlog, would make a link. the reason for the file is to make it \nmore transportable to brain damaged OSes like Windows.\n\nDo you really think the GUC variable is a safe way of referencing the \npg_xlog directory all by itself? I can see MANY posts to the lists that \nwill go like this:\n\nI just installed Postgresql 7.4 and it's been working fine. I needed more \nspeed, so I looked up the GUC for the pg_xlog and set it to /vol/vol3/ on \nmy machine. Now my database won't come up. I set it back but it still \nwon't come up. What can I do to fix that?\n\nHere's the email we'd get from my solution:\n\nHey, I just tried to move my pg_xlog directory with the mvpgxlog script, \nand it gave an error of \"permission denied on destination\". What does that \nmean?\n\nThe choice is yours.\n\n On Wed, 25 Sep 2002, Bruce Momjian wrote:\n\n> \n> I don't see the gain of having a file called pg_xlog vs. 
using GUC.\n> \n> ---------------------------------------------------------------------------\n> \n> scott.marlowe wrote:\n> > On Wed, 25 Sep 2002, Curt Sampson wrote:\n> > \n> > > On Tue, 24 Sep 2002, Jan Wieck wrote:\n> > > \n> > > > And AFAICS it is scary only because screwing that up will simply corrupt\n> > > > your database. Thus, a simple random number (okay, and a timestamp of\n> > > > initdb) in two files, one in $PGDATA and one in $PGXLOG would be a\n> > > > totally sufficient safety mechanism to prevent starting with the wrong\n> > > > XLOG directory.\n> > > \n> > > But still, why set up a situation where your database might not\n> > > start? Why not set it up so that if you get just *one* environment\n> > > or command-line variable right, you can't set another inconsistently\n> > > and screw up your start anyway? Why store configuration information\n> > > outside of the database data directory in a form that's not easily\n> > > backed up, and not easily found by other utilities?\n> > > \n> > > It's almost like people *don't* want to put this in the config file\n> > > or something....\n> > \n> > Curt, did you see my post about this earlier? I'll repeat it now, just in \n> > case anyone else missed it.\n> > \n> > Problem: \n> > - People need to move the pg_xlog directory around on heavily \n> > loaded systems to improve performance\n> > \n> > Constraints: \n> > - Windows can't reliably use links to do this. \n> > - If the pg_xlog directory is moved wrong or referenced incorrectly, data \n> > corruption may occur. This makes using a switch or environmental var \n> > dangerous\n> > \n> > I consider using a GUC in the postgresql.conf file to be better than any \n> > other option listed so far, but it is still a dangerous place for it to \n> > be. 
\n> > \n> > So, the way I think that would work best would be:\n> > \n> > If there's a directory called pg_xlog in the $PGDATA directory, then use \n> > that.\n> > \n> > If there's a file called pg_xlog in the $PGDATA directory, then it will \n> > contain the path to the real pg_xlog directory.\n> > \n> > If you want to move the pg_xlog directory, you called a custom script \n> > called \"mvpgxlog\" or something like it that:\n> > \n> > 1: Checks to make sure the database is shut down\n> > 2: Checks to make sure the destination path has enough free space for the \n> > xlogs\n> > 3: If these are both true (and whatever logic we need here for safety) \n> > then copy the current pg_xlog directory contents to the new pg_xlog (even \n> > if we are already using an alternative location, this should work), set \n> > proper permissions, rename / move the pg_xlog file / directorry, then \n> > edit/create the $PGDATA/pg_xlog file to point to the new directory.\n> > \n> > This method has several advantages, and no real disadvantages I can think \n> > of. The advantages are:\n> > \n> > - It makes it easy to move the pg_xlog directory.\n> > - It works equally well for Windows and Unix.\n> > - Gets rid of another GUC setting people can scram their database with.\n> > - It is easy to backup your pg_xlog setting.\n> > - If painted green it should not rust.\n> > \n> > How's that sound for a general theory of operation?\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> \n> \n\n",
"msg_date": "Wed, 25 Sep 2002 09:23:13 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't see the gain of having a file called pg_xlog vs. using GUC.\n\nWell, the point is to have a safety interlock --- but I like Jan's\nidea of using matching identification files in both directories.\nWith that, a GUC variable seems just fine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 12:42:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile? "
},
{
"msg_contents": "On Wed, 25 Sep 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't see the gain of having a file called pg_xlog vs. using GUC.\n> \n> Well, the point is to have a safety interlock --- but I like Jan's\n> idea of using matching identification files in both directories.\n> With that, a GUC variable seems just fine.\n\nAgreed, the interlock is a great idea. I hadn't seen that one go by.\n\n",
"msg_date": "Wed, 25 Sep 2002 10:55:52 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile? "
},
{
"msg_contents": "Hi,\n\nI have come across a problem (bug?) with PL/pgSQL GET DIAGNOSTICS.\n\nIn a PL/pgSQL function I want to insert into a table and get the OID back.\nThat usually works with\nGET DIAGNOSTICS last_oid = RESULT_OID;\nright after the insert statement.\n\nBut if the table that I insert to has a rule (or perhaps a trigger?) that\nupdates another table, the RESULT_OID after the insert will be 0 (zero).\n\nCan this be fixed (I have no such problem with JDBC and getLastOID())?\n\nTestcase:\n\nCREATE TABLE pltest (\n id BIGINT default cs_nextval('invoice_invoice_id') NOT NULL,\n t TEXT,\n primary key (id)\n);\n\nCREATE TABLE plcounter (\n counter INTEGER NOT NULL\n);\n\nCREATE FUNCTION pltestfunc(integer) RETURNS BOOLEAN AS'\nDECLARE\n lastOID OID;\nBEGIN\n FOR i IN 1..$1 LOOP\n INSERT INTO pltest (t) VALUES (\\'test\\');\n GET DIAGNOSTICS lastOID = RESULT_OID;\n RAISE NOTICE \\'RESULT_OID: %\\', lastOID;\n IF lastOID <= 0 THEN\n RAISE EXCEPTION \\'RESULT_OID is zero\\';\n END IF;\n END LOOP;\n RETURN true;\nEND;\n' LANGUAGE 'plpgsql';\n\n-- comment out the rule and the test will work\nCREATE RULE pltest_insert AS\n ON INSERT TO pltest DO\n UPDATE plcounter SET counter=counter+1;\n\nINSERT INTO plcounter VALUES (0);\nSELECT pltestfunc(10);\nSELECT * FROM pltest;\n\nDROP FUNCTION pltestfunc(integer);\nDROP TABLE pltest;\n\n\nRegards,\nMichael\n\n",
"msg_date": "Wed, 25 Sep 2002 20:08:57 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "Curt Sampson wrote:\n> \n> On Tue, 24 Sep 2002, Jan Wieck wrote:\n> \n> > And AFAICS it is scary only because screwing that up will simply corrupt\n> > your database. Thus, a simple random number (okay, and a timestamp of\n> > initdb) in two files, one in $PGDATA and one in $PGXLOG would be a\n> > totally sufficient safety mechanism to prevent starting with the wrong\n> > XLOG directory.\n> \n> But still, why set up a situation where your database might not\n> start? Why not set it up so that if you get just *one* environment\n> or command-line variable right, you can't set another inconsistently\n> and screw up your start anyway? Why store configuration information\n> outside of the database data directory in a form that's not easily\n> backed up, and not easily found by other utilities?\n\nWith the number of screws our product has, there are so many\npossible combinations that don't work, why worry about one more\nor less?\n\nSeriously, if you move around files, make symlinks or adjust\nconfig variable to reflect that, there's allways the possibility\nthat you fatfinger it and cannot startup. The point is not to\nmake it pellethead-safe so that the damned thing will start\nallways, but to make it pellethead-safe so that an attempt to\nstart with wrong settings doesn't blow away the whole server.\n\n> \n> It's almost like people *don't* want to put this in the config file\n> or something....\n\nI want to have it it the config file. Just that that doesn't\nprevent anything. And if we have a \"signature\" file in the xlog\nand data directories, you can make it dummy-safe as you like ...\nif the config option is set wrong, first search for it on all\ndrives before bailing out and if found, postmaster corrects the\nconfig setting. That way the admin can play hide and seek with\nour database ... 
;-)\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n",
"msg_date": "Wed, 25 Sep 2002 15:07:12 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
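Jan's interlock idea (a shared random token plus initdb timestamp written into both directories) can be sketched like this. The file name pg_signature and the token format are assumptions for illustration; a real implementation would live in C inside initdb and the postmaster:

```shell
#!/bin/sh
# Sketch of the proposed safety interlock, demonstrated on throwaway
# directories: "initdb" stamps both the data and xlog directories with the
# same signature, and "startup" refuses to proceed if the copies disagree.
set -e
PGDATA=$(mktemp -d)
PGXLOG=$(mktemp -d)

# "initdb": one signature (timestamp plus something random-ish), two copies.
sig="$(date +%s).$$"
printf '%s\n' "$sig" > "$PGDATA/pg_signature"
printf '%s\n' "$sig" > "$PGXLOG/pg_signature"

# "postmaster startup": compare signatures before touching any WAL.
check_xlog_dir() {
    if cmp -s "$PGDATA/pg_signature" "$1/pg_signature"; then
        echo "xlog directory $1 belongs to this cluster"
    else
        echo "FATAL: $1 does not belong to this cluster" >&2
        return 1
    fi
}

check_xlog_dir "$PGXLOG"                 # matches: startup may proceed

# Pointing at some other directory is caught instead of corrupting data.
OTHER=$(mktemp -d)
printf 'someone-elses-cluster\n' > "$OTHER/pg_signature"
check_xlog_dir "$OTHER" 2>/dev/null || echo "wrong xlog directory rejected"
```

With a check like this in place, a fat-fingered config value means a refused startup instead of a destroyed cluster, which is the whole point of the interlock.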
{
"msg_contents": "\"scott.marlowe\" wrote:\n\n> Having a FILE called pg_xlog isn't the fix here, it's the result of the\n> fix, which is to take all the steps of moving the pg_xlog directory and\n> put them into one script file the user doesn't need to understand to do it\n> right. I.e. idiot proof the system as much as possible.\n\nAnd your script/program cannot modify postgresql.conf instead of\ncreating a new file?\n\nPlease remember: \"A fool with a tool is still a fool\". You can\nprovide as many programs and scripts as you want. There have\nalways been these idiots who did stuff like truncating pg_log\n...\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Wed, 25 Sep 2002 15:17:42 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Jan Wieck wrote:\n> > It's almost like people *don't* want to put this in the config file\n> > or something....\n> \n> I want to have it it the config file. Just that that doesn't\n> prevent anything. And if we have a \"signature\" file in the xlog\n> and data directories, you can make it dummy-safe as you like ...\n> if the config option is set wrong, first search for it on all\n> drives before bailing out and if found, postmaster corrects the\n> config setting. That way the admin can play hide and seek with\n> our database ... ;-)\n\nLet's get it into GUC and see what problems people have. We may find\nout no one has difficulty.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 25 Sep 2002 15:18:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Wed, 25 Sep 2002, Jan Wieck wrote:\n\n> \"scott.marlowe\" wrote:\n> \n> > Having a FILE called pg_xlog isn't the fix here, it's the result of the\n> > fix, which is to take all the steps of moving the pg_xlog directory and\n> > put them into one script file the user doesn't need to understand to do it\n> > right. I.e. idiot proof the system as much as possible.\n> \n> And your script/program cannot modify postgresql.conf instead of\n> creating a new file?\n\nThat's a minor point. It could be anywhere. It's just that much like a \nsymlink is visible from the shell with a simple ls -l, so too is pg_xlog \nbeing a file an obvious sign that pg_xlog doesn't live here anymore.\n\n> Please remember: \"A fool with a tool is still a fool\". You can\n> provide programs and scripts as many as you want. There have\n> allways been these idiots who did stuff like truncating pg_log\n\nSo, should we take out seatbelts from cars, safeties from guns, and have \neveryone run about with sharp sticks too? :-) I know that the second we \nmake something more idiot proof, someone will make a better idiot, but \nthat doesn't mean we shouldn't make things more idiot proof, we should \njust try to anticipate the majority of idiots (and let's face it, we can \nall be idiots at the right moments sometimes.)\n\nBut, I have a few more questions about the signature file solution. Is \nthe signature file going to be updated by date or something every time the \ndatabase is started up and shut down? If not, then it's quite possible \nthat someone could copy the pg_xlog dir somewhere, run it for a while, \nand then change it back to the base pg_xlog. Will the database know that \nthose xlogs are stale and not start up, or will it start up and corrupt \nthe database with the old xlogs? As long as there's a time stamp in both \nplaces it should work fine.\n\n",
"msg_date": "Wed, 25 Sep 2002 13:35:38 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "scott.marlowe wrote:\n> On Wed, 25 Sep 2002, Jan Wieck wrote:\n> So, should we take out seatbelts from cars, safeties from guns, and have \n> everyone run about with sharp sticks too? :-) I know that the second we \n> make something more idiot proof, someone will make a better idiot, but \n> that doesn't mean we shouldn't make things more idiot proof, we should \n> just try to anticipate the majority of idiots (and let's face it, we can \n> all be idiots at the right moments sometimes.)\n\nCan we wait for someone to be injured in a car accident before putting\nin heavy seat belts?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 25 Sep 2002 15:44:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "\"scott.marlowe\" wrote:\n> [...]\n> But, I have a few more questions about the signature file solution. Is\n> the signature file going to be updated by date or something everytime the\n> database is started up and shut down? If not, then it's quite possible\n> that someone could copy the pg_xlog dir somewhere, run it for a while,\n> then they change it back to the base pg_xlog will the database know that\n> those xlogs are stale and not start up, or will it start up and corrupt\n> the database with the old xlogs? As long as there's a time stamp in both\n> places it should work fine.\n\nGood question. Actually, I think it'd be a perfect place and use\nfor a copy of the controlfile.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n",
"msg_date": "Wed, 25 Sep 2002 15:57:03 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
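The staleness scenario scott raises is what Jan's control-file suggestion addresses: a copy of pg_control kept in the xlog directory goes stale the moment the cluster checkpoints against a different xlog location. A rough sketch under those assumptions (file names illustrative, toy directories only, nothing here is real PostgreSQL code):

```shell
#!/bin/sh
# Sketch: keep a copy of the control file inside the xlog directory and
# compare at startup, so an xlog directory that once belonged to this
# cluster but has since gone stale is rejected too.  Toy directories only.
set -e
PGDATA=$(mktemp -d)
PGXLOG=$(mktemp -d)
mkdir -p "$PGDATA/global"

# Stand-in for pg_control; pretend its contents change at every checkpoint.
echo "checkpoint-1" > "$PGDATA/global/pg_control"
cp "$PGDATA/global/pg_control" "$PGXLOG/pg_control.copy"

startup_check() {
    if cmp -s "$PGDATA/global/pg_control" "$PGXLOG/pg_control.copy"; then
        echo "xlog directory is current"
    else
        echo "FATAL: xlog directory is stale" >&2
        return 1
    fi
}

startup_check                              # fresh copy: ok

# Simulate running against another xlog dir for a while: pg_control moves
# on, but the copy saved in the old $PGXLOG does not.
echo "checkpoint-2" > "$PGDATA/global/pg_control"
startup_check 2>/dev/null || echo "stale xlog directory rejected"
```

Unlike a fixed signature written once at initdb, a control-file copy changes over the cluster's life, which is exactly why it catches the copy-away-and-back case.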
{
"msg_contents": "Bruce Momjian wrote:\n> \n> scott.marlowe wrote:\n> > On Wed, 25 Sep 2002, Jan Wieck wrote:\n> > So, should we take out seatbelts from cars, safeties from guns, and have\n> > everyone run about with sharp sticks too? :-) I know that the second we\n> > make something more idiot proof, someone will make a better idiot, but\n> > that doesn't mean we shouldn't make things more idiot proof, we should\n> > just try to anticipate the majority of idiots (and let's face it, we can\n> > all be idiots at the right moments sometimes.)\n\nSure, been there, done that ...\n\n> \n> Can we wait for someone to be injured in a car accident before putting\n> in heavy seat belts?\n\nAbout the car seatbelts I have a theory. If we would not have\nseatbelts, and instead of Airbags sharp sticks instantly killing\nthe driver in the case of an accident, most of these wannabe\nRacing-Champs on our streets would either drive more reasonable\nor get removed by natural selection. Maybe the overall number of\naccidents would drop below the actual number of deaths in traffic\n(remember, we only kill the drivers on purpose, not anyone else\nin the car) ... and for sure the far lower number of *only*\ncrippled or disabled victims will take a big burden off of the\nhealthcare and wellfare system ... \n\nOkay, okay, enough proof of the first statement ... back to\nbusiness.\n\n\nJan B-)\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n",
"msg_date": "Wed, 25 Sep 2002 16:10:45 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "OT: Seatbelts (was: Re: PGXLOG variable worthwhile?)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can we wait for someone to be injured in a car accident before putting\n> in heavy seat belts?\n\nNot the analogy you wanted to make ... if you knew there was a serious\nrisk, that's called negligence in most American courts. Ask Ford about\nthe Pinto ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 16:14:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile? "
},
{
"msg_contents": "\"Michael Paesold\" <mpaesold@gmx.at> writes:\n> I have come across a problem (bug?) with PL/pgSQL GET DIAGNOSTICS.\n\nHm. This seems to be SPI's version of the same definitional issue\nwe're contending with for status data returned from an interactive\nquery: SPI is currently set up to return the status of the last\nquerytree it executes, which is probably the wrong thing to do in the\npresence of rule rewrites. But I'm hesitant to change SPI until we know\nwhat we're going to do for interactive query status.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 17:13:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> \"Michael Paesold\" <mpaesold@gmx.at> writes:\n> > I have come across a problem (bug?) with PL/pgSQL GET DIAGNOSTICS.\n>\n> Hm. This seems to be SPI's version of the same definitional issue\n> we're contending with for status data returned from an interactive\n> query: SPI is currently set up to return the status of the last\n> querytree it executes, which is probably the wrong thing to do in the\n> presence of rule rewrites. But I'm hesitant to change SPI until we know\n> what we're going to do for interactive query status.\n>\n> regards, tom lane\n\nSo this is not going to be fixed for 7.3 I suggest, no? Can you add the\nissue to the TODO list or can this thread be added to any appropriate TODO\nitem?\n\nRegards,\nMichael Paesold\n\n",
"msg_date": "Wed, 25 Sep 2002 23:23:40 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS? "
},
{
"msg_contents": "Michael Paesold wrote:\n> Tom Lane wrote:\n> \n> > \"Michael Paesold\" <mpaesold@gmx.at> writes:\n> > > I have come across a problem (bug?) with PL/pgSQL GET DIAGNOSTICS.\n> >\n> > Hm. This seems to be SPI's version of the same definitional issue\n> > we're contending with for status data returned from an interactive\n> > query: SPI is currently set up to return the status of the last\n> > querytree it executes, which is probably the wrong thing to do in the\n> > presence of rule rewrites. But I'm hesitant to change SPI until we know\n> > what we're going to do for interactive query status.\n> >\n> > regards, tom lane\n> \n> So this is not going to be fixed for 7.3 I suggest, no? Can you add the\n> issue to the TODO list or can this thread be added to any appropriate TODO\n> item?\n\nI already have a TODO item:\n\n\t* Return proper effected tuple count from complex commands [return]\n\nI am unsure if it will be fixed in 7.3 or not. It is still on the open\nitems list, and I think we have a general plan to fix it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 25 Sep 2002 18:21:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I already have a TODO item:\n> \t* Return proper effected tuple count from complex commands [return]\n> I am unsure if it will be fixed in 7.3 or not. It is still on the open\n> items list, and I think we have a general plan to fix it.\n\nI got distracted and wasn't following the thread a few days ago about\nthe topic. Did people come to a consensus about how it should work?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 18:27:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS? "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I already have a TODO item:\n> > \t* Return proper effected tuple count from complex commands [return]\n> > I am unsure if it will be fixed in 7.3 or not. It is still on the open\n> > items list, and I think we have a general plan to fix it.\n> \n> I got distracted and wasn't following the thread a few days ago about\n> the topic. Did people come to a consensus about how it should work?\n\nWell, sort of. It was similar to your original proposal. See the TODO\nlink for details. I am heading out for 2 hours and will summarize when\nI return.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 25 Sep 2002 18:29:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "On Wed, 25 Sep 2002, Jan Wieck wrote:\n\n> With the number of screws our product has, there are so many\n> possible combinations that don't work, why worry about one more\n> or less?\n\nThat's just silly, so I won't even bother replying.\n\n> Seriously, if you move around files, make symlinks or adjust\n> config variable to reflect that, there's allways the possibility\n> that you fatfinger it and cannot startup.\n\nTrue. But once your symlink is in place, it is stored on disk in the\npostgres data directory. An environment variable is a transient setting\nin memory, which means that you have to have a program set it, and you\nhave to make sure that program gets run before any startup, be it an\nautomated startup from /etc/rc on boot or a manual startup.\n\n> I want to have it it the config file.\n\nWell, then we're agreed.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 26 Sep 2002 10:04:09 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I already have a TODO item:\n> > \t* Return proper effected tuple count from complex commands [return]\n> > I am unsure if it will be fixed in 7.3 or not. It is still on the open\n> > items list, and I think we have a general plan to fix it.\n> \n> I got distracted and wasn't following the thread a few days ago about\n> the topic. Did people come to a consensus about how it should work?\n\nOK, I am back. I think the most promising proposal was from you, Tom:\n\n\thttp://candle.pha.pa.us/mhonarc/todo.detail/return/msg00012.html\n\nIt basically breaks down the three results (tag, oid, tuple count), and\nthe INSTEAD/non-INSTEAD behavior.\n\nI actually got a big chuckle from this paragraph:\n\n\tCome on, guys, work with me a little here. I've thrown out several\n\talternative suggestions already, and all I've gotten from either of\n\tyou is refusal to think about the problem.\n\nI liked the \"work with me\" phrase.\n\nTo summarize, with non-INSTEAD, we get the tag, oid, and tuple count of\nthe original query. Everyone agrees on that.\n\nFor non-INSTEAD, we have:\n\n\t1) return original tag\n\t2) return oid if all inserts in the rule insert only one row\n\t3) return tuple count of all commands with the same tag\n\nFor item 2, it is possible to have multiple INSERTS in the rule and\nreturn an oid if the sum of the inserts is only one row.\n\nItem 3 is the most controversial. Some say sum all tuple counts, i.e.\nsum INSERT/UPDATE/DELETE. That just seems too messy to me. I think\nsumming only the matching tags has the highest probability of returning\na meaningful number.\n\nAlso, items 2 and 3 work well together with INSERT because a tuple count\nof 1 returns an oid, while > 1 does not, which is consistent with a\nnon-rule insert.\n\n(FYI, I am still working on SSL.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 25 Sep 2002 21:40:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I am back. I think the most promising proposal was from you, Tom:\n> \thttp://candle.pha.pa.us/mhonarc/todo.detail/return/msg00012.html\n\nBut that wasn't a specific proposal --- it was more or less an\nenumeration of the possibilities. What are we picking?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Sep 2002 23:58:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS? "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I am back. I think the most promising proposal was from you, Tom:\n> > \thttp://candle.pha.pa.us/mhonarc/todo.detail/return/msg00012.html\n> \n> But that wasn't a specific proposal --- it was more or less an\n> enumeration of the possibilities. What are we picking?\n\nThe rest of my message explains your proposal while clarifying certain\noptions you gave in the email.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 26 Sep 2002 00:02:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "On Wed, 25 Sep 2002 21:40:03 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>Item 3 is the most controversial. Some say sum all tuple counts, i.e.\n>sum INSERT/UPDATE/DELETE. That just seems to messy to me. I think\n>summing only the matching tags has the highest probability of returning\n>a meaningful number.\n\n[Trying to keep it short this time]\n\nI still believe that there is more than one correct answer; it just\ndepends on what the dba intends. So I proposed a syntax change for\nletting the dba explicitly mark the statements she/he wants to affect\ntuple count and oid.\n\n-> http://archives.postgresql.org/pgsql-hackers/2002-09/msg00720.php\n\nUnfortunately I tried to summarize all other proposals and the mail\ngot so long that nobody read it to the end :-(\n\nServus\n Manfred\n",
"msg_date": "Thu, 26 Sep 2002 10:14:23 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "Curt Sampson wrote:\n> \n> On Wed, 25 Sep 2002, Jan Wieck wrote:\n> \n> > With the number of screws our product has, there are so many\n> > possible combinations that don't work, why worry about one more\n> > or less?\n> \n> That's just silly, so I won't even bother replying.\n\nCurt,\n\nit might sound silly on first sight and isolated. But it was in reply\nto:\n\n>>> But still, why set up a situation where your database might not\n>>> start? Why not set it up so that if you get just *one* environment\n>>> or command-line variable right, you can't set another inconsistently\n>>> and screw up your start anyway? Why store configuration information\n>>> outside of the database data directory in a form that's not easily\n>>> backed up, and not easily found by other utilities?\n\nApply that argumentation to all of our commandline switches and config\noptions and we end up with something that behaves like Microsoft\nproducts ... they know everything better, you cannot tune them, they\nwork ... and you needed a bigger machine anyway.\n\nI am absolutely not in favour of the PGXLOG environment variable. But if\nsomeone else wants it, it doesn't bother me because I wouldn't use it\nand it cannot hurt me.\n\nI am simply against this \"I think it's wrong so you have to change your\nbehaviour\" attitude. \n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Thu, 26 Sep 2002 09:55:47 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Thu, 26 Sep 2002, Jan Wieck wrote:\n\n> >>> But still, why set up a situation where your database might not\n> >>> start? Why not set it up so that if you get just *one* environment\n> >>> or command-line variable right, you can't set another inconsistently\n> >>> and screw up your start anyway? Why store configuration information\n> >>> outside of the database data directory in a form that's not easily\n> >>> backed up, and not easily found by other utilities?\n>\n> Apply that argumentation to all of our commandline switches and config\n> options and we end up with something that behaves like Microsoft\n> products ... they know everything better, you cannot tune them, they\n> work ... and you needed a bigger machine anyway.\n\nTalk about a straw man! I have repeatedly said:\n\n I WANT THE FEATURE THAT LETS YOU TUNE THE LOCATION OF THE LOG FILE!\n\nRead it again, and again, until you understand that we both want\nthat feature.\n\nThen realize, I just want it implemented in a way that makes it\nless likely that people will find themselves in a situation where\nthe server doesn't start.\n\n> I am absolutely not in favour of the PGXLOG environment variable. But if\n> someone else wants it, it doesn't bother me because I wouldn't use it\n> and it cannot hurt me.\n\nResponsible programmers, when confronted with a more accident-prone\nand less accident-prone way of doing something, choose the less\naccident-prone way of doing things. That way people who are naive,\nor tired, or just having a bad day are less likely to come to harm.\n\nUsing the config file is not only safer, it's actually more\nconvenient. And since we're going to have the config file option\nanyway, removing the environment variable option means that others\nhave less documentation to read, and will spend less time wondering\nwhy there's two different ways to do the same thing. And naive\npeople won't choose the wrong way because they don't know any better.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 26 Sep 2002 23:20:52 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Manfred Koizar wrote:\n> On Wed, 25 Sep 2002 21:40:03 -0400 (EDT), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >Item 3 is the most controversial. Some say sum all tuple counts, i.e.\n> >sum INSERT/UPDATE/DELETE. That just seems to messy to me. I think\n> >summing only the matching tags has the highest probability of returning\n> >a meaningful number.\n> \n> [Trying to keep it short this time]\n> \n> I still believe that there is more than one correct answer; it just\n> depends on what the dba intends. So I proposed a syntax change for\n> letting the dba explicitly mark the statements she/he wants to affect\n> tuple count and oid.\n> \n> -> http://archives.postgresql.org/pgsql-hackers/2002-09/msg00720.php\n> \n> Unfortunately I tried to summarize all other proposals and the mail\n> got so long that nobody read it to the end :-(\n\nThat is an interesting idea; some syntax in the rule that marks the\nitems. The one downside to that is the fact that the rule writer has to\nmake adjustments. Perhaps we could implement the behavior I described\nand add such tagging later.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 26 Sep 2002 12:22:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> To summarize, with non-INSTEAD, we get the tag, oid, and tuple count of\n> the original query. Everyone agrees on that.\n>\n> For non-INSTEAD, we have:\n\n[I think this is the INSTEAD part.]\n\n> \t1) return original tag\n> \t2) return oid if all inserts in the rule insert only one row\n> \t3) return tuple count of all commands with the same tag\n\nI think proper encapsulation would require us to simulate the original\ncommand, hiding the fact that something else happened internally. I know\nit's really hard to determine the \"virtual\" count of an update or delete\nif the command had acted on a permanent base table, but I'd rather\nmaintain the encapsulation of updateable views and return \"unknown\" in\nthat case.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 27 Sep 2002 00:26:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > To summarize, with non-INSTEAD, we get the tag, oid, and tuple count of\n> > the original query. Everyone agrees on that.\n> >\n> > For non-INSTEAD, we have:\n> \n> [I think this is the INSTEAD part.]\n\nSorry, yes.\n\n> > \t1) return original tag\n> > \t2) return oid if all inserts in the rule insert only one row\n> > \t3) return tuple count of all commands with the same tag\n> \n> I think proper encapsulation would require us to simulate the original\n> command, hiding the fact that something else happened internally. I know\n> it's really hard to determine the \"virtual\" count of an update or delete\n> if the command had acted on a permament base table, but I'd rather\n> maintain the encapsulation of updateable views and return \"unknown\" in\n> that case.\n\nWell, let's look at the common case. For proper view rules, these would\nall return the right values because the UPDATE in the rule would be\nreturned. Is that what you mean?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 26 Sep 2002 18:30:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Well, let's look at the common case. For proper view rules, these would\n> all return the right values because the UPDATE in the rule would be\n> returned. Is that what you mean?\n\nI guess that really depends on whether the rules are written to properly\nconstrain the writes to the view to the set of rows visible by the view.\nFor example, if a view v1 selects from a single table t1 constrained by a\nsearch condition, and I do UPDATE v1 SET ...; without a condition, does\nthat affect all rows in t1? If not, then both our proposals are\nequivalent, if yes, then it's the user's fault, I suppose.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 28 Sep 2002 13:08:54 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Well, let's look at the common case. For proper view rules, these would\n> > all return the right values because the UPDATE in the rule would be\n> > returned. Is that what you mean?\n> \n> I guess that really depends on whether the rules are written to properly\n> constrain the writes to the view to the set of rows visible by the view.\n> For example, if a view v1 selects from a single table t1 constrained by a\n> search condition, and I do UPDATE v1 SET ...; without a condition, does\n> that affect all rows in t1? If not, then both our proposals are\n> equivalent, if yes, then the it's the user's fault, I suppose.\n\nWell, since we found that we can't get a perfect solution, I started to\nthink of the common cases. First, there is the \"log changes\" type of\nrule, but that isn't INSTEAD, so it doesn't even apply here. We already\nknow we want to return the result of the main query.\n\t\n\tCREATE RULE service_request_update AS -- UPDATE rule\n\tON UPDATE TO service_request \n\tDO \n\t INSERT INTO service_request_log (customer_id, description, mod_type)\n\t VALUES (old.customer_id, old.description, 'U');\n\t\n\tCREATE RULE service_request_delete AS -- DELETE rule\n\tON DELETE TO service_request \n\tDO\n\t INSERT INTO service_request_log (customer_id, description, mod_type)\n\t VALUES (old.customer_id, old.description, 'D');\n\nSecond, there is the updatable view rule, that is INSTEAD, and relies on\nthe primary key of the table:\n\t\n\tCREATE RULE view_realtable_insert AS -- INSERT rule\n\tON INSERT TO view_realtable \n\tDO INSTEAD \n\t INSERT INTO realtable \n\t VALUES (new.col);\n\t\n\tCREATE RULE view_realtable_update AS -- UPDATE rule\n\tON UPDATE TO view_realtable \n\tDO INSTEAD \n\t UPDATE realtable \n\t SET col = new.col \n\t WHERE col = old.col;\n\t\n\tCREATE RULE view_realtable_delete AS -- DELETE rule\n\tON DELETE TO view_realtable \n\tDO INSTEAD \n\t DELETE FROM realtable \n\t WHERE col = old.col;\n\nIt is my understanding that the proposed rule result improvements will\nreturn the proper values in these cases. That is why I like the current\nproposal. It also makes any extra non-tag matching queries in the rule\nnot affect the result, which seems best.\n\nDoes anyone else have a common rule that would return incorrect results\nusing the proposed rules?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 28 Sep 2002 13:41:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "On Sat, 28 Sep 2002 13:41:04 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>Does anyone else have a common rule that would return incorrect results\n>using the proposed rules?\n\n\tCREATE VIEW twotables AS\n\tSELECT ... FROM table1 INNER JOIN table2 ON ... ;\n\n\tCREATE RULE twotables_insert AS -- INSERT rule\n\tON INSERT TO twotables \n\tDO INSTEAD (\n\t INSERT INTO table1 VALUES (new.pk, new.col1);\n\t INSERT INTO table2 VALUES (new.pk, new.col2)\n\t); \n\t\n\tCREATE RULE twotables_update AS -- UPDATE rule\n\tON UPDATE TO twotables \n\tDO INSTEAD (\n\t UPDATE table1 SET col1 = new.col1 WHERE pk = old.pk;\n\t UPDATE table2 SET col2 = new.col2 WHERE pk = old.pk\n\t); \n\t\n\tCREATE RULE twotables_delete AS -- DELETE rule\n\tON DELETE TO twotables \n\tDO INSTEAD (\n\t DELETE FROM table1 WHERE pk = old.pk;\n\t DELETE FROM table2 WHERE pk = old.pk\n\t);\n\n\tCREATE VIEW visible AS\n\tSELECT ... FROM table3\n\tWHERE deleted = 0;\n\n\tCREATE RULE visible_delete AS -- DELETE rule\n\tON DELETE TO visible \n\tDO INSTEAD \n\t UPDATE table3\n\t SET deleted = 1\n\t WHERE pk = old.pk;\n\nServus\n Manfred\n",
"msg_date": "Sat, 28 Sep 2002 22:38:43 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "\nOK, that is a good example. It would return the sum of the matching\ntags. You are suggesting here that it would be better to take the\nresult of the last matching tag command, right?\n\n---------------------------------------------------------------------------\n\nManfred Koizar wrote:\n> On Sat, 28 Sep 2002 13:41:04 -0400 (EDT), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >Does anyone else have a common rule that would return incorrect results\n> >using the proposed rules?\n> \n> \tCREATE VIEW twotables AS\n> \tSELECT ... FROM table1 INNER JOIN table2 ON ... ;\n> \n> \tCREATE RULE twotables_insert AS -- INSERT rule\n> \tON INSERT TO twotables \n> \tDO INSTEAD (\n> \t INSERT INTO table1 VALUES (new.pk, new.col1);\n> \t INSERT INTO table2 VALUES (new.pk, new.col2)\n> \t); \n> \t\n> \tCREATE RULE twotables_update AS -- UPDATE rule\n> \tON UPDATE TO twotables \n> \tDO INSTEAD (\n> \t UPDATE table1 SET col1 = new.col1 WHERE pk = old.pk;\n> \t UPDATE table2 SET col2 = new.col2 WHERE pk = old.pk\n> \t); \n> \t\n> \tCREATE RULE twotables_delete AS -- DELETE rule\n> \tON DELETE TO twotables \n> \tDO INSTEAD (\n> \t DELETE FROM table1 WHERE pk = old.pk;\n> \t DELETE FROM table2 WHERE pk = old.pk\n> \t);\n> \n> \tCREATE VIEW visible AS\n> \tSELECT ... FROM table3\n> \tWHERE deleted = 0;\n> \n> \tCREATE RULE visible_delete AS -- DELETE rule\n> \tON DELETE TO visible \n> \tDO INSTEAD \n> \t UPDATE table3\n> \t SET deleted = 1\n> \t WHERE pk = old.pk;\n> \n> Servus\n> Manfred\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 28 Sep 2002 19:20:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "On Sat, 28 Sep 2002 19:20:43 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>OK, that is a good example. It would return the sum of the matching\n>tags. You are suggesting here that it would be better to take the\n>result of the last matching tag command, right?\n\nThe examples were meant to support my previous suggestion of\nexplicitly marking the statement you want to be counted, something\nlike:\n\n\tCREATE VIEW twotables AS\n\tSELECT ... FROM table1 INNER JOIN table2 ON ... ;\n\n\tCREATE RULE twotables_insert AS -- INSERT rule\n\tON INSERT TO twotables \n\tDO INSTEAD (\n\t COUNT INSERT INTO table1 VALUES (new.pk, new.col1);\n\t INSERT INTO table2 VALUES (new.pk, new.col2)\n\t); \n\t\n\tCREATE RULE twotables_update AS -- UPDATE rule\n\tON UPDATE TO twotables \n\tDO INSTEAD (\n\t COUNT UPDATE table1 SET col1 = new.col1 WHERE pk = old.pk;\n\t UPDATE table2 SET col2 = new.col2 WHERE pk = old.pk\n\t); \n\t\n\tCREATE RULE twotables_delete AS -- DELETE rule\n\tON DELETE TO twotables \n\tDO INSTEAD (\n\t COUNT DELETE FROM table1 WHERE pk = old.pk;\n\t DELETE FROM table2 WHERE pk = old.pk\n\t);\n\n\tCREATE VIEW visible AS\n\tSELECT ... FROM table3\n\tWHERE deleted = 0;\n\n\tCREATE RULE visible_delete AS -- DELETE rule\n\tON DELETE TO visible \n\tDO INSTEAD \n\t COUNT UPDATE table3\n\t SET deleted = 1\n\t WHERE pk = old.pk;\n\nOne argument against automatically \"don't count non-INSTEAD rules and\ncount the last statement in INSTEAD rules\": sql-createrule.html says:\n| for view updates: there must be an unconditional INSTEAD rule [...]\n| If you want to handle all the useful cases in conditional rules, you\n| can; just add an unconditional DO INSTEAD NOTHING rule [...]\n| Then make the conditional rules non-INSTEAD\n\n\tCREATE RULE v_update AS -- UPDATE rule\n\tON UPDATE TO v \n\tDO INSTEAD NOTHING;\n\n\tCREATE RULE v_update2 AS -- UPDATE rule\n\tON UPDATE TO v WHERE <condition1>\n\tDO (\n\t COUNT ...\n\t); \n\n\tCREATE RULE v_update3 AS -- UPDATE rule\n\tON UPDATE TO v WHERE <condition2>\n\tDO (\n\t COUNT ...\n\t); \n\nServus\n Manfred\n",
"msg_date": "Sun, 29 Sep 2002 21:26:37 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS?"
},
{
"msg_contents": "\nWe have talked about possible return values for RULES, particularly\nINSTEAD rule. Manfred has a nice example here, so I propose we handle\nINSTEAD rules this way: that we return the oid and tuple count of the\nlast INSTEAD rule query with a tag matching the main query. The\nreturned tag, of course, would be the tag of the main query. This works\nfor Manfred's case, and it works for my case when there is only one\naction in the INSTEAD rule. If there is more than one matching tag in\nthe INSTEAD rule, the user has the option to place the query he wants\nfor the return at the end of the rule. This does give the user some\ncontrol over what is returned.\n\nComments?\n\nI think non-INSTEAD rules already return the tag, oid, and tuple count of\nthe main query, right?\n\n---------------------------------------------------------------------------\n\nManfred Koizar wrote:\n> On Sat, 28 Sep 2002 19:20:43 -0400 (EDT), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >OK, that is a good example. It would return the sum of the matching\n> >tags. You are suggesting here that it would be better to take the\n> >result of the last matching tag command, right?\n> \n> The examples were meant to support my previous suggestion of\n> explicitly marking the statement you want to be counted, something\n> like:\n> \n> \tCREATE VIEW twotables AS\n> \tSELECT ... FROM table1 INNER JOIN table2 ON ... ;\n> \n> \tCREATE RULE twotables_insert AS -- INSERT rule\n> \tON INSERT TO twotables \n> \tDO INSTEAD (\n> \t COUNT INSERT INTO table1 VALUES (new.pk, new.col1);\n> \t INSERT INTO table2 VALUES (new.pk, new.col2)\n> \t); \n> \t\n> \tCREATE RULE twotables_update AS -- UPDATE rule\n> \tON UPDATE TO twotables \n> \tDO INSTEAD (\n> \t COUNT UPDATE table1 SET col1 = new.col1 WHERE pk = old.pk;\n> \t UPDATE table2 SET col2 = new.col2 WHERE pk = old.pk\n> \t); \n> \t\n> \tCREATE RULE twotables_delete AS -- DELETE rule\n> \tON DELETE TO twotables \n> \tDO INSTEAD (\n> \t COUNT DELETE FROM table1 WHERE pk = old.pk;\n> \t DELETE FROM table2 WHERE pk = old.pk\n> \t);\n> \n> \tCREATE VIEW visible AS\n> \tSELECT ... FROM table3\n> \tWHERE deleted = 0;\n> \n> \tCREATE RULE visible_delete AS -- DELETE rule\n> \tON DELETE TO visible \n> \tDO INSTEAD \n> \t COUNT UPDATE table3\n> \t SET deleted = 1\n> \t WHERE pk = old.pk;\n> \n> One argument against automatically \"don't count non-INSTEAD rules and\n> count the last statement in INSTEAD rules\": sql-createrule.html says:\n> | for view updates: there must be an unconditional INSTEAD rule [...]\n> | If you want to handle all the useful cases in conditional rules, you\n> | can; just add an unconditional DO INSTEAD NOTHING rule [...]\n> | Then make the conditional rules non-INSTEAD\n> \n> \tCREATE RULE v_update AS -- UPDATE rule\n> \tON UPDATE TO v \n> \tDO INSTEAD NOTHING;\n> \n> \tCREATE RULE v_update2 AS -- UPDATE rule\n> \tON UPDATE TO v WHERE <condition1>\n> \tDO (\n> \t COUNT ...\n> \t); \n> \n> \tCREATE RULE v_update3 AS -- UPDATE rule\n> \tON UPDATE TO v WHERE <condition2>\n> \tDO (\n> \t COUNT ...\n> \t); \n> \n> Servus\n> Manfred\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 3 Oct 2002 22:21:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Return of INSTEAD rules"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We have talked about possible return values for RULES, particularly\n> INSTEAD rule. Manfred has a nice example here, so I propose we handle\n> INSTEAD rules this way: that we return the oid and tuple count of the\n> last INSTEAD rule query with a tag matching the main query.\n\nHmm ... that's subtly different from what I'd seen discussed before.\nI thought the idea was\n\n\t1. If no INSTEAD rule: return tag, count, and OID of original\n\t query, regardless of what is added by non-INSTEAD rules.\n\t (I think this part is not controversial.)\n\t2. If any INSTEAD rule: return tag, count, and OID of the last\n\t executed query that has the same tag as the original query.\n\t If no substituted query matches the original query's tag,\n\t return original query's tag with zero count and OID.\n\t (This is where the going gets tough.)\n\nI think you just modified the second part of that to restrict it to\nqueries that were added by INSTEAD rules. This is doable but it's\nnot a trivial change --- in particular, I think it implies adding\nanother field to Query data structure so we can mark INSTEAD-added\nvs non-INSTEAD-added queries. Which means an initdb because it breaks\nstored rules.\n\nOffhand I think this might be worth doing, because I like that subtle\nchange in behavior. But we should understand exactly what we're doing\nhere...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Oct 2002 23:39:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Return of INSTEAD rules "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We have talked about possible return values for RULES, particularly\n> > INSTEAD rule. Manfred has a nice example here, so I propose we handle\n> > INSTEAD rules this way: that we return the oid and tuple count of the\n> > last INSTEAD rule query with a tag matching the main query.\n> \n> Hmm ... that's subtly different from what I'd seen discussed before.\n> I thought the idea was\n> \n> \t1. If no INSTEAD rule: return tag, count, and OID of original\n> \t query, regardless of what is added by non-INSTEAD rules.\n> \t (I think this part is not controversial.)\n> \t2. If any INSTEAD rule: return tag, count, and OID of the last\n> \t executed query that has the same tag as the original query.\n> \t If no substituted query matches the original query's tag,\n> \t return original query's tag with zero count and OID.\n> \t (This is where the going gets tough.)\n> \n> I think you just modified the second part of that to restrict it to\n> queries that were added by INSTEAD rules. This is doable but it's\n> not a trivial change --- in particular, I think it implies adding\n> another field to Query data structure so we can mark INSTEAD-added\n> vs non-INSTEAD-added queries. Which means an initdb because it breaks\n> stored rules.\n\nI am confused how yours differs from mine. I don't see how the last\nmatching tagged query would not be from an INSTEAD rule. Are you\nthinking multiple queries in the query string?\n\n> Offhand I think this might be worth doing, because I like that subtle\n> change in behavior. But we should understand exactly what we're doing\n> here...\n\nSeems we are adding up reasons for initdb. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 4 Oct 2002 00:47:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Return of INSTEAD rules"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am confused how yours differs from mine. I don't see how the last\n> matching tagged query would not be from an INSTEAD rule.\n\nYou could have both INSTEAD and non-INSTEAD rules firing for the same\noriginal query. If the alphabetically-last rule is a non-INSTEAD rule,\nthen there's a difference.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Oct 2002 00:53:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Return of INSTEAD rules "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am confused how yours differs from mine. I don't see how the last\n> > matching tagged query would not be from an INSTEAD rule.\n> \n> You could have both INSTEAD and non-INSTEAD rules firing for the same\n> original query. If the alphabetically-last rule is a non-INSTEAD rule,\n> then there's a difference.\n\nHow do we get multiple rules on a query? I thought it was mostly\nINSERT/UPDATE/DELETE, and those all operate on a single table.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 4 Oct 2002 11:49:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Return of INSTEAD rules"
},
{
"msg_contents": "On Thu, 3 Oct 2002 22:21:27 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>so I propose we handle\n>INSTEAD rules this way: that we return the oid and tuple count of the\n>last INSTEAD rule query with a tag matching the main query. \n\nBruce, this won't work for this example\n\n>> \tCREATE RULE visible_delete AS -- DELETE rule\n>> \tON DELETE TO visible \n>> \tDO INSTEAD \n>> \t COUNT UPDATE table3\n>> \t SET deleted = 1\n>> \t WHERE pk = old.pk;\n\nbecause here we don't have a rule query with a matching tag. The same\napplies to\n\n>> \tCREATE RULE v_update AS -- UPDATE rule\n>> \tON UPDATE TO v \n>> \tDO INSTEAD NOTHING;\n\nI wrote:\n>> One argument against automatically \"don't count non-INSTEAD rules and\n>> count the last statement in INSTEAD rules\"\n\nSeems I introduced a little bit of confusion here by arguing against\nsomething that has never been proposed before. Funny that this\nnon-existent proposal is now seriously discussed :-(\n\nHas the idea of extending the syntax to explicitly mark queries as\nCOUNTed already been rejected? If yes, I cannot help here. If no, I\nkeep telling you that this approach can emulate most of the other\npossible solutions still under discussion.\n\nBruce wrote:\n>If there is more than one matching tag in\n>the INSTEAD rule, the user has the option to place the query he wants\n>for the return at the end of the rule.\n\nAre you sure this is always possible without unwanted side effects?\n\nServus\n Manfred\n",
"msg_date": "Fri, 04 Oct 2002 18:00:53 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Return of INSTEAD rules"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am confused how yours differs from mine. I don't see how the last\n> matching tagged query would not be from an INSTEAD rule.\n>> \n>> You could have both INSTEAD and non-INSTEAD rules firing for the same\n>> original query. If the alphabetically-last rule is a non-INSTEAD rule,\n>> then there's a difference.\n\n> How do we get multiple rules on a query? I thought it was mostly\n> INSERT/UPDATE/DELETE, and those all operate on a single table.\n\nYou can create as many rules as you want. One reasonably likely\nscenario is that you have a view, you make an ON INSERT DO INSTEAD\nrule to support insertions into the view (by inserting into some\nunderlying table(s) instead), and then you add some not-INSTEAD\nrules to perform logging into other tables that aren't part of the\nview but just keep track of activity.\n\nYou'd not want the logging activity to usurp the count result for this\nsetup, I think, even if it happened last. (Indeed, that might be\n*necessary*, if for some reason it needed to access the rows inserted\ninto the view's base table.)\n\nThis approach would give us a general principle that applies in all\ncases: not-INSTEAD rules don't affect the returned command result.\nPerhaps that would answer Manfred's thought that we should be able\nto label which rules affect the result. If you have any INSTEAD rules,\nthen it doesn't matter exactly how many you have, so you can mark them\nINSTEAD or not to suit your fancy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Oct 2002 12:08:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Return of INSTEAD rules "
},
{
"msg_contents": "Tom Lane wrote:\n> You can create as many rules as you want. One reasonably likely\n> scenario is that you have a view, you make an ON INSERT DO INSTEAD\n> rule to support insertions into the view (by inserting into some\n> underlying table(s) instead), and then you add some not-INSTEAD\n> rules to perform logging into other tables that aren't part of the\n> view but just keep track of activity.\n> \n> You'd not want the logging activity to usurp the count result for this\n> setup, I think, even if it happened last. (Indeed, that might be\n> *necessary*, if for some reason it needed to access the rows inserted\n> into the view's base table.)\n> \n> This approach would give us a general principle that applies in all\n> cases: not-INSTEAD rules don't affect the returned command result.\n> Perhaps that would answer Manfred's thought that we should be able\n> to label which rules affect the result. If you have any INSTEAD rules,\n> then it doesn't matter exactly how many you have, so you can mark them\n> INSTEAD or not to suit your fancy.\n\nOh, I like that, and rules fire alphabetically, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 4 Oct 2002 12:53:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Return of INSTEAD rules"
},
{
"msg_contents": "Manfred Koizar wrote:\n> On Thu, 3 Oct 2002 22:21:27 -0400 (EDT), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >so I propose we handle\n> >INSTEAD rules this way: that we return the oid and tuple count of the\n> >last INSTEAD rule query with a tag matching the main query. \n> \n> Bruce, this won't work for this example\n> \n> >> \tCREATE RULE visible_delete AS -- DELETE rule\n> >> \tON DELETE TO visible \n> >> \tDO INSTEAD \n> >> \t COUNT UPDATE table3\n> >> \t SET deleted = 1\n> >> \t WHERE pk = old.pk;\n> \n> because here we don't have a rule query with a matching tag. The same\n> applies to\n\nTrue, but because we have said we are going to return the tag of the\noriginal command, I don't think we have anything valid to return in this\ncase to match the tag.\n\n> >> \tCREATE RULE v_update AS -- UPDATE rule\n> >> \tON UPDATE TO v \n> >> \tDO INSTEAD NOTHING;\n\nThis is OK because the default is to return zeros.\n\n> I wrote:\n> >> One argument against automatically \"don't count non-INSTEAD rules and\n> >> count the last statement in INSTEAD rules\"\n> \n> Seems I introduced a little bit of confusion here by arguing against\n> something that has never been proposed before. Funny that this\n> non-existent proposal is now seriously discussed :-(\n> \n> Has the idea of extending the syntax to explicitly mark queries as\n> COUNTed already been rejected? If yes, I cannot help here. If no, I\n\nWell, I am hoping to find something that is automatic. If we do our\nbest, and we still get complaints, we can add some syntax. 
I am\nconcerned that adding syntax is just over-designing something that isn't\nnecessary.\n\n> keep telling you that this approach can emulate most of the other\n> possible solutions still under discussion.\n> \n> Bruce wrote:\n> >If there is more than one matching tag in\n> >the INSTEAD rule, the user has the option to place the query he wants\n> >for the return at the end of the rule.\n> \n> Are you sure this is always possible without unwanted side effects?\n\nI am sure it isn't always possible, but let's do our best and see how\npeople react.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 4 Oct 2002 12:59:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Return of INSTEAD rules"
},
{
"msg_contents": "\"Michael Paesold\" <mpaesold@gmx.at> writes:\n> In a PL/pgSQL function I want to insert into a table and get the OID back.\n> That usually works with\n> GET DIAGNOSTICS last_oid = RESULT_OID;\n> right after the insert statement.\n\n> But if the table that I insert to has a rule (or perhaps a trigger?) that\n> updates another table, the RESULT_OID after the insert will be 0 (zero).\n\nAs of CVS tip, this example produces the results I believe you want:\n\nregression=# SELECT pltestfunc(10);\nNOTICE: RESULT_OID: 282229\nNOTICE: RESULT_OID: 282230\nNOTICE: RESULT_OID: 282231\nNOTICE: RESULT_OID: 282232\nNOTICE: RESULT_OID: 282233\nNOTICE: RESULT_OID: 282234\nNOTICE: RESULT_OID: 282235\nNOTICE: RESULT_OID: 282236\nNOTICE: RESULT_OID: 282237\nNOTICE: RESULT_OID: 282238\n pltestfunc\n------------\n t\n(1 row)\n\nregression=# SELECT * FROM pltest;\n id | t\n----+------\n 1 | test\n 2 | test\n 3 | test\n 4 | test\n 5 | test\n 6 | test\n 7 | test\n 8 | test\n 9 | test\n 10 | test\n(10 rows)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Oct 2002 19:51:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS? "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Michael Paesold\" <mpaesold@gmx.at> writes:\n> > In a PL/pgSQL function I want to insert into a table and get the OID back.\n> > That usually works with\n> > GET DIAGNOSTICS last_oid = RESULT_OID;\n> > right after the insert statement.\n>\n> > But if the table that I insert to has a rule (or perhaps a trigger?) that\n> > updates another table, the RESULT_OID after the insert will be 0 (zero).\n>\n> As of CVS tip, this example produces the results I believe you want:\n>\n> regression=# SELECT pltestfunc(10);\n> NOTICE: RESULT_OID: 282229\n> NOTICE: RESULT_OID: 282230\n> NOTICE: RESULT_OID: 282231\n...\n[snip]\n\nThat is very good news. I appreciate that you got it fixed for 7.3. I am\nsure I am only one of many who have use for that, but thanks anyway!\n\nBest Regards,\nMichael Paesold\n\n",
"msg_date": "Tue, 15 Oct 2002 11:08:58 +0200",
"msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PL/pgSQL GET DIAGNOSTICS? "
}
] |
[
{
"msg_contents": "Hi,\n\nhere's the latest mail from Sebastian explaining the problem we ran\ninto. As far as I understand the problem, the application uses stored\nprocedures for each and every select statement. Thus he needs his\nprocedures to return the whole query result, which is not doable with\nour functions.\n\nI offered to replace his procedures with the corresponding select\nstatements as I question the design anyway, but due to the amount of\nwork, this isn't a likely solution. As a result, they are considering using\nSAP-DB instead of PostgreSQL. \n\nAny ideas on how to keep them using PostgreSQL are more than appreciated. Of\ncourse mails that prove me right are also appreciated, but I'd love to\nfind a solution. :-)\n\nThanks.\n\nMichael\n\nP.S.: The attached diff should patch cleanly into the current cvs, but\nI'd prefer if Hiroshi or someone else who knows the odbc stuff better\nthan I do takes a look at it before committing.\n\n----- Forwarded message from Sebastian Hetze <s.hetze@linux-ag.de> -----\n\nDate: Fri, 20 Sep 2002 17:37:03 +0200\nFrom: Sebastian Hetze <s.hetze@linux-ag.de>\nTo: michael.meskes@credativ.de\nCc: dpage@vale-housing.co.uk\nSubject: PostgreSQL integration Visual Basic, SQLProcedureColumns\n\nHi *!\n\nIn our current project, we are trying to migrate a database application\nwritten in Visual Basic 6.0 SP5 from Microsoft SQL6.5 to postgresql 7.2.\nWe have about 150 stored procedures in MS-SQL that return result sets\n(multiple rows of multiple columns, just like a select does).\nThe application makes heavy use of ADO/OLEDB methods to access these\nprocedures.\nIt goes without saying that we do not want to rewrite the whole\napplication, so we want to get this ADO thing working the same with\npostgresql.\n\nHere is the summary of our findings so far:\n\n1. 
There is an OLE DB provider for ODBC drivers that bridges the ADO\n interface to the common ODBC interface.\n When using ADO methods to integrate stored procedures (SP) into the\n Visual Basic (VB) IDE, we try to set up the DataEnvironment to\n contain these procedures.\n\n2. When including a new SP into the DataEnvironment, the VB IDE first\n calls SQLProcedures to get a list of the available SP from\n postgresql.\n Once the SP to include has been selected, VB calls SQLProcedureColumns\n to find out about arguments to call the SP with.\n\n3. psqlodbc returns 'not implemented' for SQLProcedureColumns.\n We have implemented it as far as the actual information to be\n returned is available. Patch included.\n There appears to be no way to get the actual description of the\n result set columns. (we simulated that by introducing a new\n system table holding all sorts of information about arguments\n and result set columns for SQLProcedureColumns)\n\n4. postgresql functions are not really the same thing as stored\n procedures. Functions always return one value, which might be an\n integer, a single row (array) or a cursor. postgresql functions\n are SELECTed, not CALLed.\n\n5. We rewrote the SP from MSSQL to return cursors in PL/pgSQL.\n\n6. Unlike SAPDB, where cursors returned by SP are actually used\n to fetch data by the VB IDE and application, we did not get\n these postgresql cursors to work.\n\nWe have experimented quite a while with all different sorts of\ndeclaration of SQL_RETURN_VALUE, SQL_RESULT_COL and SQL_PARAM_OUTPUT\nfor the SQLProcedureColumns results. We did not see any effect on\nthe behaviour of the ADO DataEnvironment. Finally, we got the\nimpression that VB ignores all result set information from\nSQLProcedureColumns and tries to prepare the CALL/SELECT statement\ninstead. 
With postgresql this preparation apparently does not lead\nto any useful results.\n\nThis is where we are stuck now.\n\nAny hints or suggestions on what we could do to solve this riddle?\n\nThanx alot!\n\n Sebastian\n-- \nSebastian Hetze Linux Information Systems AG\n Fon +49 (0)30 72 62 38-0 Ehrenbergstr. 19\nS.Hetze@Linux-AG.com Fax +49 (0)30 72 62 38-99 D-10245 Berlin\nLinux is our Business. ____________________________________ www.Linux-AG.com __\n\ndiff -Nur psqlodbc-dist/info.c psqlodbc-neu/info.c\n--- psqlodbc-dist/info.c\t2002-09-21 16:23:48.000000000 +0200\n+++ psqlodbc-neu/info.c\t2002-09-21 17:12:46.000000000 +0200\n@@ -972,11 +972,16 @@\n \t\t\tpfExists[SQL_API_SQLNUMPARAMS] = TRUE;\n \t\t\tpfExists[SQL_API_SQLPARAMOPTIONS] = TRUE;\n \t\t\tpfExists[SQL_API_SQLPRIMARYKEYS] = TRUE;\n-\t\t\tpfExists[SQL_API_SQLPROCEDURECOLUMNS] = FALSE;\n \t\t\tif (PG_VERSION_LT(conn, 6.5))\n+\t\t\t{\n \t\t\t\tpfExists[SQL_API_SQLPROCEDURES] = FALSE;\n+\t\t\t\tpfExists[SQL_API_SQLPROCEDURECOLUMNS] = FALSE;\n+\t\t\t}\n \t\t\telse\n+\t\t\t{\n \t\t\t\tpfExists[SQL_API_SQLPROCEDURES] = TRUE;\n+\t\t\t\tpfExists[SQL_API_SQLPROCEDURECOLUMNS] = TRUE;\n+\t\t\t}\n \t\t\tpfExists[SQL_API_SQLSETPOS] = TRUE;\n \t\t\tpfExists[SQL_API_SQLSETSCROLLOPTIONS] = TRUE;\t\t/* odbc 1.0 */\n \t\t\tpfExists[SQL_API_SQLTABLEPRIVILEGES] = TRUE;\n@@ -1148,7 +1153,7 @@\n \t\t\t\t\t*pfExists = TRUE;\n \t\t\t\t\tbreak;\n \t\t\t\tcase SQL_API_SQLPROCEDURECOLUMNS:\n-\t\t\t\t\t*pfExists = FALSE;\n+\t\t\t\t\t*pfExists = TRUE;\n \t\t\t\t\tbreak;\n \t\t\t\tcase SQL_API_SQLPROCEDURES:\n \t\t\t\t\tif (PG_VERSION_LT(conn, 6.5))\n@@ -4146,27 +4151,667 @@\n }\n \n \n-RETCODE\t\tSQL_API\n-PGAPI_ProcedureColumns(\n-\t\t\t\t\t HSTMT hstmt,\n-\t\t\t\t\t UCHAR FAR * szProcQualifier,\n-\t\t\t\t\t SWORD cbProcQualifier,\n-\t\t\t\t\t UCHAR FAR * szProcOwner,\n-\t\t\t\t\t SWORD cbProcOwner,\n-\t\t\t\t\t UCHAR FAR * szProcName,\n-\t\t\t\t\t SWORD cbProcName,\n-\t\t\t\t\t UCHAR FAR * szColumnName,\n-\t\t\t\t\t SWORD 
cbColumnName)\n+RETCODE SQL_API\n+PGAPI_ProcedureColumns(HSTMT hstmt,\n+\t\t UCHAR FAR * szSchemaName,\n+\t\t SWORD cbSchemaName,\n+\t\t UCHAR FAR * szProcOwner,\n+\t\t SWORD cbProcOwner,\n+\t\t UCHAR FAR * szProcName,\n+\t\t SWORD cbProcName,\n+\t\t UCHAR FAR * szColumnName, SWORD cbColumnName)\n {\n-\tstatic char *func = \"PGAPI_ProcedureColumns\";\n-\tStatementClass\t*stmt = (StatementClass *) hstmt;\n+ static char *func = \"PGAPI_ProcedureColumns\";\n+ StatementClass *stmt = (StatementClass *) hstmt;\n+ ConnectionClass *conn = SC_get_conn(stmt);\n+ QResultClass *res;\n+ char columns_query[INFO_INQUIRY_LEN];\n+ SQLRETURN result;\n+ SQLHSTMT hcol_stmt;\n+ StatementClass *col_stmt;\n+ Int2 result_cols;\n+ char proc_owner[MAX_INFO_STRING],\n+ proc_name[MAX_INFO_STRING],\n+ colname[255];\n+ SWORD proc_nargs,\n+ proc_retset;\n+ Int4 proc_rettype;\n+ Int2 i;\n+ Oid proc_oid;\n+ Int4 dlen;\n+\n+ /*\n+ * This is a copy from pg_config.h \n+ */\n+#define INDEX_MAX_KEYS 16\n+ Oid proc_argtypes[INDEX_MAX_KEYS];\n \n-\tmylog(\"%s: entering...\\n\", func);\n+ ConnInfo *ci;\n+ TupleNode *row;\n \n+ mylog(\"%s: entering...\\n\", func);\n+\n+ if (PG_VERSION_LT(conn, 6.3))\n+ {\n \tstmt->errornumber = STMT_NOT_IMPLEMENTED_ERROR;\n-\tstmt->errormsg = \"not implemented\";\n-\tSC_log_error(func, \"Function not implemented\", stmt);\n+\tstmt->errormsg = \"Version is too old\";\n+\tSC_log_error(func, \"Function not implemented\",\n+\t\t (StatementClass *) hstmt);\n+\treturn SQL_ERROR;\n+ }\n+\n+ if (!SC_recycle_statement(stmt))\n+\treturn SQL_ERROR;\n+\n+ stmt->manual_result = TRUE;\n+ stmt->errormsg_created = TRUE;\n+\n+ conn = (ConnectionClass *) (stmt->hdbc);\n+ ci = &stmt->hdbc->connInfo;\n+\n+ /*\n+ * This statement is far from elegant. I simply have no idea how to\n+ * get this oidvector thing any other way... FIXME if you can... \n+ */\n+ /*\n+ * columns_query is set up to read everything we know about the\n+ * procedures out of pg_proc. 
There is nothing else we can tell\n+ * about the functions / procs with the current implementation of\n+ * the postgresql system tables. You might want to introduce a new\n+ * system table to hold argument names, possibly return or result\n+ * set name + type information and such things. Sometime in the\n+ * future.... \n+ */\n+ strcpy(columns_query,\n+\t \"select u.usename, p.proname, p.pronargs, p.proretset, p.prorettype, p.proargtypes[0] as arg1, p.proargtypes[1] as arg2, p.proargtypes[2] as arg3, p.proargtypes[3] as arg4, p.proargtypes[4] as arg5, p.proargtypes[5] as arg6, p.proargtypes[6] as arg7, p.proargtypes[7] as arg8, p.proargtypes[8] as arg9, p.proargtypes[9] as arg10, p.proargtypes[10] as arg11, p.proargtypes[11] as arg12, p.proargtypes[12] as arg13, p.proargtypes[13] as arg14, p.proargtypes[14] as arg15, p.proargtypes[15] as arg16, p.oid FROM pg_proc p, pg_user u WHERE p.prorettype <> 0 and (p.pronargs = 0 or oidvectortypes(p.proargtypes) <> '') and p.proowner = u.usesysid\");\n+ my_strcat(columns_query, \" and u.usename like '%.*s'\", szSchemaName,\n+\t cbSchemaName);\n+ my_strcat(columns_query, \" and p.proname like '%.*s'\", szProcName,\n+\t cbProcName);\n+ strcat(columns_query, \" ORDER BY p.proowner, p.proname\");\n+\n+ result = SQLAllocStmt(stmt->hdbc, &hcol_stmt);\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errornumber = STMT_NO_MEMORY_ERROR;\n+\tstmt->errormsg =\n+\t \"Couldn't allocate statement for SQLProcedureColumns result.\";\n+\tSC_log_error(func, \"\", stmt);\n+\treturn SQL_ERROR;\n+ }\n+\n+ col_stmt = (StatementClass *) hcol_stmt;\n+ result =\n+\tSQLExecDirect(hcol_stmt, columns_query, strlen(columns_query));\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = SC_create_errormsg(hcol_stmt);\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ /*\n+ * Now 
that the statement is executed we have to set up buffers to\n+ * hold the values for all columns in each fetch loop ...\n+ * long statement need a lot of bindings. It would be much easier if\n+ * we could bind the oidvector thing as a whole. FIXME: how can this\n+ * be done?? \n+ */\n+\n+ /*\n+ * u.usename \n+ */\n+ result = SQLBindCol(hcol_stmt, 1, SQL_CHAR,\n+\t\t\tproc_owner, MAX_INFO_STRING, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ /*\n+ * p.proname \n+ */\n+ result = SQLBindCol(hcol_stmt, 2, SQL_CHAR,\n+\t\t\tproc_name, MAX_INFO_STRING, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ /*\n+ * p.pronargs \n+ */\n+ result = SQLBindCol(hcol_stmt, 3, SQL_SMALLINT,\n+\t\t\t&proc_nargs, sizeof(SWORD), NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ /*\n+ * p.proretset \n+ */\n+ result = SQLBindCol(hcol_stmt, 4, SQL_BIT,\n+\t\t\t&proc_retset, sizeof(SWORD), NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n \treturn SQL_ERROR;\n+ }\n+\n+ /*\n+ * p.prorettype \n+ */\n+ result = SQLBindCol(hcol_stmt, 5, SQL_INTEGER,\n+\t\t\t&proc_rettype, sizeof(Int4), NULL);\n+\n+ if ((result != 
SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ /*\n+ * p.proargtypes , column by column, 16 times ... \n+ */\n+ result =\n+\tSQLBindCol(hcol_stmt, 6, SQL_INTEGER, &proc_argtypes[0], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 7, SQL_INTEGER, &proc_argtypes[1], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 8, SQL_INTEGER, &proc_argtypes[2], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 9, SQL_INTEGER, &proc_argtypes[3], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 10, SQL_INTEGER, &proc_argtypes[4], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", 
stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 11, SQL_INTEGER, &proc_argtypes[5], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 12, SQL_INTEGER, &proc_argtypes[6], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 13, SQL_INTEGER, &proc_argtypes[7], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 14, SQL_INTEGER, &proc_argtypes[8], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result =\n+\tSQLBindCol(hcol_stmt, 15, SQL_INTEGER, &proc_argtypes[9], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result = SQLBindCol(hcol_stmt, 16, SQL_INTEGER,\n+\t\t\t&proc_argtypes[10], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != 
SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result = SQLBindCol(hcol_stmt, 17, SQL_INTEGER,\n+\t\t\t&proc_argtypes[11], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result = SQLBindCol(hcol_stmt, 18, SQL_INTEGER,\n+\t\t\t&proc_argtypes[12], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result = SQLBindCol(hcol_stmt, 19, SQL_INTEGER,\n+\t\t\t&proc_argtypes[13], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result = SQLBindCol(hcol_stmt, 20, SQL_INTEGER,\n+\t\t\t&proc_argtypes[14], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ result = SQLBindCol(hcol_stmt, 21, SQL_INTEGER,\n+\t\t\t&proc_argtypes[15], 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn 
SQL_ERROR;\n+ }\n+\n+ result = SQLBindCol(hcol_stmt, 22, SQL_INTEGER, &proc_oid, 4, NULL);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ /*\n+ * Now we can set up the manual result set and fill in everything we\n+ * can tell... \n+ */\n+\n+ if (res = QR_Constructor(), !res)\n+ {\n+\tstmt->errormsg =\n+\t \"Couldn't allocate memory for SQLProcedureColumns result.\";\n+\tstmt->errornumber = STMT_NO_MEMORY_ERROR;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ SC_set_Result(stmt, res);\n+\n+ /*\n+ * There are 6 additional columns returned by SQLProcedureColumns\n+ * with ODBC Version 3.0. \n+ */\n+#if (ODBCVER >= 0x0300)\n+ result_cols = 19;\n+#else\n+ result_cols = 13;\n+#endif\n+ extend_column_bindings(SC_get_ARD(stmt), result_cols);\n+\n+ QR_set_num_fields(res, result_cols);\n+ QR_set_field_info(res, 0, \"PROCEDURE_CAT\", PG_TYPE_TEXT,\n+\t\t MAX_INFO_STRING);\n+ QR_set_field_info(res, 1, \"PROCEDURE_OWNER\", PG_TYPE_TEXT,\n+\t\t MAX_INFO_STRING);\n+ QR_set_field_info(res, 2, \"PROCEDURE_NAME\", PG_TYPE_TEXT,\n+\t\t MAX_INFO_STRING);\n+ QR_set_field_info(res, 3, \"COLUMN_NAME\", PG_TYPE_TEXT,\n+\t\t MAX_INFO_STRING);\n+ QR_set_field_info(res, 4, \"COLUMN_TYPE\", PG_TYPE_INT2, 2);\n+ QR_set_field_info(res, 5, \"DATA_TYPE\", PG_TYPE_INT2, 2);\n+ QR_set_field_info(res, 6, \"TYPE_NAME\", PG_TYPE_TEXT, MAX_INFO_STRING);\n+ QR_set_field_info(res, 7, \"COLUMN_SIZE\", PG_TYPE_INT4, 4);\n+ QR_set_field_info(res, 8, \"BUFFER_LENGTH\", PG_TYPE_INT4, 4);\n+ QR_set_field_info(res, 9, \"DECIMAL_DIGITS\", PG_TYPE_INT2, 2);\n+ QR_set_field_info(res, 10, \"NUM_PREC_RADIX\", PG_TYPE_INT2, 2);\n+ QR_set_field_info(res, 11, \"NULLABLE\", PG_TYPE_INT2, 2);\n+ QR_set_field_info(res, 12, \"REMARKS\", 
PG_TYPE_TEXT, 254);\n+#if (ODBCVER >= 0x0300)\n+ QR_set_field_info(res, 13, \"COLUMN_DEF\", PG_TYPE_INT4, 254);\n+ QR_set_field_info(res, 14, \"SQL_DATA_TYPE\", PG_TYPE_INT2, 2);\n+ QR_set_field_info(res, 15, \"SQL_DATETIME_SUB\", PG_TYPE_INT2, 2);\n+ QR_set_field_info(res, 16, \"CHAR_OCTET_LENGTH\", PG_TYPE_INT2, 2);\n+ QR_set_field_info(res, 17, \"ORDINAL_POSITION\", PG_TYPE_INT4, 4);\n+ QR_set_field_info(res, 18, \"IS_NULLABLE\", PG_TYPE_TEXT, 254);\n+#endif\n+\n+ /*\n+ * now we start filling each row for the result set of\n+ * SQLProcedureColumns. The documentation says, we have to build one\n+ * row for each return value, argument and result set column - in\n+ * that order \n+ */\n+ result = PGAPI_Fetch(hcol_stmt);\n+\n+ if ((result != SQL_SUCCESS) && (result != SQL_SUCCESS_WITH_INFO))\n+ {\n+\tstmt->errormsg = col_stmt->errormsg;\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+\n+#define UNKNOWNS_AS_LONGEST 2\n+\n+ while ((result == SQL_SUCCESS) || (result == SQL_SUCCESS_WITH_INFO))\n+ {\n+\tmylog(\"%s: While loop pn: %d...\\n\", func, proc_nargs);\n+\tfor (i = 0; i <= proc_nargs; i++)\n+\t{\n+\t mylog(\"%s: For loop %d...\\n\", func, i);\n+\t row =\n+\t\t(TupleNode *) malloc(sizeof(TupleNode) +\n+\t\t\t\t (result_cols -\n+\t\t\t\t 1) * sizeof(TupleField));\n+\n+\t set_tuplefield_null(&row->tuple[0]);\t/* ProcedureCat */\n+\t set_nullfield_string(&row->tuple[1], proc_owner);\t/* ProcedureOwner */\n+\t set_tuplefield_string(&row->tuple[2], proc_name);\t/* ProcedureName */\n+\n+\t /*\n+\t * Index 0 is for the return value. I assume we always have\n+\t * one. Don't know if this is really the case... 
FIXME if\n+\t * you know better \n+\t */\n+\t if (i == 0)\n+\t {\n+\t\tset_tuplefield_string(&row->tuple[3], \"RETURN_VALUE\");\t/* Column Name */\n+\t\tset_tuplefield_int2(&row->tuple[4], SQL_RETURN_VALUE);\t/* Column / Argument Type */\n+\t\tset_tuplefield_int2(&row->tuple[5],\t/* SQL data type for that column */\n+\t\t\t\t pgtype_to_sqldesctype(stmt,\n+\t\t\t\t\t\t\t proc_rettype));\n+\t\tset_tuplefield_string(&row->tuple[6],\t/* name for that data type, driver specific */\n+\t\t\t\t pgtype_to_name(stmt, proc_rettype));\n+\t\tdlen =\n+\t\t pgtype_desclength(stmt, proc_rettype, 0,\n+\t\t\t\t UNKNOWNS_AS_LONGEST);\n+\t\tset_tuplefield_int4(&row->tuple[7], dlen);\t/* lenght of that data type */\n+\t\tset_tuplefield_int4(&row->tuple[8], dlen);\t/* buffer size for that argument */\n+\t }\n+\t /*\n+\t * now we have to construct one row for each argument\n+\t * required to call our function \n+\t */\n+\t else if (i <= proc_nargs)\n+\t {\n+\t\tsprintf(colname, \"argument%d\", i);\n+\t\tset_tuplefield_string(&row->tuple[3], colname);\n+\t\tset_tuplefield_int2(&row->tuple[4], SQL_PARAM_INPUT);\n+\t\tset_tuplefield_int2(&row->tuple[5],\n+\t\t\t\t pgtype_to_sqldesctype(stmt,\n+\t\t\t\t\t\t\t proc_argtypes[i -\n+\t\t\t\t\t\t\t\t\t1]));\n+\t\tset_tuplefield_string(&row->tuple[6],\n+\t\t\t\t pgtype_to_name(stmt,\n+\t\t\t\t\t\t proc_argtypes[i -\n+\t\t\t\t\t\t\t\t 1]));\n+\t\tdlen =\n+\t\t pgtype_desclength(stmt, proc_argtypes[i - 1], 0,\n+\t\t\t\t UNKNOWNS_AS_LONGEST);\n+\t\tset_tuplefield_int4(&row->tuple[7], dlen);\n+\t\tset_tuplefield_int4(&row->tuple[8], dlen);\n+\t }\n+\t else\n+\t {\n+\t\t/*\n+\t\t * This actually does not happen. Anyway, here we\n+\t\t * could start to construct rows to descripe each\n+\t\t * columnd of the result set. Until now, we do not\n+\t\t * have any information about what our function /\n+\t\t * procedure might return. 
\n+\t\t */\n+\t\tset_tuplefield_string(&row->tuple[3], \"unknown\");\n+\t\tset_tuplefield_int2(&row->tuple[4],\n+\t\t\t\t SQL_PARAM_TYPE_UNKNOWN);\n+\t\tset_tuplefield_int2(&row->tuple[5], 0);\n+\t\tset_tuplefield_string(&row->tuple[6], \"\");\n+\t\tset_tuplefield_int4(&row->tuple[7], 0);\n+\t\tset_tuplefield_null(&row->tuple[8]);\n+\t }\n+\n+\t /*\n+\t * we do not know much about the argument types, do we?\n+\t * These are just reasonable defaults. FIXME if you know\n+\t * better \n+\t */\n+\t set_tuplefield_null(&row->tuple[9]);\t/* DEC DIGITS */\n+\t set_tuplefield_null(&row->tuple[10]);\t/* PREC RADIX */\n+\t set_tuplefield_int2(&row->tuple[11], SQL_NULLABLE_UNKNOWN);\n+\n+\n+\t set_tuplefield_string(&row->tuple[12],\n+\t\t\t\t \"prodedure column remark\");\n+#if (ODBCVER >= 0x0300)\n+\t /*\n+\t * Lots of reasonable defaults follow. \n+\t */\n+\t set_tuplefield_null(&row->tuple[13]);\n+\n+\t if (i == 0)\n+\t {\n+\t\tif ((proc_rettype == PG_TYPE_DATE)\n+\t\t || (proc_rettype == PG_TYPE_TIME)\n+\t\t || (proc_rettype == PG_TYPE_TIME_WITH_TMZONE)\n+\t\t || (proc_rettype == PG_TYPE_DATETIME)\n+\t\t || (proc_rettype == PG_TYPE_ABSTIME)\n+\t\t || (proc_rettype == PG_TYPE_TIMESTAMP_NO_TMZONE)\n+\t\t || (proc_rettype == PG_TYPE_TIMESTAMP))\n+\t\t{\n+\t\t set_tuplefield_int2(&row->tuple[14], SQL_DATETIME);\n+\t\t set_tuplefield_int2(&row->tuple[15],\n+\t\t\t\t\tpgtype_to_datetime_sub(stmt,\n+\t\t\t\t\t\t\t proc_rettype));\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t set_tuplefield_int2(&row->tuple[14],\n+\t\t\t\t\tpgtype_to_sqldesctype(stmt,\n+\t\t\t\t\t\t\t proc_rettype));\n+\t\t set_tuplefield_null(&row->tuple[15]);\n+\t\t}\n+\t }\n+\t else if (i <= proc_nargs)\n+\t {\n+\t\tif ((proc_argtypes[i - 1] == PG_TYPE_DATE)\n+\t\t || (proc_argtypes[i - 1] == PG_TYPE_TIME)\n+\t\t || (proc_argtypes[i - 1] == PG_TYPE_TIME_WITH_TMZONE)\n+\t\t || (proc_argtypes[i - 1] == PG_TYPE_DATETIME)\n+\t\t || (proc_argtypes[i - 1] == PG_TYPE_ABSTIME)\n+\t\t || (proc_argtypes[i - 1] 
==\n+\t\t\tPG_TYPE_TIMESTAMP_NO_TMZONE)\n+\t\t || (proc_argtypes[i - 1] == PG_TYPE_TIMESTAMP))\n+\t\t{\n+\t\t set_tuplefield_int2(&row->tuple[14], SQL_DATETIME);\n+\t\t set_tuplefield_int2(&row->tuple[15],\n+\t\t\t\t\tpgtype_to_datetime_sub(stmt,\n+\t\t\t\t\t\t\t proc_argtypes\n+\t\t\t\t\t\t\t [i - 1]));\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t set_tuplefield_int2(&row->tuple[14],\n+\t\t\t\t\tpgtype_to_sqldesctype(stmt,\n+\t\t\t\t\t\t\t proc_argtypes\n+\t\t\t\t\t\t\t [i - 1]));\n+\t\t set_tuplefield_null(&row->tuple[15]);\n+\t\t}\n+\t }\n+\t else\n+\t {\n+\t\tset_tuplefield_int2(&row->tuple[14], 0);\n+\t\tset_tuplefield_null(&row->tuple[15]);\n+\t }\n+\n+\t set_tuplefield_null(&row->tuple[16]);\t/* CHAR_OCTET_LENGTH */\n+\n+\t /*\n+\t * This one is actually meaningful \n+\t */\n+\t set_tuplefield_int4(&row->tuple[17], i);\t/* ORDINAL_POSITION */\n+\n+\t set_tuplefield_string(&row->tuple[18], \"\");\t/* IS_NULLABLE */\n+#endif\n+\n+\t /*\n+\t * finally we add the manually constructed row to the result\n+\t * set to be returned as the SQLProcedureColumns \n+\t */\n+\t QR_add_tuple(stmt->result, row);\n+\n+\n+\t}\n+\tresult = PGAPI_Fetch(hcol_stmt);\n+\n+ }\n+ if (result != SQL_NO_DATA_FOUND)\n+ {\n+\tstmt->errormsg = SC_create_errormsg(hcol_stmt);\n+\tstmt->errornumber = col_stmt->errornumber;\n+\tSC_log_error(func, \"\", stmt);\n+\tSQLFreeStmt(hcol_stmt, SQL_DROP);\n+\treturn SQL_ERROR;\n+ }\n+\n+ /*\n+ * also, things need to think that this statement is finished so \n+ * the results can be retrieved. 
\n+ */\n+ stmt->status = STMT_FINISHED;\n+\n+ /*\n+ * set up the current tuple pointer for SQLFetch \n+ */\n+ stmt->currTuple = -1;\n+ stmt->rowset_start = -1;\n+ stmt->current_col = -1;\n+\n+ SQLFreeStmt(hcol_stmt, SQL_DROP);\n+ mylog(\"SQLProcedureColumns(): EXIT, stmt=%u\\n\", stmt);\n+\n+ return SQL_SUCCESS;\n }\n \n \n@@ -4206,7 +4851,7 @@\n \t\t\" proname as \" \"PROCEDURE_NAME\" \", '' as \" \"NUM_INPUT_PARAMS\" \",\"\n \t\t \" '' as \" \"NUM_OUTPUT_PARAMS\" \", '' as \" \"NUM_RESULT_SETS\" \",\"\n \t\t \" '' as \" \"REMARKS\" \",\"\n-\t\t \" case when prorettype = 0 then 1::int2 else 2::int2 end as \" \"PROCEDURE_TYPE\" \" from pg_namespace, pg_proc where\");\n+\t\t \" case when prorettype = 0 then 1::int2 else 2::int2 end as \" \"PROCEDURE_TYPE\" \" from pg_namespace, pg_proc\");\n \telse\n \t\tstrcpy(proc_query, \"select '' as \" \"PROCEDURE_CAT\" \", '' as \" \"PROCEDURE_SCHEM\" \",\"\n \t\t\" proname as \" \"PROCEDURE_NAME\" \", '' as \" \"NUM_INPUT_PARAMS\" \",\"\ndiff -Nur psqlodbc-dist/odbcapi30.c psqlodbc-neu/odbcapi30.c\n--- psqlodbc-dist/odbcapi30.c\t2002-09-21 16:23:48.000000000 +0200\n+++ psqlodbc-neu/odbcapi30.c\t2002-09-21 17:20:12.000000000 +0200\n@@ -491,8 +491,7 @@\n \tSQL_FUNC_ESET(pfExists, SQL_API_SQLNUMPARAMS);\t\t/* 63 */\n \t/* SQL_FUNC_ESET(pfExists, SQL_API_SQLPARAMOPTIONS); 64 deprecated */\n \tSQL_FUNC_ESET(pfExists, SQL_API_SQLPRIMARYKEYS);\t/* 65 */\n-\tif (ci->drivers.lie)\n-\t\tSQL_FUNC_ESET(pfExists, SQL_API_SQLPROCEDURECOLUMNS); /* 66 not implemeted yet */ \n+\tSQL_FUNC_ESET(pfExists, SQL_API_SQLPROCEDURECOLUMNS);\t/* 66 */ \n \tSQL_FUNC_ESET(pfExists, SQL_API_SQLPROCEDURES);\t\t/* 67 */\n \tSQL_FUNC_ESET(pfExists, SQL_API_SQLSETPOS);\t\t/* 68 */\n \t/* SQL_FUNC_ESET(pfExists, SQL_API_SQLSETSCROLLOPTIONS); 69 deprecated */\n\n\n----- End forwarded message -----\n\n-- \nDr. Michael Meskes, Geschᅵftsfᅵhrer, credativ GmbH\nKarl-Heinz-Beckurts-Str. 
13, 52428 Jülich, Germany\nTel.: +49 (2461) 69071-0\nFax: +49 (2461) 69071-1\nMobil: +49 (170) 1857143\nEmail: Michael.Meskes@credativ.de",
"msg_date": "Fri, 20 Sep 2002 21:26:01 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "[s.hetze@linux-ag.de: PostgreSQL integration Visual Basic,\n\tSQLProcedureColumns]"
},
{
"msg_contents": "Michael Meskes wrote:\n> into. As far as I understand the problem, the application uses stored\n> procedures for each and every select statement. Thus he needs his\n> procedures to return the whole query result, which is not doable with\n> our functions.\n\nIt is in 7.3.\n\nIf the return tuple definition is fixed:\ninstead of:\n exec sp_myproc()\n go\ndo\n select * from sp_myproc();\n\nIf the return tuple definition is *not* fixed:\ndo\n select * from sp_myproc() as table_alias([col definition list]);\n\nDoes this help any? Can he try the 7.3 beta?\n\nJoe\n\n\n\n\n",
"msg_date": "Fri, 27 Sep 2002 09:53:02 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [ODBC] [s.hetze@linux-ag.de: PostgreSQL integration Visual Basic, "
},
{
"msg_contents": "On Fri, Sep 27, 2002 at 09:53:02AM -0700, Joe Conway wrote:\n> It is in 7.3.\n> \n> If the return tuple definition is fixed:\n> instead of:\n> exec sp_myproc()\n> go\n> do\n> select * from sp_myproc();\n\nThat's a great feature to have. \n\n> If the return tuple definition is *not* fixed:\n> do\n> select * from sp_myproc() as table_alias([col definition list]);\n> \n> Does this help any? Can he try the 7.3 beta?\n\nUnfortunately no. They are not willing to use a beta so they are apparently switching to SAP DB. Sorry.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 29 Sep 2002 19:20:32 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: [ODBC] [s.hetze@linux-ag.de: PostgreSQL integration Visual Basic,\n\tSQLProcedureColumns]"
}
] |
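The 7.3 syntax Joe describes in the thread above can be sketched end to end. The function and table names here are hypothetical, and the quoted-string function body is the pre-dollar-quoting style that 7.3 requires:

```sql
-- Hypothetical setup: a set-returning function over an existing table.
CREATE FUNCTION sp_myproc() RETURNS SETOF mytable AS '
    SELECT * FROM mytable;
' LANGUAGE sql;

-- Fixed return tuple definition: call the function like a table.
SELECT * FROM sp_myproc();

-- Return tuple *not* fixed (e.g. RETURNS SETOF record): the caller
-- supplies a column definition list in the alias.
SELECT * FROM sp_myproc() AS t(id integer, name text);
```

This is the pattern that replaces the T-SQL `exec sp_myproc()` idiom when porting from MS SQL Server.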
[
{
"msg_contents": "Hello -- I'm the lead programmer of Lyris ListManager, an email list server that runs on PostgreSQL, Oracle, and MS/SQL.\n\nAbout 20% of our client base of 4000 runs on PostgreSQL -- it's very popular with our clients -- much more than Oracle is (about 3%).\n\nUnfortunately we have about a dozen clients who have stability problems with PostgreSQL. This week a major television network cancelled their order with us due to their PostgreSQL stability issues, which is what prompted me to write this email and get involved with the PostgreSQL community. \n\nIt seems that with larger database sizes (500,000 rows and larger) and high stress, the server daemon has a tendency to core. We've also had cases where a single connection doing a million inserts into a table will cause the daemon to core. We've seen problems with both 7.1 and 7.2.x, with built-on-the-machine and with RPMs. We've also had big stability problems with Solaris 8/Sparc, and don't ship on that platform because of that.\n\nWhat I'd like to do is help solve these problems in the core distribution, so that PostgreSQL can indeed handle the large databases and high transaction loads that Microsoft SQL can.\n\nMy company has hired open source people before to help fix bugs or add features to open source projects, most notably from the Tcl community, as we use Tcl quite a bit (we have two programmers from the Tcl Core team working here). This works out well for the Tcl community, as we fund the development of the project, as well as pay someone to work on something they want to work on anyhow.\n\nSo... what I'm looking for are recommendations on a PostgreSQL guru who could help nail the stability/load issues, and make sure that the fixes make their way back into the PostgreSQL core. What I'd prefer is to get a regular contributor to this list, so that this person could investigate our problems, and then get the community's help in solving them.\n\nThanks!\n\n-john\n",
"msg_date": "Fri, 20 Sep 2002 14:18:24 -0700",
"msg_from": "John Buckman <john@lyris.com>",
"msg_from_op": true,
"msg_subject": "Lyris looking to help fix PostgresSQL crashing problems"
},
{
"msg_contents": "John Buckman <john@lyris.com> writes:\n> It seems that with larger database sizes (500,000 rows and larger) and\n> high stress, the server daemon has a tendency to core.\n\nWe'd love to see some stack traces ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 17:59:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lyris looking to help fix PostgresSQL crashing problems "
}
] |
[
{
"msg_contents": "Is there ever a need to have more than one conversion for a given\ncombination of encodings? And if I have more than one combination\nregistered, which one is used by the implicit server/client conversion?\n\nAlso, if my server encoding is A and my client encoding is B, and I do\n\nSELECT convert('some string' using a_to_c); -- not B\n\nor even\n\nSELECT convert('some string' using e_to_f);\n\nthis would surely lead to bogus results? What's the use of all this?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 21 Sep 2002 01:43:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Conversion Questions"
},
{
"msg_contents": "> Is there ever a need to have more than one conversion for a given\n> combination of encodings?\n\nSure. For example, several Unicode and SJIS mappings exist depending\non vendors or standards. M$ has its own, Apple has another one...\nIf a user wants to employ Apple's map, he could define his own implicit\nconversion.\n\n> And if I have more than one combination\n> registered, which one is used by the implicit server/client conversion?\n\nThat depends on the current name space. You could find such an example in\nthe conversion regression test. Note that you cannot define more than\none implicit conversion for a schema/server encoding/client encoding\ncombination.\n\n> Also, if my server encoding is A and my client encoding is B, and I do\n> \n> SELECT convert('some string' using a_to_c); -- not B\n> \n> or even\n> \n> SELECT convert('some string' using e_to_f);\n> \n> this would surely lead to bogus results?\n\nYes. Choosing the right conversion is the caller's responsibility.\n\n> What's the use of all this?\n\nOne example. A user wants to apply lower() to a Unicode database.\n\nselect convert(lower(convert('X' using utf_8_to_iso_8859_1)) using iso_8859_1_to_utf_8);\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 21 Sep 2002 09:39:58 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Conversion Questions"
}
] |
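Tatsuo's points above can be made concrete with a short sketch. The conversion function name is hypothetical (it must already exist before registration), and in this era the UTF-8 encoding is spelled 'UNICODE':

```sql
-- Hypothetical: register a vendor-specific map as the implicit (default)
-- conversion for the current schema. Only one default conversion is
-- allowed per schema/source-encoding/destination-encoding combination.
CREATE DEFAULT CONVERSION apple_sjis_to_unicode
    FOR 'SJIS' TO 'UNICODE' FROM apple_sjis_to_unicode_proc;

-- The lower()-on-Unicode example from the thread: round-trip through a
-- single-byte encoding. Picking a conversion pair that matches the
-- actual server/client encodings is the caller's responsibility.
SELECT convert(
         lower(convert('X' USING utf_8_to_iso_8859_1))
         USING iso_8859_1_to_utf_8);
```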
[
{
"msg_contents": "As a result of some disk errors on another drive, an admin in our group\nbrought down the server hosting our pgsql databases with a kill -KILL\nafter having gone to runlevel 1 and finding the postmaster process still\nrunning. No surprise, our installation was hosed in the process. \n\nAfter talking on #postgresql with klamath for about an hour or so to\nwork through the issue (many thanks!), it was suggested that I send\nthe info to this list.\n\nCurrently, PostGreSQL will no longer start, and gives this error.\n\nbash-2.05$ /usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster start\npostmaster successfully started\nbash-2.05$ DEBUG: database system shutdown was interrupted at\n2002-09-19 22:59:54 EDT\nDEBUG: open(logfile 0 seg 0) failed: No such file or directory\nDEBUG: Invalid primary checkPoint record\nDEBUG: open(logfile 0 seg 0) failed: No such file or directory\nDEBUG: Invalid secondary checkPoint record\nFATAL 2: Unable to locate a valid CheckPoint record\n/usr/bin/postmaster: Startup proc 11735 exited with status 512 - abort\n\n\nOur setup is vanilla Red Hat 7.2, having pretty much all of the\npostgresql-*-7.1.3-2 packages installed. 
Klamath asked if I had disabled\nfsync in postgresql.conf, and the only non-default (read: non-commented)\nsetting in the file is: `tcpip_socket = true`\n\n\nKlamath suggested that I run pg_controldata:\n\nbash-2.05$ ./pg_controldata \npg_control version number: 71\nCatalog version number: 200101061\nDatabase state: SHUTDOWNING\npg_control last modified: Thu Sep 19 22:59:54 2002\nCurrent log file id: 0\nNext log file segment: 1\nLatest checkpoint location: 0/1739A0\nPrior checkpoint location: 0/1718F0\nLatest checkpoint's REDO location: 0/1739A0\nLatest checkpoint's UNDO location: 0/0\nLatest checkpoint's StartUpID: 21\nLatest checkpoint's NextXID: 615\nLatest checkpoint's NextOID: 18720\nTime of latest checkpoint: Thu Sep 19 22:49:42 2002\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nLC_COLLATE: en_US\nLC_CTYPE: en_US\n\n\nIf I look into the pg_xlog directory, I see this:\n\nsh-2.05$ cd pg_xlog/\nbash-2.05$ ls -l\ntotal 32808\n-rw------- 1 postgres postgres 16777216 Sep 20 23:13 0000000000000002\n-rw------- 1 postgres postgres 16777216 Sep 19 22:09 000000020000007E\n\n\nThere is one caveat. The installation resides on a partition of its own:\n/dev/hda3 17259308 6531140 9851424 40% /var/lib/pgsql/data\n\nfdisk did not report errors for this partition at boot time after the\nforced shutdown, however.\n\nThis installation serves a university research project, and although\nmost of the code / schemas are in development (and should be in cvs by\nrights), I can't confirm that all projects have indeed done that. So any\nadvice, ideas or suggestions on how the data and / or schemas can be\nrecovered would be greatly appreciated.\n\nMany thanks!\n\n-- pete\n\nP.S.: I've been using pgsql for about four years now, and it played a\nbig role during my grad work. In fact, the availability of pgsql was one\nof the reasons why I was able to complete and graduate. Many thanks for\nsuch a great database!\n\n\n-- \nPete St. 
Onge\nResearch Associate, Computational Biologist, UNIX Admin\nBanting and Best Institute of Medical Research\nProgram in Bioinformatics and Proteomics\nUniversity of Toronto\nhttp://www.utoronto.ca/emililab/ pete@seul.org\n",
"msg_date": "Sat, 21 Sep 2002 01:54:55 -0400",
"msg_from": "\"Pete St. Onge\" <pete@seul.org>",
"msg_from_op": true,
"msg_subject": "Hosed PostGreSQL Installation"
},
{
"msg_contents": "\"Pete St. Onge\" <pete@seul.org> writes:\n> As a result of some disk errors on another drive, an admin in our group\n> brought down the server hosting our pgsql databases with a kill -KILL\n> after having gone to runlevel 1 and finding the postmaster process still\n> running. No surprise, our installation was hosed in the process. \n\nThat should not have been a catastrophic mistake in any version >= 7.1.\nI suspect you had disk problems or other problems.\n\n> Klamath suggested that I run pg_controldata:\n\n> ...\n> Latest checkpoint's StartUpID: 21\n> Latest checkpoint's NextXID: 615\n> Latest checkpoint's NextOID: 18720\n\nThese numbers are suspiciously small for an installation that's been\nin production awhile. I suspect you have not told us the whole story;\nin particular I suspect you already tried \"pg_resetxlog -f\", which was\nprobably not a good idea.\n\n> If I look into the pg_xlog directory, I see this:\n\n> -rw------- 1 postgres postgres 16777216 Sep 20 23:13 0000000000000002\n> -rw------- 1 postgres postgres 16777216 Sep 19 22:09 000000020000007E\n\nYeah, your xlog positions should be a great deal higher than they are,\nif segment 2/7E was previously in use.\n\nIt is likely that you can recover (with some uncertainty about integrity\nof recent transactions) if you proceed as follows:\n\n1. Get contrib/pg_resetxlog/pg_resetxlog.c from the 7.2.2 release (you\ncan't use 7.1's pg_resetxlog because it doesn't offer the switches\nyou'll need). Compile it *against your 7.1 headers*. 
It should compile\nexcept you'll have to remove this change:\n\n***************\n*** 853,858 ****\n--- 394,403 ----\n \tpage->xlp_magic = XLOG_PAGE_MAGIC;\n \tpage->xlp_info = 0;\n \tpage->xlp_sui = ControlFile.checkPointCopy.ThisStartUpID;\n+ \tpage->xlp_pageaddr.xlogid =\n+ \t\tControlFile.checkPointCopy.redo.xlogid;\n+ \tpage->xlp_pageaddr.xrecoff =\n+ \t\tControlFile.checkPointCopy.redo.xrecoff - SizeOfXLogPHD;\n \trecord = (XLogRecord *) ((char *) page + SizeOfXLogPHD);\n \trecord->xl_prev.xlogid = 0;\n \trecord->xl_prev.xrecoff = 0;\n\nTest it using its -n switch to make sure it reports sane values.\n\n2. Run the hacked-up pg_resetxlog like this:\n\n\tpg_resetxlog -l 2 127 -x 1000000000 $PGDATA\n\n(the -l position is next beyond what we see in pg_xlog, the 1-billion\nXID is just a guess at something past where you were. Actually, can\nyou give us the size of pg_log, ie, $PGDATA/global/1269? That would\nallow computing a correct next-XID to use. Figure 4 XIDs per byte,\nthus if pg_log is 1 million bytes you need -x at least 4 million.)\n\n3. The postmaster should start now.\n\n4. *Immediately* attempt to do a pg_dumpall. Do not pass GO, do not\ncollect $200, do not let in any interactive clients until you've done\nit. (I'd suggest tweaking pg_hba.conf to disable all logins but your\nown.)\n\n5. If pg_dumpall succeeds and produces sane-looking output, then you've\nsurvived. initdb, reload the dump file, re-open for business, go have\na beer. (Recommended: install 7.2.2 and reload into that, not 7.1.*.)\nYou will probably still need to check for partially-applied recent\ntransactions, but for the most part you should be OK.\n\n6. If pg_dumpall fails then let us know what the symptoms are, and we'll\nsee if we can figure out a workaround for whatever the corruption is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Sep 2002 11:13:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hosed PostGreSQL Installation "
},
{
"msg_contents": "Just following up on Tom Lane's email - \n\nA couple of things that I hadn't mentioned: After bringing up the\nmachine, the first thing I did before mucking about with PostGreSQL was\nto tarball $PGDATA so that I'd have a second chance if I messed up. I\nthen ran pg_resetlog -f the first time, as Tom surmised, with the\nunwanted results. \n\nThat done, I sent out the email, and followed Tom's instructions (yay\nbackups!) and did it properly.\n\nOn Sat, Sep 21, 2002 at 11:13:44AM -0400, Tom Lane wrote:\n> \"Pete St. Onge\" <pete@seul.org> writes:\n> \n> That should not have been a catastrophic mistake in any version >= 7.1.\n> I suspect you had disk problems or other problems.\n We did, but these were on a different disk according to the logs,\nAFAIK. \n\n> These numbers are suspiciously small for an installation that's been\n> in production awhile. I suspect you have not told us the whole story;\n> in particular I suspect you already tried \"pg_resetxlog -f\", which was\n> probably not a good idea.\n *raises hand* Yep.\n\nHere's the contents of the pg_xlog directory. PGSQL has only been used\nhere for approximately 4 months of fairly light use, so perhaps the\nnumbers aren't as strange as they could be (this is from the backup).\n\n-rw------- 1 postgres postgres 16777216 Sep 19 22:09 000000020000007E\n\n\n> Yeah, your xlog positions should be a great deal higher than they are,\n> if segment 2/7E was previously in use.\n> \n> It is likely that you can recover (with some uncertainty about integrity\n> of recent transactions) if you proceed as follows:\n> \n> 1. Get contrib/pg_resetxlog/pg_resetxlog.c from the 7.2.2 release ...\n[Chomp]\n\nThe compile worked without a hitch after doing ./configure in the\ntop-level directory. I just downloaded the src for both trees, made the\nchanges manually, copied the file into the 7.1.3 tree and compiled it\nthere. \n\n> 2. 
Run the hacked-up pg_resetxlog like this:\n> \n> \tpg_resetxlog -l 2 127 -x 1000000000 $PGDATA\n> \n> (the -l position is next beyond what we see in pg_xlog, the 1-billion\n> XID is just a guess at something past where you were. Actually, can\n> you give us the size of pg_log, ie, $PGDATA/global/1269? That would\n> allow computing a correct next-XID to use. Figure 4 XIDs per byte,\n> thus if pg_log is 1 million bytes you need -x at least 4 million.)\n\n -rw------- 1 postgres postgres 11870208 Sep 19 17:00 1269\n\n This gives a min WAL starting location of 47480832. I used\n47500000.\n\n\n> 3. The postmaster should start now.\n I had to use pg_resetxlog's force option, but yeah, it worked like\nyou said it would.\n\n> 4. *Immediately* attempt to do a pg_dumpall. Do not pass GO, do not\n> collect $200, do not let in any interactive clients until you've done\n> it. (I'd suggest tweaking pg_hba.conf to disable all logins but your\n> own.)\n I did not pass go, I did not collect $200. I *did* do a pg_dumpall\nright there and then, and was able to dump everything I needed. One\nof the projects uses large objects - image files and html files (don't\nask, I've already tried to dissuade the Powers-That-Be) - and these\ndidn't come out. However, since this stuff is entered via script, the\nproject leader was fine with re-running the scripts tomorrow.\n\n\n> 5. If pg_dumpall succeeds and produces sane-looking output, then you've\n> survived. initdb, reload the dump file, re-open for business, go have\n> a beer. (Recommended: install 7.2.2 and reload into that, not 7.1.*.)\n> You will probably still need to check for partially-applied recent\n> transactions, but for the most part you should be OK.\n rpm -Uvh'ed the 7.2.2 RPMs, initdb'd and reloaded data into the new\ninstallation. Pretty painless. I've just sent out an email to folks here\nto let them know the situation, and we should know in the next day or so\nwhat is up.\n\n\n> 6. 
If pg_dumpall fails then let us know what the symptoms are, and we'll\n> see if we can figure out a workaround for whatever the corruption is.\n I've kept the tarball with the corrupted data. I'll hold onto it\nfor a bit, in case, but will likely expunge it in the next week or so.\nIf this can have a use for the project (whatever it may be), let me know\nand I can burn it to DVD.\n\n Of course, without your help, Tom, there would be a lot of Very\nUnhappy People here, me only being one of them. Many thanks for your\nhelp and advice!\n\n Cheers,\n\n Pete \n\n\n-- \nPete St. Onge\nResearch Associate, Computational Biologist, UNIX Admin\nBanting and Best Institute of Medical Research\nProgram in Bioinformatics and Proteomics\nUniversity of Toronto\nhttp://www.utoronto.ca/emililab/\n",
"msg_date": "Tue, 24 Sep 2002 00:29:36 -0400",
"msg_from": "\"Pete St. Onge\" <pete@seul.org>",
"msg_from_op": true,
"msg_subject": "Re: Hosed PostGreSQL Installation"
}
] |
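Tom's rule of thumb for the `pg_resetxlog -x` value (pg_log stores 2 commit-status bits per transaction, hence 4 XIDs per byte) can be checked with plain arithmetic. For Pete's 11870208-byte `$PGDATA/global/1269`:

```sql
-- 4 XIDs per byte of pg_log; pass something comfortably above this
-- to pg_resetxlog -x.
SELECT 11870208 * 4 AS min_next_xid;   -- 47480832; Pete rounded up to 47500000
```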
[
{
"msg_contents": "I have noticed a change in behavior following the recent changes for\ncasting of numeric constants. In prior releases, we got\n\nregression=# select log(10.1);\n log\n------------------\n 1.00432137378264\n(1 row)\n\nCVS tip gives\n\nregression=# select log(10.1);\n log\n--------------\n 1.0043213738\n(1 row)\n\nThe reason for the change is that 10.1 used to be implicitly typed as\nfloat8, but now it's typed as numeric, so this command invokes\nlog(numeric) instead of log(float8). And log(numeric)'s idea of\nadequate output precision seems low.\n\nSimilar problems occur with sqrt(), exp(), ln(), pow().\n\nI realize that there's a certain amount of cuteness in being able to\ncalculate these functions to arbitrary precision, but this is a database\nnot a replacement for bc(1). ISTM the numeric datatype is intended for\nexact calculations, and so transcendental functions (which inherently\nhave inexact results) don't belong.\n\nSo proposal #1 is to rip out the numeric versions of these functions.\n\nIf you're too attached to them, proposal #2 is to force them to\ncalculate at least 16 digits of output, so that we aren't losing any\naccuracy compared to prior behavior.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Sep 2002 12:01:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "NUMERIC's transcendental functions"
},
{
"msg_contents": "\nWhen you say:\n\n> So proposal #1 is to rip out the numeric versions of these functions.\n\nyou mean remove the ability to do transcendentals on numerics to prevent\nsuch unusual auto-casting? Are you suggesting that in all other cases,\nautocast to numeric is OK? I am a little confused.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I have noticed a change in behavior following the recent changes for\n> casting of numeric constants. In prior releases, we got\n> \n> regression=# select log(10.1);\n> log\n> ------------------\n> 1.00432137378264\n> (1 row)\n> \n> CVS tip gives\n> \n> regression=# select log(10.1);\n> log\n> --------------\n> 1.0043213738\n> (1 row)\n> \n> The reason for the change is that 10.1 used to be implicitly typed as\n> float8, but now it's typed as numeric, so this command invokes\n> log(numeric) instead of log(float8). And log(numeric)'s idea of\n> adequate output precision seems low.\n> \n> Similar problems occur with sqrt(), exp(), ln(), pow().\n> \n> I realize that there's a certain amount of cuteness in being able to\n> calculate these functions to arbitrary precision, but this is a database\n> not a replacement for bc(1). 
ISTM the numeric datatype is intended for\n> exact calculations, and so transcendental functions (which inherently\n> have inexact results) don't belong.\n> \n> So proposal #1 is to rip out the numeric versions of these functions.\n> \n> If you're too attached to them, proposal #2 is to force them to\n> calculate at least 16 digits of output, so that we aren't losing any\n> accuracy compared to prior behavior.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 21:57:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC's transcendental functions"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I have noticed a change in behavior following the recent changes for\n> casting of numeric constants. In prior releases, we got\n> \n> regression=# select log(10.1);\n> log\n> ------------------\n> 1.00432137378264\n> (1 row)\n> \n> CVS tip gives\n> \n> regression=# select log(10.1);\n> log\n> --------------\n> 1.0043213738\n> (1 row)\n> \n> The reason for the change is that 10.1 used to be implicitly typed as\n> float8, but now it's typed as numeric, so this command invokes\n> log(numeric) instead of log(float8). And log(numeric)'s idea of\n> adequate output precision seems low.\n> \n> Similar problems occur with sqrt(), exp(), ln(), pow().\n> \n> I realize that there's a certain amount of cuteness in being able to\n> calculate these functions to arbitrary precision, but this is a database\n> not a replacement for bc(1). ISTM the numeric datatype is intended for\n> exact calculations, and so transcendental functions (which inherently\n> have inexact results) don't belong.\n> \n> So proposal #1 is to rip out the numeric versions of these functions.\n> \n> If you're too attached to them, proposal #2 is to force them to\n> calculate at least 16 digits of output, so that we aren't losing any\n> accuracy compared to prior behavior.\n> \n> Comments?\n\nOne problem is, that division already has an inherently inexact\nresult. Do you intend to rip that out too while at it? (Just\nkidding)\n\nProposal #2.667 would be to have a GUC variable for the default\nprecision.\n\n\nJan\n\n\n> \n> regards, tom lane\n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n",
"msg_date": "Mon, 23 Sep 2002 17:07:28 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC's transcendental functions"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> One problem is, that division already has an inherently inexact\n> result. Do you intend to rip that out too while at it? (Just\n> kidding)\n\nNo, but that too is now delivering less precision than it used to:\n\nregression=# select 10.1/7.0;\n ?column?\n--------------\n 1.4428571429\n(1 row)\n\nversus 1.44285714285714 in prior releases.\n\n> Proposal #2.667 would be to have a GUC variable for the default\n> precision.\n\nPerhaps, but I'd be satisfied if the default precision were at least\n16 digits. Again, the point is not to have any apparent regression\nfrom 7.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 17:41:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: NUMERIC's transcendental functions "
},
{
"msg_contents": "\nSeems we need to resolve this before beta2.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > One problem is, that division already has an inherently inexact\n> > result. Do you intend to rip that out too while at it? (Just\n> > kidding)\n> \n> No, but that too is now delivering less precision than it used to:\n> \n> regression=# select 10.1/7.0;\n> ?column?\n> --------------\n> 1.4428571429\n> (1 row)\n> \n> versus 1.44285714285714 in prior releases.\n> \n> > Proposal #2.667 would be to have a GUC variable for the default\n> > precision.\n> \n> Perhaps, but I'd be satisfied if the default precision were at least\n> 16 digits. Again, the point is not to have any apparent regression\n> from 7.2.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 20:25:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "BETA2 HOLD: was Re: NUMERIC's transcendental functions"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems we need to resolve this before beta2.\n\nNot really. It's just a bug; we have others.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 20:33:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: BETA2 HOLD: was Re: NUMERIC's transcendental functions "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Seems we need to resolve this before beta2.\n\nI'd go with making the NUMERIC default precision 16 for v7.3, so\nwe are backwards compatible on this release (except that it is\nnow a predictable 16 digit precision instead of a hardware\nimplementation dependent one).\n\nFor v7.4 we can discuss that a while.\n\n\nJan\n\n> \n> ---------------------------------------------------------------------------\n> \n> Tom Lane wrote:\n> > Jan Wieck <JanWieck@Yahoo.com> writes:\n> > > One problem is, that division already has an inherently inexact\n> > > result. Do you intend to rip that out too while at it? (Just\n> > > kidding)\n> >\n> > No, but that too is now delivering less precision than it used to:\n> >\n> > regression=# select 10.1/7.0;\n> > ?column?\n> > --------------\n> > 1.4428571429\n> > (1 row)\n> >\n> > versus 1.44285714285714 in prior releases.\n> >\n> > > Proposal #2.667 would be to have a GUC variable for the default\n> > > precision.\n> >\n> > Perhaps, but I'd be satisfied if the default precision were at least\n> > 16 digits. Again, the point is not to have any apparent regression\n> > from 7.2.\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n",
"msg_date": "Mon, 23 Sep 2002 20:34:30 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: BETA2 HOLD: was Re: NUMERIC's transcendental functions"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems we need to resolve this before beta2.\n> \n> Not really. It's just a bug; we have others.\n\nOh, it doesn't affect initdb?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 23 Sep 2002 20:38:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BETA2 HOLD: was Re: NUMERIC's transcendental functions"
},
{
"msg_contents": "\nIs this an open item?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > One problem is, that division already has an inherently inexact\n> > result. Do you intend to rip that out too while at it? (Just\n> > kidding)\n> \n> No, but that too is now delivering less precision than it used to:\n> \n> regression=# select 10.1/7.0;\n> ?column?\n> --------------\n> 1.4428571429\n> (1 row)\n> \n> versus 1.44285714285714 in prior releases.\n> \n> > Proposal #2.667 would be to have a GUC variable for the default\n> > precision.\n> \n> Perhaps, but I'd be satisfied if the default precision were at least\n> 16 digits. Again, the point is not to have any apparent regression\n> from 7.2.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 28 Sep 2002 23:24:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC's transcendental functions"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is this an open item?\n\nYes. (Fooling with it right now, in fact, in a desultory way ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Sep 2002 23:26:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: NUMERIC's transcendental functions "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is this an open item?\n> \n> Yes. (Fooling with it right now, in fact, in a desultory way ...)\n\nOK, added.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 28 Sep 2002 23:27:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC's transcendental functions"
}
]
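Editor's aside: the thread above turns on how many digits a NUMERIC result should carry by default. The same effect can be illustrated outside PostgreSQL with Python's `decimal` module (this is only an analogy, not PostgreSQL's implementation): the precision of a result is a property of the computation context, not of the operands, which is why `10.1/7.0` can yield either 1.4428571429 or 1.44285714285714 depending on configuration.

```python
from decimal import Decimal, getcontext

# Context precision is counted in significant digits, so it only
# approximates NUMERIC's display scale, but the effect is the same:
# the operands do not determine how many digits the result carries.

getcontext().prec = 11                      # roughly the new 7.3 behavior
print(Decimal("10.1") / Decimal("7.0"))     # 1.4428571429

getcontext().prec = 16                      # roughly the old float8 behavior
print(Decimal("10.1") / Decimal("7.0"))     # 1.442857142857143

getcontext().prec = 15                      # matches the old log() output
print(Decimal("10.1").log10())              # 1.00432137378264
```

Raising the default precision to 16 digits, as Tom proposes, corresponds to simply running the whole computation in the wider context.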
[
{
"msg_contents": "> John Buckman <john@lyris.com> writes:\n> > It seems that with larger database sizes (500,000 rows and larger) and\n> > high stress, the server daemon has a tendency to core.\n\n> We'd love to see some stack traces ...\n\nYeah, I just didn't know what form this list prefers to work on things, which is why I'd prefer to hire a regular participant of this list. If gcc 'where' stack traces are what you want, we can do that. \n\nI suspect that the problems may be platform-or-build related, because we've often had trouble replicating customer problems on our own systems. For example, we had many reports of problems with 7.2.x, and saw it crash often on a customer's redhat machine that we had ssh access to, but couldn't make it crash in our own lab. :( That's why we need help. If we could make a simple C test case that crashed pgsql, I'm sure you guys could fix the problem in a jiffy.\n\n-john\n",
"msg_date": "Sat, 21 Sep 2002 19:04:21 -0700",
"msg_from": "John Buckman <john@lyris.com>",
"msg_from_op": true,
"msg_subject": "Re: Lyris looking to help fix PostgresSQL crashing problems"
},
{
"msg_contents": "John Buckman wrote:\n> > John Buckman <john@lyris.com> writes:\n> > > It seems that with larger database sizes (500,000 rows and larger) and\n> > > high stress, the server daemon has a tendency to core.\n> \n> > We'd love to see some stack traces ...\n> \n> Yeah, I just didn't know what form this list prefers to work on\n> things, which is why I'd prefer to hire a regular participant\n> of this list. If gcc 'where' stack traces are what you want,\n> we can do that.\n\nYep, in most cases, the crash creates a core file in the database\ndirectory. A backtrace of that core file is usually a good start. You\nshould to sure there are debugging symbols in the binary (gcc -g).\n\nThe server log files also often contain valuable information.\n\n> I suspect that the problems may be platform-or-build related,\n> because we've often had trouble replicating customer problems\n> on our own systems. For example, we had many reports of problems\n> with 7.2.x, and saw it crash often on a customer's redhat machine\n> that we had ssh access to, but couldn't make it crash in our\n> own lab. :( That's why we need help. If we could make a simple\n> C test case that crashed pgsql, I'm sure you guys could fix the\n> problem in a jiffy.\n\nYes, that does make it harder, but a backtrace usually gets us started. \nIt may also be tickling some OS bug or a hardware failure, or a simple\nexhaustion of some resource.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 00:22:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lyris looking to help fix PostgresSQL crashing problems"
}
]
[
{
"msg_contents": "> John Buckman <john@lyris.com> writes:\n> > It seems that with larger database sizes (500,000 rows and larger) and\n> > high stress, the server daemon has a tendency to core.\n\n> We'd love to see some stack traces ...\n\nYeah, I just didn't know what form this list prefers in terms of info to be able to work on things, which is why I'd prefer to hire a regular participant of this list. If gcc 'where' stack traces from core files are what you want, we can do that. \n\nI suspect that the problems may be platform-or-build related, because we've often had trouble replicating customer problems on our own systems. For example, we had many reports of problems with 7.2.x, and saw it crash often on a customer's redhat machine that we had ssh access to, but couldn't make it crash in our own lab. :( That's why we need help. If we could make a simple C test case that crashed pgsql, I'm sure you guys could fix the problem in a jiffy, but localizing and recreating a problem is always 80% of it.\n\n-john\n\n",
"msg_date": "Sat, 21 Sep 2002 22:43:27 -0700",
"msg_from": "John Buckman <john@lyris.com>",
"msg_from_op": true,
"msg_subject": "Re: Lyris looking to help fix PostgresSQL crashing problems"
}
]
[
{
"msg_contents": "I need to be able to download info from my public library website a\nprogram called Reference USA it will only allow you to download 10 at\na time...I would think there is a way to open this up...any help would\nbe appreciated.\n",
"msg_date": "Sun, 22 Sep 2002 09:04:22 -0400",
"msg_from": "fostered <fostered@mindspring.com>",
"msg_from_op": true,
"msg_subject": "Will Pay for Help"
},
{
"msg_contents": "Is it just me, or is this not very clear?\n\nCould you be more specific on what you need?\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of fostered\nSent: Sunday, September 22, 2002 7:04 AM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] Will Pay for Help\n\n\nI need to be able to download info from my public library website a\nprogram called Reference USA it will only allow you to download 10 at\na time...I would think there is a way to open this up...any help would\nbe appreciated.\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Sat, 28 Sep 2002 11:41:19 -0600",
"msg_from": "\"Jonah H. Harris\" <jharris@nightstarcorporation.com>",
"msg_from_op": false,
"msg_subject": "Re: Will Pay for Help"
}
]
[
{
"msg_contents": "Hi everyone,\n\nHave been putting together a tool called \"pg_autotune\" for automatically\ntuning a PostgreSQL database (either local or remote). It does this by\nrepetitively benchmarking PostgreSQL (using Tatsuo's pgbench code) with\ndifferent buffer settings, then fine tuning those settings depending on\nthe results returned. All of the data generated is stored into a\nseparate PostgreSQL database for further aggregate analysis later on.\n\nThis should be a default tool for all new PostgreSQL installations.\n\nThe URL for pg_autotune is:\n\nhttp://gborg.postgresql.org/project/pgautotune\n\nIt was created on a FreeBSD system, but should also work on at least\nLinux, Solaris, and MacOS X.\n\nThis is a time & load intensive tool, so you'll need to ensure you only\nrun it when you have a couple of hours to wait for the results. \nOvernight is good. :)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 23 Sep 2002 01:07:58 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "New PostgreSQL Tool available : pg_autotune"
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Have been putting together a tool called \"pg_autotune\" for automatically\n> tuning a PostgreSQL database (either local or remote). It does this by\n> repetitively benchmarking PostgreSQL (using Tatsuo's pgbench code) with\n> different buffer settings, then fine tuning those settings depending on\n> the results returned.\n\nYou should have chosen a better foundation. pg_bench is notorious for\nproducing results that are (a) nonrepeatable and (b) not relevant to\na wide variety of situations. All it really tells you about is the\nefficiency of a large number of updates to a small number of rows.\n\nI'd take the results with a large grain of salt.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Sep 2002 11:30:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New PostgreSQL Tool available : pg_autotune "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> > Have been putting together a tool called \"pg_autotune\" for automatically\n> > tuning a PostgreSQL database (either local or remote). It does this by\n> > repetitively benchmarking PostgreSQL (using Tatsuo's pgbench code) with\n> > different buffer settings, then fine tuning those settings depending on\n> > the results returned.\n> \n> You should have chosen a better foundation. pg_bench is notorious for\n> producing results that are (a) nonrepeatable and (b) not relevant to\n> a wide variety of situations. All it really tells you about is the\n> efficiency of a large number of updates to a small number of rows.\n\nHi Tom,\n\nYou're totally right about this. Have been forced to ensure that each\nclient connection does a minimum of 200 transactions per connection,\netc, just to get anything in the way of reliable results.\n\nIt's just that this started out as playing around with pgbench, then\ngrew from that. However, it's been put together so that other tests can\nbe added easily, and it doesn't even have to use Tatsuo's pgbench code.\n\nWas thinking of asking Andy Riebs if we'd be ok to use his OSDB code, as\nwe'd need him to ok this in order to have it still be under the BSD\nlicense.\n\n> I'd take the results with a large grain of salt.\n\nIt takes the inaccuracy of Tatsuo's pgbench code into account.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> regards, tom lane\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 23 Sep 2002 01:55:06 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: New PostgreSQL Tool available : pg_autotune"
},
{
"msg_contents": "> You should have chosen a better foundation. pg_bench is notorious for\n> producing results that are (a) nonrepeatable and (b) not relevant to\n> a wide variety of situations. All it really tells you about is the\n> efficiency of a large number of updates to a small number of rows.\n\nYou might want to try -N option of pgbench. It avoids updates to\nbranches and tellers tables.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 26 Sep 2002 10:26:21 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] New PostgreSQL Tool available :"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> > You should have chosen a better foundation. pg_bench is notorious for\n> > producing results that are (a) nonrepeatable and (b) not relevant to\n> > a wide variety of situations. All it really tells you about is the\n> > efficiency of a large number of updates to a small number of rows.\n> \n> You might want to try -N option of pgbench. It avoids updates to\n> branches and tellers tables.\n\nCool. Do you feel this will noticeably increase the consistency of the\nmeasurements?\n\nThe inconsistency of the internal benchmark results means that\npg_autotune has been using 5-run averages, and using a large tolerance\nfactor by default. It would be good to improve on that.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> --\n> Tatsuo Ishii\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 26 Sep 2002 11:33:48 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New PostgreSQL Tool available :pg_autotune"
}
]
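Editor's aside: Justin mentions that pg_autotune compensates for pgbench's noisy numbers with 5-run averages and a large tolerance factor. A minimal sketch of that idea (a hypothetical helper, not pg_autotune's actual code) looks like this: average several runs and explicitly flag a result set whose relative spread exceeds the tolerance, rather than silently trusting a single noisy figure.

```python
def averaged_tps(run_once, runs=5, tolerance=0.15):
    """Average several benchmark runs and flag noisy results.

    run_once() is any callable returning a tps figure; a result set
    whose relative spread exceeds `tolerance` is marked unreliable.
    """
    results = [run_once() for _ in range(runs)]
    mean = sum(results) / len(results)
    spread = (max(results) - min(results)) / mean
    return mean, spread <= tolerance

# With a deterministic stand-in for a pgbench invocation:
samples = iter([100.0, 105.0, 95.0, 100.0, 100.0])
mean, reliable = averaged_tps(lambda: next(samples))
print(mean, reliable)   # 100.0 True
```

In the real tool, `run_once` would shell out to pgbench (ideally with `-N`, per Tatsuo's suggestion) and an unreliable result would trigger a re-run rather than feeding bad data into the tuning loop.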
[
{
"msg_contents": "Hi all,\n\nI just submitted a project to GBorg. I got it submitted and it told me that \nGBorg staff would be back to me after review.\n\nI would love to have a check box on project registration page which asks \nwhether you have some code to submit or not. \n\nBecause in my case I have some..;-)\n\nTIA..\nBye\n Shridhar\n\n--\nSilverman's Law:\tIf Murphy's Law can go wrong, it will.\n\n",
"msg_date": "Sun, 22 Sep 2002 20:41:31 +0530",
"msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Gborg projects"
}
]
[
{
"msg_contents": "What's the strategy for naming things schema or namespace? In notice that\npg_dump messages are all about namespaces. That seems confusing from a\nuser's viewpoint.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 22 Sep 2002 17:42:30 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Schema vs Namespace"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> What's the strategy for naming things schema or namespace? In notice that\n> pg_dump messages are all about namespaces. That seems confusing from a\n> user's viewpoint.\n\nProbably the user-visible messages should all mention schemas.\n\nI named the catalog pg_namespace because I didn't want to nail down\na presumption that the things in it are exactly equivalent to SQL\nschemas; namespaces are an implementation mechanism to support schemas,\nbut not necessarily an equivalent concept. But this bit of\nimplementation philosophy isn't very relevant for users.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Sep 2002 13:05:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Schema vs Namespace "
}
]
[
{
"msg_contents": "$ postmaster --help\n...\nReport bugs to <pgsql-bugs@postgresql.org>.\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: exit(0)\n$\n\nThis is from a fresh installation, no debugging turned on.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 22 Sep 2002 17:43:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Postmaster help output"
},
{
"msg_contents": "Peter Eisentraut dijo: \n\n> $ postmaster --help\n> ...\n> Report bugs to <pgsql-bugs@postgresql.org>.\n> DEBUG: proc_exit(0)\n> DEBUG: shmem_exit(0)\n> DEBUG: exit(0)\n> $\n\nThis is weird:\n\n$ postmaster -d1 --help\nFATAL: --help requires argument\n$\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n",
"msg_date": "Sun, 22 Sep 2002 11:57:49 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster help output"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> Peter Eisentraut dijo: \n>> $ postmaster --help\n>> ...\n>> Report bugs to <pgsql-bugs@postgresql.org>.\n>> DEBUG: proc_exit(0)\n>> DEBUG: shmem_exit(0)\n>> DEBUG: exit(0)\n>> $\n\nFixed: someone was sloppy about the initial value of server_min_messages.\n\n> This is weird:\n\n> $ postmaster -d1 --help\n> FATAL: --help requires argument\n> $\n\nThis is because --help is special-cased and only works at the first\nargument position. As you have it, it's being taken as a GUC switch\nname.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Sep 2002 15:54:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster help output "
}
]
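Editor's aside: Tom's explanation is that `--help` is special-cased only in the first argument position; anywhere else, a bare `--name` is parsed as a GUC setting and therefore "requires argument". A toy re-creation of that parsing order (an illustration of the described behavior, not the actual postmaster code) makes the `-d1 --help` failure obvious:

```python
def parse(argv):
    # --help is only recognized when it is the very first argument.
    if argv[:1] == ["--help"]:
        return "help"
    settings = {}
    for arg in argv:
        if arg.startswith("--"):
            # Everything else of the form --name=value is a GUC setting.
            name, eq, value = arg[2:].partition("=")
            if not eq:
                raise SystemExit(f"FATAL: --{name} requires argument")
            settings[name] = value
    return settings

print(parse(["--help"]))          # help
print(parse(["--port=5433"]))     # {'port': '5433'}
# parse(["-d1", "--help"]) exits with: FATAL: --help requires argument
```

Once `-d1` occupies the first slot, `--help` falls through to the GUC branch, has no `=value`, and produces exactly the FATAL message Alvaro saw.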
[
{
"msg_contents": "> >From /.\n> \n> \n> \"Ever since Oracle announced they wouldn't port 9i to NetWare, Novell\n> has been scrambling to find an enterprise-capable DB. Now it looks like\n> they're settling on PostgreSQL. This follows their decision to ship\n> Apache as the default web server for NetWare 6. Linux aficionados might\n> sneer at an old workhorse like NetWare, but it's got more than 80\n> million client licenses worldwide, and it ain't going anywhere anytime\n> soon.\" \n> \n> http://developer.novell.com/connections/091902.html\n\n",
"msg_date": "Mon, 23 Sep 2002 10:58:58 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "FW: PostgreSQL for Netware"
}
]
[
{
"msg_contents": "Hi Guys,\n\nI'm heading off on a 5 week European trip tommorrow, so I'm not going to be\naround until the 31st oct.\n\nI hope there'll be a nice new release version of Postgres I can upgrade to\nwhen I get back!\n\nChris\n\n",
"msg_date": "Mon, 23 Sep 2002 11:59:29 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "European Vacation"
}
]