[ { "msg_contents": "SQL9x defines bit string constants with a format like\n\n B'101010'\nand\n X'ABCD'\n\nfor binary and hexadecimal representations. But at the moment we don't\nexplicitly handle both of these cases as bit strings; the hex version is\nconverted to decimal in the scanner (*really* early in the parsing\nstage) and then handled as an integer.\n\nIt looks like our current bit string type support looks for a \"B\" or \"X\"\nembedded in the actual input string, rather than outside the quote as in\nthe standard.\n\nI'd like to have more support for the SQL9x syntax, which requires a\nlittle more invasive modification of at least the scanner and parser. I\nhave a couple of questions:\n\n1) the SQL standard says what hex values should be translated to in\nbinary, which implies that all values may be *output* in binary format.\nShould we do this, or should we preserve the info on what units were\nused for input in the internal storage? Anyone interpret the standard\ndifferently from this??\n\n2) we would need to be able to determine the format style when a string\nis output to be able to reconstruct the SQL shorthand representation (if\npreserving binary or hex is to be done). So a column or value should\nhave a corresponding is_hex() function call. Any other suggestions??\n\n - Thomas\n", "msg_date": "Mon, 15 Jul 2002 20:04:02 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "bit type external representation" }, { "msg_contents": "> for binary and hexadecimal representations. 
But at the moment we don't\n> explicitly handle both of these cases as bit strings; the hex version is\n> converted to decimal in the scanner (*really* early in the parsing\n> stage) and then handled as an integer.\n> It looks like our current bit string type support looks for a \"B\" or \"X\"\n> embedded in the actual input string, rather than outside the quote as in\n> the standard.\n\nI should point out that this is probably for historical reasons; I'd\nimplemented the hex to decimal conversion way before we had bit string\nsupport, and we didn't consolidate those features when bit strings came\nalong.\n\n - Thomas\n", "msg_date": "Mon, 15 Jul 2002 20:08:53 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: bit type external representation" }, { "msg_contents": "> for binary and hexadecimal representations. But at the moment we don't\n> explicitly handle both of these cases as bit strings; the hex version is\n> converted to decimal in the scanner (*really* early in the parsing\n> stage) and then handled as an integer.\n>\n> It looks like our current bit string type support looks for a \"B\" or \"X\"\n> embedded in the actual input string, rather than outside the quote as in\n> the standard.\n\nPostgres supports both:\n\ntest=# create table test (a bit(3));\nCREATE\ntest=# insert into test values (B'101');\nINSERT 3431020 1\ntest=# insert into test values (b'101');\nINSERT 3431021 1\ntest=# insert into test values ('B101');\nINSERT 3431022 1\ntest=# insert into test values ('b101');\nINSERT 3431023 1\ntest=# select * from test;\n a\n-----\n 101\n 101\n 101\n 101\n(4 rows)\n\nIn fact, some of our apps actually _rely_ on the old 'b101' syntax...\nAlthough these could be changed with some effort...\n\nChris\n\n", "msg_date": "Tue, 16 Jul 2002 11:12:21 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: bit type external representation" }, { 
"msg_contents": "> > It looks like our current bit string type support looks for a \"B\" or \"X\"\n> > embedded in the actual input string, rather than outside the quote as in\n> > the standard.\n> Postgres supports both:\n> test=# insert into test values (B'101');\n> test=# insert into test values ('B101');\n\nRight. But internally the first example has the \"B\" stripped out, and\nthe bit string input routine assumes a binary bit string if there is no\nembedded leading [bBxX]. However, if we were to just pass something\nwithout a [xX] as an explicit prefix on the string then it will always\nbe interpreted as a binary bit string (remember that currently X'ABCD'\nis converted to decimal, not to a bit string).\n\n> In fact, some of our apps actually _rely_ on the old 'b101' syntax...\n> Although these could be changed with some effort...\n\nYes, it seems that applications reading from something like libpq would\nneed to detect that this is a bit string and then figure out how to\nrepresent it on input or output.\n\n\"Cheating\" with a leading \"B\" or \"X\" helps with this a lot. Note that\nsimply a leading \"B\" is not sufficient to distinguish between a binary\nvalue and some hex values, if we were to allow output in hex...\n\n - Thomas\n", "msg_date": "Mon, 15 Jul 2002 20:20:25 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: bit type external representation" }, { "msg_contents": "Thomas Lockhart wrote:\n> 1) the SQL standard says what hex values should be translated to in\n> binary, which implies that all values may be *output* in binary format.\n> Should we do this, or should we preserve the info on what units were\n> used for input in the internal storage? Anyone interpret the standard\n> differently from this??\n\nSQL99, Section \"5.3 <literal>\":\n11) The declared type of a <bit string literal> is fixed-length\n bit string. 
The length of a <bit string literal> is the number\n of bits that it contains.\n12) The declared type of a <hex string literal> is fixed-length bit\n string. Each <hexit> appearing in the literal is equivalent to\n a quartet of bits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E,\n and F are interpreted as 0000, 0001, 0010, 0011, 0100, 0101,\n 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, and 1111,\n respectively. The <hexit>s a, b, c, d, e, and f have respectively\n the same values as the <hexit>s A, B, C, D, E, and F.\n\nSo the standard says both represent a fixed-length bit string data type. \nISTM that we should not try to preserve any information on the units \nused for input, and that both should be output in the same way.\n\n> \n> 2) we would need to be able to determine the format style when a string\n> is output to be able to reconstruct the SQL shorthand representation (if\n> preserving binary or hex is to be done). So a column or value should\n> have a corresponding is_hex() function call. Any other suggestions??\n> \n\nBased on above comment, I'd say no. We might want to be able to specify \nthat the output format should be hex using a formatting function though. \nBut I guess hex output format would have to be reserved for bit strings \nthat are integer multiples of 4 bits in length.\n\nJoe\n\n", "msg_date": "Mon, 15 Jul 2002 21:19:59 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: bit type external representation" }, { "msg_contents": "> > 1) the SQL standard says what hex values should be translated to in\n> > binary, which implies that all values may be *output* in binary format.\n> So the standard says both represent a fixed-length bit string data type.\n> ISTM that we should not try to preserve any information on the units\n> used for input, and that both should be output in the same way.\n\nOK, that's how I read it too. 
I'll work on making it behave with hex\ninput and not worry about the output, which takes care of itself.\n\nI also notice that the following has trouble:\n\n thomas=# select bit '1010';\n ERROR: bit string length does not match type bit(1)\n\nwhich is similar to the very recent problem with fixed-length character\nstrings. I've got patches to fix this one too.\n\n - Thomas\n", "msg_date": "Mon, 15 Jul 2002 23:21:12 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: bit type external representation" }, { "msg_contents": "Thomas Lockhart writes:\n\n> SQL9x defines bit string constants with a format like\n>\n> B'101010'\n> and\n> X'ABCD'\n>\n> for binary and hexadecimal representations. But at the moment we don't\n> explicitly handle both of these cases as bit strings; the hex version is\n> converted to decimal in the scanner (*really* early in the parsing\n> stage) and then handled as an integer.\n\nThe sole reason that this is still unresolved is that the SQL standard is\nambiguous about whether a literal of the form X'something' is of type bit\n(<hex string literal>) or of type blob (<binary string literal>). If I\nhad to choose one, I'd actually lean toward blob (or bytea in our case).\nTwo ideas: 1. make an \"unknownhex\" type and resolve it late, like the\n\"unknown\" type. 2. allow an implicit cast from bytea to bit.\n\n> It looks like our current bit string type support looks for a \"B\" or \"X\"\n> embedded in the actual input string, rather than outside the quote as in\n> the standard.\n\nThis was a stopgap measure before the B'' syntax was implemented. I guess\nit's grown to be a feature now. :-/\n\n> 1) the SQL standard says what hex values should be translated to in\n> binary, which implies that all values may be *output* in binary format.\n> Should we do this, or should we preserve the info on what units were\n> used for input in the internal storage? 
Anyone interpret the standard\n> differently from this??\n\nI believe you are caught in the confusion I was referring to above: hex\nvalues are possibly not even of type bit at all.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 23:58:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: bit type external representation" } ]
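The SQL99 quartet expansion Joe quotes above is mechanical enough to sketch. The following is an illustrative Python model of how an X'...' literal maps to a fixed-length bit string per that rule; it is not PostgreSQL's implementation, just the mapping the standard describes:

```python
def hex_literal_to_bits(hexits: str) -> str:
    """Expand the body of an X'...' literal to its bit-string form:
    per SQL99 5.3, each hexit becomes a quartet of bits."""
    return "".join(format(int(h, 16), "04b") for h in hexits)

def bin_literal_to_bits(bits: str) -> str:
    """The body of a B'...' literal is already the bit string itself."""
    assert set(bits) <= {"0", "1"}
    return bits
```

So X'ABCD' and B'1010101111001101' denote the same 16-bit value, which is why the thread concludes there is nothing about the input unit worth preserving.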
[ { "msg_contents": "Hi,\n\nWould it be possible to add a new attribute to pg_views that stores the\noriginal view definition, as entered via SQL?\n\nThis would make the lives of those of us who make admin interfaces a lot\neasier...\n\nChris\n\n", "msg_date": "Tue, 16 Jul 2002 13:42:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "pg_views.definition" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Hi,\n> \n> Would it be possible to add a new attribute to pg_views that stores the\n> original view definition, as entered via SQL?\n> \n> This would make the lives of those of us who make admin interfaces a lot\n> easier...\n\nWe actually reverse it on the fly:\n\t\n\ttest=> \\d xx\n\t View \"xx\"\n\t Column | Type | Modifiers \n\t---------+------+-----------\n\t relname | name | \n\tView definition: SELECT pg_class.relname FROM pg_class;\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 01:45:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition" }, { "msg_contents": "> We actually reverse it on the fly:\n>\n> \ttest=> \\d xx\n> \t View \"xx\"\n> \t Column | Type | Modifiers\n> \t---------+------+-----------\n> \t relname | name |\n> \tView definition: SELECT pg_class.relname FROM pg_class;\n\nWell, no - that's just dumping out the parsed form.\n\neg.\n\ntest=# create view v as select 1 in (1,2,3,4);\nCREATE\ntest=# select * from v;\n ?column?\n----------\n t\n(1 row)\n\ntest=# \\d v\n View \"v\"\n Column | Type | Modifiers\n----------+---------+-----------\n ?column? 
| boolean |\nView definition: SELECT ((((1 = 1) OR (1 = 2)) OR (1 = 3)) OR (1 = 4));\n\nIt's really annoying when people save their view definition in phpPgAdmin\nand when they load it up again it's lost all formatting. Functions and\nrules, for instance keep the original formatting somewhere.\n\nChris\n\n", "msg_date": "Tue, 16 Jul 2002 14:51:41 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_views.definition" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> It's really annoying when people save their view definition in phpPgAdmin\n> and when they load it up again it's lost all formatting. Functions and\n> rules, for instance keep the original formatting somewhere.\n\nRules do not. (A view is just a rule anyway.)\n\nFunctions do, but that's because their definition is entered as a text\nstring, which leads directly to those quoting headaches that you're\nall too familiar with.\n\nI've thought occasionally about improving the lexer so that parsetree\nnodes could be tagged with the section of the source text they were\nbuilt from (probably in the form of a (start offset, end offset) pair).\nThis was mainly for use in improving error reporting in the\nparse-analysis phase, but it might be useful for storing original source\ntext for rules too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 09:31:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Christopher Kings-Lynne wrote:\n> > Hi,\n> >\n> > Would it be possible to add a new attribute to pg_views that stores the\n> > original view definition, as entered via SQL?\n> >\n> > This would make the lives of those of us who make admin interfaces a lot\n> > easier...\n> \n> We actually reverse it on the fly:\n\nWe do, but as soon as you break the view by dropping an 
underlying\nobject it fails to reconstruct. So having the original view definition\nat hand could be useful for some ALTER VIEW RECOMPILE command.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Tue, 16 Jul 2002 10:22:20 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n>> We actually reverse it on the fly:\n\n> We do, but as soon as you break the view by dropping an underlying\n> object it fails to reconstruct. So having the original view definition\n> at hand could be useful for some ALTER VIEW RECOMPILE command.\n\nNote that the assumptions underlying this discussion have changed in\nCVS tip: you can't break a view by dropping underlying objects.\n\nregression=# create table foo(f1 int, f2 text);\nCREATE TABLE\nregression=# create view bar as select * from foo;\nCREATE VIEW\nregression=# drop table foo;\nNOTICE: rule _RETURN on view bar depends on table foo\nNOTICE: view bar depends on rule _RETURN on view bar\nERROR: Cannot drop table foo because other objects depend on it\n Use DROP ... 
CASCADE to drop the dependent objects too\n\nor\n\nregression=# drop table foo cascade;\nNOTICE: Drop cascades to rule _RETURN on view bar\nNOTICE: Drop cascades to view bar\nDROP TABLE\n-- bar is now gone\n\nAuto reconstruction of a view based on its original textual definition\nis still potentially interesting, but I submit that it won't necessarily\nalways give the right answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 12:05:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition " }, { "msg_contents": "Tom Lane wrote:\n> Auto reconstruction of a view based on its original textual definition\n> is still potentially interesting, but I submit that it won't necessarily\n> always give the right answer.\n\nSure, it's another bullet to shoot yourself into someone elses foot.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Tue, 16 Jul 2002 13:05:00 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition" }, { "msg_contents": "Jan Wieck wrote:\n> Tom Lane wrote:\n> > Auto reconstruction of a view based on its original textual definition\n> > is still potentially interesting, but I submit that it won't necessarily\n> > always give the right answer.\n> \n> Sure, it's another bullet to shoot yourself into someone elses foot.\n\nDo we want this on TODO?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 13:07:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition" }, { "msg_contents": "> > We do, but as soon as you break the view by dropping an underlying\n> > object it fails to reconstruct. So having the original view definition\n> > at hand could be useful for some ALTER VIEW RECOMPILE command.\n> \n> Note that the assumptions underlying this discussion have changed in\n> CVS tip: you can't break a view by dropping underlying objects.\n> \n> regression=# create table foo(f1 int, f2 text);\n> CREATE TABLE\n> regression=# create view bar as select * from foo;\n> CREATE VIEW\n> regression=# drop table foo;\n> NOTICE: rule _RETURN on view bar depends on table foo\n> NOTICE: view bar depends on rule _RETURN on view bar\n> ERROR: Cannot drop table foo because other objects depend on it\n> Use DROP ... CASCADE to drop the dependent objects too\n\nHrm - looks like we really need CREATE OR REPLACE VIEW...\n\nChris\n\n", "msg_date": "Wed, 17 Jul 2002 09:31:29 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_views.definition " }, { "msg_contents": "On Wed, 17 Jul 2002, Christopher Kings-Lynne wrote:\n\n> > > We do, but as soon as you break the view by dropping an underlying\n> > > object it fails to reconstruct. 
So having the original view definition\n> > > at hand could be useful for some ALTER VIEW RECOMPILE command.\n> > \n> > Note that the assumptions underlying this discussion have changed in\n> > CVS tip: you can't break a view by dropping underlying objects.\n> > \n> > regression=# create table foo(f1 int, f2 text);\n> > CREATE TABLE\n> > regression=# create view bar as select * from foo;\n> > CREATE VIEW\n> > regression=# drop table foo;\n> > NOTICE: rule _RETURN on view bar depends on table foo\n> > NOTICE: view bar depends on rule _RETURN on view bar\n> > ERROR: Cannot drop table foo because other objects depend on it\n> > Use DROP ... CASCADE to drop the dependent objects too\n> \n> Hrm - looks like we really need CREATE OR REPLACE VIEW...\n\nI have written a patch for this. It is in an old source tree. I intend on\ngetting it together by august, along with create or replace trigger.\n\nGavin\n\n", "msg_date": "Wed, 17 Jul 2002 11:44:39 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition " }, { "msg_contents": "> > Hrm - looks like we really need CREATE OR REPLACE VIEW...\n>\n> I have written a patch for this. It is in an old source tree. I intend on\n> getting it together by august, along with create or replace trigger.\n\nSweet. I was going to email to see if you had a copy of your old create or\nreplace function patch that I could convert. (Just as soon as this drop\ncolumn stuff is done.)\n\nChris\n\n", "msg_date": "Wed, 17 Jul 2002 10:07:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_views.definition " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n>>>We do, but as soon as you break the view by dropping an underlying\n>>>object it fails to reconstruct. 
So having the original view definition\n>>>at hand could be useful for some ALTER VIEW RECOMPILE command.\n>>\n>>Note that the assumptions underlying this discussion have changed in\n>>CVS tip: you can't break a view by dropping underlying objects.\n>>\n>>regression=# create table foo(f1 int, f2 text);\n>>CREATE TABLE\n>>regression=# create view bar as select * from foo;\n>>CREATE VIEW\n>>regression=# drop table foo;\n>>NOTICE: rule _RETURN on view bar depends on table foo\n>>NOTICE: view bar depends on rule _RETURN on view bar\n>>ERROR: Cannot drop table foo because other objects depend on it\n>> Use DROP ... CASCADE to drop the dependent objects too\n> \n> \n> Hrm - looks like we really need CREATE OR REPLACE VIEW...\n\nThe problem is that you would still need to keep a copy of your view \naround to recreate it if you wanted to drop and recreate a table it \ndepends on. I really like the idea about keeping the original view \nsource handy in the system catalogs.\n\nIt is common in Oracle to have dependent objects like views and packages \nget invalidated when something they depend on is dropped/recreated. \nWould it make sense to do something like that? I.e. set a relisvalid \nflag to false, and generate an ERROR telling you to recompile the object \nif you try to use it while invalid.\n\nJoe\n\n\n", "msg_date": "Tue, 16 Jul 2002 20:06:33 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition" }, { "msg_contents": "Joe Conway wrote:\n> The problem is that you would still need to keep a copy of your view\n> around to recreate it if you wanted to drop and recreate a table it\n> depends on. I really like the idea about keeping the original view\n> source handy in the system catalogs.\n\nThis has been the case all the time. I only see an attempt to\nmake this impossible with the new dependency system. If I *must*\nspecify CASCADE to drop an object, my view depends on, my view\nwill be gone. 
If I don't CASCADE, I cannot drop the object.\n\nSo there is no way left to break the view temporarily (expert\nmode here, I know what I do so please let me) and fix it later by\njust reparsing the views definition.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n", "msg_date": "Wed, 17 Jul 2002 03:56:55 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition" }, { "msg_contents": "On Wed, 2002-07-17 at 09:56, Jan Wieck wrote:\n> Joe Conway wrote:\n> > The problem is that you would still need to keep a copy of your view\n> > around to recreate it if you wanted to drop and recreate a table it\n> > depends on. I really like the idea about keeping the original view\n> > source handy in the system catalogs.\n> \n> This has been the case all the time. I only see an attempt to\n> make this impossible with the new dependency system. If I *must*\n> specify CASCADE to drop an object, my view depends on, my view\n> will be gone. If I don't CASCADE, I cannot drop the object.\n> \n> So there is no way left to break the view temporarily (expert\n> mode here, I know what I do so please let me) and fix it later by\n> just reparsing the views definition.\n\nAs somebody said, this is the place where CREATE OR REPLACE TABLE could\nbe useful. 
(IMHO it should recompile dependent views/rules/...\nautomatically or mark them as broken if compilation fails)\n\n-------------\nHannu\n\n", "msg_date": "17 Jul 2002 11:33:37 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition" }, { "msg_contents": "On Wed, 2002-07-17 at 09:56, Jan Wieck wrote:\n> Joe Conway wrote:\n> > The problem is that you would still need to keep a copy of your view\n> > around to recreate it if you wanted to drop and recreate a table it\n> > depends on. I really like the idea about keeping the original view\n> > source handy in the system catalogs.\n> \n> This has been the case all the time. I only see an attempt to\n> make this impossible with the new dependency system. If I *must*\n> specify CASCADE to drop an object, my view depends on, my view\n> will be gone. If I don't CASCADE, I cannot drop the object.\n> \n> So there is no way left to break the view temporarily (expert\n> mode here, I know what I do so please let me)\n\nI guess the real expert could manipulate pg_depends ;)\n\n> and fix it later by just reparsing the views definition.\n\n---------\nHannu\n\n", "msg_date": "17 Jul 2002 11:36:52 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: pg_views.definition" } ]
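The RESTRICT/CASCADE behaviour Tom demonstrates can be modelled in a few lines. This is a toy sketch of a pg_depend-style dependency check; the class and method names are invented for illustration and are not PostgreSQL internals:

```python
class Catalog:
    """Toy catalog tracking which objects depend on which others."""

    def __init__(self):
        self.objects = set()
        self.dependents = {}  # object name -> set of names depending on it

    def create(self, name, on=()):
        self.objects.add(name)
        self.dependents.setdefault(name, set())
        for dep in on:
            self.dependents[dep].add(name)

    def drop(self, name, cascade=False):
        deps = self.dependents.get(name, set())
        if deps and not cascade:
            # Mirrors: "Cannot drop ... because other objects depend on it"
            raise RuntimeError(f"cannot drop {name}: {sorted(deps)} depend on it")
        for d in list(deps):
            self.drop(d, cascade=True)  # "Drop cascades to ..."
        self.objects.discard(name)
        del self.dependents[name]
```

With this model, dropping a table a view depends on fails without `cascade=True` and removes both with it, matching the CVS-tip behaviour quoted in the thread.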
[ { "msg_contents": "Actually... as one with the vested interest...\n\nI'm not opposed to entering an equation in one of the basic algebraic forms. Given that line types and line segment types both exist, I'm happy to weigh the cost/benefit between choosing an lseg and entering 2 points, or choosing a line and entering the equation.\n\nAre there database functions to translate between a line and a line seg? If so, that would address my only reservation for restricting the line type to an equation. And - to address Tom's continuing concern over casting ;), I have no need for implicit casts in this case.\n\nIf there are concerns about precision being lost in the translation - As long as what precision can be guaranteed is documented, I have no qualms. If I needed absolute precision I'd create my own line table with numerics, and write my own functions as necessary. The line type to me is there for speed and ease of use, not unlimited precision.\n\nOn Tuesday, 16, 2002, at 09:38AM, Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n\n>No one likes entering an equation. Two points seems the simplest.\n>\n>-- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n\n", "msg_date": "Tue, 16 Jul 2002 09:00:34 -0700 (PDT)", "msg_from": "Tim Hart <tjhart@mac.com>", "msg_from_op": true, "msg_subject": "Re: [SQL] line datatype" } ]
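For what it's worth, the translation Tim asks about (an lseg's two endpoints to a line's Ax + By + C = 0 coefficients) is straightforward. This is a hypothetical sketch of that conversion, not an existing database function:

```python
def line_through(p1, p2):
    """Coefficients (A, B, C) of the line Ax + By + C = 0 through two points."""
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1
    b = x1 - x2
    c = -(a * x1 + b * y1)
    return a, b, c

def on_line(p, coeffs, eps=1e-9):
    """Check whether point p satisfies the line equation, within tolerance."""
    a, b, c = coeffs
    x, y = p
    return abs(a * x + b * y + c) < eps
```

The reverse direction (line to segment) is underdetermined, since a line has no endpoints, which is presumably why the precision caveats in the message matter.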
[ { "msg_contents": "I've been thinking that it's really not a good idea to make the OID\nfield optional without any indication in the tuple header whether it\nis present. In particular, this will make life difficult for tools\nlike pg_filedump that try to display tuple headers without any outside\ninformation. I think it'd be a good idea to expend a bit in t_infomask\nto show whether an OID field is allocated or not.\n\nWe currently have two free bits in t_infomask, which is starting to get\na bit tight, but offhand I do not see anything else coming down the pike\nthat would need another bit. Also, we could consider expanding\nt_infomask to three bytes if we had to.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 12:23:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "OID suppression issues" }, { "msg_contents": "On Tue, 16 Jul 2002 12:23:01 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>I've been thinking that it's really not a good idea to make the OID\n>field optional without any indication in the tuple header whether it\n>is present. In particular, this will make life difficult for tools\n>like pg_filedump that try to display tuple headers without any outside\n>information. I think it'd be a good idea to expend a bit in t_infomask\n>to show whether an OID field is allocated or not.\n\nIf we want, we can get along without using a bit. On machines with 4\nbyte alignment the presence of an oid can be deduced from the tuple\nheader length, t_hoff. The \"Optional Oid\" patch defines a macro\nHeapTupleHeaderExpectedLen in htup.h. This macro is currently only\nused for debugging; we could expose this or a similar macro or\nfunction to tools.\n\nA little problem arises with 8 byte alignment: due to padding tuple\nheaders with or without oid might have the same length. 
If padding\nbytes are properly set to 0, those tools would display \"Invalid Oid\"\ninstead of \"No Oid\".\n\n>We currently have two free bits in t_infomask, which is starting to get\n>a bit tight, but offhand I do not see anything else coming down the pike\n>that would need another bit. Also, we could consider expanding\n>t_infomask to three bytes if we had to.\n\n... and if we are really miserly, we use those two or three bits in\nt_hoff, which are always 0 because t_hoff = MAXALIGN(...).\n\nI'm not strongly biased towards either direction. If we end up using\na bit in t_infomask, I'll change the announcement to \"This patch\nreduces per tuple overhead by 31 bits ...\"\n\nI've finished the patch yesterday and have posted it to -patches this\nmorning. Please have a look at it.\n\nServus\n Manfred\n", "msg_date": "Wed, 17 Jul 2002 10:35:25 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: OID suppression issues" } ]
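Manfred's point that deducing OID presence from t_hoff only works reliably at 4-byte alignment can be checked with a quick model. This sketch covers only the MAXALIGN arithmetic; the bit value for the flag and the 20-byte base header length used below are invented for illustration:

```python
HEAP_HASOID = 0x0008  # hypothetical t_infomask bit, for this sketch only

def has_oid(t_infomask: int) -> bool:
    """Test the (hypothetical) OID-present flag in t_infomask."""
    return bool(t_infomask & HEAP_HASOID)

def maxalign(n: int, align: int) -> int:
    """Round n up to a multiple of align, like the MAXALIGN macro."""
    return (n + align - 1) & ~(align - 1)

def hoff(base_header: int, with_oid: bool, align: int) -> int:
    """Aligned header length, with or without a 4-byte OID appended."""
    return maxalign(base_header + (4 if with_oid else 0), align)
```

With a 20-byte base header, the 4-byte-aligned lengths differ (20 vs. 24), but at 8-byte alignment both round to 24, so padding hides the OID and t_hoff alone cannot distinguish the cases, which is the ambiguity the message describes.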
[ { "msg_contents": "Command syntax is\n\n CREATE CAST (source AS target) WITH FUNCTION name(arg) [AS ASSIGNMENT]\n\nin compliance with SQL99 (AS ASSIGNMENT means implicitly invokable).\n\nDeclaration of binary compatible casts:\n\n CREATE CAST (source AS target) WITHOUT FUNCTION [AS ASSIGNMENT]\n\nDoes not have to be implicit (although it must be for use in function\nargument resultion, etc.). You must declare both directions explicitly.\n\nCast functions must be immutable. This is following SQL99 as well.\n\nCompatibility:\n\nThe old way to create casts has already been broken in 1.5 ways: the\nintroduction of the implicit flag and the introduction of schemas. To\nmaintain full compatibility we'd have to revert to making the implicit\nflag the default, which would undermine the compliance of the new CREATE\nCAST command from the start.\n\nHence I suggest that we migrate through pg_dump: User-defined functions\nthat would have been casts up to 7.2 will emit an appropriate CREATE CAST\ncommand. This in combination with a release note will also give the users\na better chance to inspect the dump and adjust the cast specifications for\nimplicitness as they wish.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 19:33:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Spec on pg_cast table/CREATE CAST command" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Command syntax is\n> CREATE CAST (source AS target) WITH FUNCTION name(arg) [AS ASSIGNMENT]\n> in compliance with SQL99 (AS ASSIGNMENT means implicitly invokable).\n> Declaration of binary compatible casts:\n> CREATE CAST (source AS target) WITHOUT FUNCTION [AS ASSIGNMENT]\n\nSo the idea is to remove proimplicit again? We could still do that\nbefore 7.3, since no user depends on it yet. 
Are you intending a new\nsystem catalog to hold casts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 14:07:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Spec on pg_cast table/CREATE CAST command " }, { "msg_contents": "Tom Lane writes:\n\n> So the idea is to remove proimplicit again? We could still do that\n> before 7.3, since no user depends on it yet. Are you intending a new\n> system catalog to hold casts?\n\nYeah, it seems I forgot to mention that.\n\nBtw., it occurred to me that this could also be the direction to\ngeneralize the \"preferred type\" games for resolving union and case\nconstructs. Each declared cast could carry some additional properties,\nsuch as \"possibly truncating\", \"possible precision loss\", or simply\n\"preferred\", which could help the system to make smarter choices (such as\nnot doing int4+int8=int4).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 17 Jul 2002 20:45:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Spec on pg_cast table/CREATE CAST command " }, { "msg_contents": "Could anyone familiar with the pg version of GiST tell me if the\nframework allows entries in the tree as non-uniform sizes (in other\nwords, variable-length keys)? I want to write an extension for a\nTV-tree, but this is an essential component.\nThanks;\nEric Redmond\n\n", "msg_date": "Wed, 17 Jul 2002 14:32:43 -0500", "msg_from": "\"Eric Redmond\" <redmonde@purdue.edu>", "msg_from_op": false, "msg_subject": "GiST Indexing" }, { "msg_contents": "Yes, our GiST supports variable-length keys.\nTake a look on http://www.sai.msu.su/~megera/postgres/gist/\n\n\n\tOleg\nOn Wed, 17 Jul 2002, Eric Redmond wrote:\n\n> Could anyone familiar with the pg version of GiST tell me if the\n> framework allows entries in the tree as non-uniform sizes (in other\n> words, variable-length keys)? 
I want to write an extension for a\n> TV-tree, but this is an essential component.\n> Thanks;\n> Eric Redmond\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 18 Jul 2002 04:25:39 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: GiST Indexing" } ]
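The AS ASSIGNMENT distinction discussed in the pg_cast thread above — a cast that is implicitly invokable versus one that may only be requested explicitly — can be sketched as a toy lookup table. This is an illustrative model only: the type pairs and the resolution logic below are made up for the example, not PostgreSQL's actual catalog contents or coercion code.

```python
# Toy model of the proposed pg_cast lookup: a cast pathway satisfies an
# implicit coercion only when it was declared AS ASSIGNMENT.
# (Illustrative sketch -- not the real catalog or resolution algorithm.)

casts = {
    # (source, target): declared AS ASSIGNMENT (implicitly invokable)?
    ("int4", "int8"): True,     # safe widening: implicit is fine
    ("int8", "int4"): False,    # possibly truncating: explicit only
    ("text", "varchar"): True,  # binary-compatible, WITHOUT FUNCTION
}

def find_cast(source, target, context):
    """context is 'implicit' (assignment) or 'explicit' (CAST(x AS t))."""
    entry = casts.get((source, target))
    if entry is None:
        return None                 # no pathway declared at all
    if context == "implicit" and not entry:
        return None                 # pathway exists, but not AS ASSIGNMENT
    return (source, target)

print(find_cast("int4", "int8", "implicit"))   # usable in an assignment
print(find_cast("int8", "int4", "implicit"))   # None: must be explicit
print(find_cast("int8", "int4", "explicit"))   # OK when spelled CAST(...)
```

The same table could later carry the extra properties Peter mentions ("possibly truncating", "preferred") as additional fields per entry.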
[ { "msg_contents": "So being very new with this code, I'd just like to get a feel for whether\nwhat I am seeing is consistent with what others have seen when they have\nattempted to build on Mac OS X 10.1.x.\n\nI built Tcl and Tk 8.4b1, and then PostgreSQL with Tcl support enabled.\nWhen PostgreSQL builds it reports the below fatal link error. I believe Tk\nyielded some warnings, but like an idiot I overwrote the log file instead of\nappending so I am now rebuilding Tk to find out what the warnings are.\nUnfortunately that takes about 3 hours, so I thought I'd at least get this\nout here. If this is at least similar to what others have seen, I'll begin\ntracking it down. Feel free to take this offline if you like.\n\nThanks,\nMike Ditto\n\n\ngcc -traditional-cpp -g -O2 -Wall -Wmissing-prototypes\n-Wmissing-declarations pgtclAppInit.o -L../../../src/interfaces/libpgtcl\n-lpgtcl -L../../../src/interfaces/libpq -lpq -L/sw/lib -lz -ldl -lm -o\npgtclsh\n/usr/bin/ld: Undefined symbols:\n_Tcl_Init\n_Tcl_Main\n_Tcl_SetVar\n_Tcl_CreateCommand\n_Tcl_CreateObjCommand\n_Tcl_GetDouble\n_Tcl_GetVar\n_Tcl_PkgProvide\n_Tcl_AddErrorInfo\n_Tcl_Alloc\n_Tcl_AppendElement\n_Tcl_AppendResult\n_Tcl_CallWhenDeleted\n_Tcl_DStringAppendElement\n_Tcl_DStringEndSublist\n_Tcl_DStringFree\n_Tcl_DStringInit\n_Tcl_DStringResult\n_Tcl_DStringStartSublist\n_Tcl_DeleteHashEntry\n_Tcl_Eval\n_Tcl_Free\n_Tcl_GetChannel\n_Tcl_GetIntFromObj\n_Tcl_GetStringFromObj\n_Tcl_InitHashTable\n_Tcl_NewIntObj\n_Tcl_NewStringObj\n_Tcl_ObjSetVar2\n_Tcl_ResetResult\n_Tcl_SetObjResult\n_Tcl_SetResult\n_Tcl_SetVar2\n_Tcl_UnregisterChannel\n_Tcl_UnsetVar\n_Tcl_BackgroundError\n_Tcl_CreateChannel\n_Tcl_CreateChannelHandler\n_Tcl_DeleteChannelHandler\n_Tcl_DeleteEvents\n_Tcl_DeleteHashTable\n_Tcl_DontCallWhenDeleted\n_Tcl_EventuallyFree\n_Tcl_FirstHashEntry\n_Tcl_GetChannelInstanceData\n_Tcl_GetChannelName\n_Tcl_GetChannelType\n_Tcl_GetInt\n_Tcl_GlobalEval\n_Tcl_MakeTcpClientChannel\n_Tcl_NextHashEntry\n_Tcl_Preserve\n_Tcl
_QueueEvent\n_Tcl_Realloc\n_Tcl_RegisterChannel\n_Tcl_Release\n_Tcl_SetChannelOption\ngnumake[3]: *** [pgtclsh] Error 1\ngnumake[2]: *** [all] Error 2\ngnumake[1]: *** [all] Error 2\ngnumake: *** [all] Error 2\n\n", "msg_date": "Tue, 16 Jul 2002 15:24:47 -0600", "msg_from": "\"Michael J. Ditto\" <janus@frii.com>", "msg_from_op": true, "msg_subject": "Building PostgreSQL 7.2.1 w/ Tcl/Tk support on Mac OS X" } ]
[ { "msg_contents": "I am considering removing the following notices/warnings, since they\nseem to be unnecessary in the brave new world of dependencies:\n\n* The one about dropping a built-in function; you can't do it anyway.\n\nregression=# drop function now();\nWARNING: Removing built-in function \"now\"\nERROR: Cannot drop function now because it is required by the database system\nregression=#\n\n* The one about creating implicit triggers for FOREIGN KEY constraints:\n\nregression=# create table bar (f1 int references foo);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE TABLE\nregression=#\n\nSince those triggers (a) will be auto-dropped when you drop the\nconstraint, and (b) can't be dropped without dropping the constraint,\nthis notice seems like it's just noise now.\n\nregression=# \\d bar\n Table \"bar\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | integer |\nTriggers: RI_ConstraintTrigger_140127\n\nregression=# drop trigger \"RI_ConstraintTrigger_140127\" on bar;\nERROR: Cannot drop trigger RI_ConstraintTrigger_140127 on table bar because constraint $1 on table bar requires it\n You may drop constraint $1 on table bar instead\nregression=# alter table bar drop constraint \"$1\";\nALTER TABLE\nregression=# \\d bar\n Table \"bar\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | integer |\n\nregression=#\n\n* The ones about implicit indexes for primary key/unique constraints\nand about implicit sequences for SERIAL columns also seem unnecessary\nnow --- as with the trigger case, you can't drop the implicit object\ndirectly anymore. However, the messages do convey some useful\ninformation, namely the exact name that was assigned to the index or\nsequence. So I'm undecided about removing 'em. The sequence message\nseems particularly useful since people do often want to refer directly\nto the sequence in manual nextval/currval commands. 
OTOH psql's \\d is a\nperfectly reasonable way to get the sequence and index names if you need\n'em. Moreover, that still works after the fact whereas a NOTICE soon\ndisappears from sight.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 18:53:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Do we still need these NOTICEs?" }, { "msg_contents": "> * The one about dropping a built-in function; you can't do it anyway.\n>\n> regression=# drop function now();\n> WARNING: Removing built-in function \"now\"\n> ERROR: Cannot drop function now because it is required by the\n> database system\n> regression=#\n\nGet rid of it.\n\n> * The one about creating implicit triggers for FOREIGN KEY constraints:\n>\n> regression=# create table bar (f1 int references foo);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN\n> KEY check(s)\n> CREATE TABLE\n> regression=#\n>\n> Since those triggers (a) will be auto-dropped when you drop the\n> constraint, and (b) can't be dropped without dropping the constraint,\n> this notice seems like it's just noise now.\n\nYep - may as well.\n\n> regression=# \\d bar\n> Table \"bar\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n> Triggers: RI_ConstraintTrigger_140127\n>\n> regression=# drop trigger \"RI_ConstraintTrigger_140127\" on bar;\n> ERROR: Cannot drop trigger RI_ConstraintTrigger_140127 on table\n> bar because constraint $1 on table bar requires it\n> You may drop constraint $1 on table bar instead\n> regression=# alter table bar drop constraint \"$1\";\n> ALTER TABLE\n> regression=# \\d bar\n> Table \"bar\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n>\n> regression=#\n>\n> * The ones about implicit indexes for primary key/unique constraints\n> and about implicit sequences for SERIAL columns also seem unnecessary\n> now --- as with the trigger case, you can't drop the implicit 
object\n> directly anymore. However, the messages do convey some useful\n> information, namely the exact name that was assigned to the index or\n> sequence. So I'm undecided about removing 'em. The sequence message\n> seems particularly useful since people do often want to refer directly\n> to the sequence in manual nextval/currval commands. OTOH psql's \\d is a\n> perfectly reasonable way to get the sequence and index names if you need\n> 'em. Moreover, that still works after the fact whereas a NOTICE soon\n> disappears from sight.\n\nHmmmm...undecided. I generally wouldn't care I guess, but some people\nmight...\n\nChris\n\n", "msg_date": "Wed, 17 Jul 2002 09:43:45 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Do we still need these NOTICEs?" }, { "msg_contents": "Tom Lane wrote:\n> I am considering removing the following notices/warnings, since they\n> seem to be unnecessary in the brave new world of dependencies:\n> \n> * The one about dropping a built-in function; you can't do it anyway.\n> \n> regression=# drop function now();\n> WARNING: Removing built-in function \"now\"\n> ERROR: Cannot drop function now because it is required by the database system\n> regression=#\n> \n> * The one about creating implicit triggers for FOREIGN KEY constraints:\n\nYep, remove them.\n\n> regression=# create table bar (f1 int references foo);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> CREATE TABLE\n> regression=#\n> \n> Since those triggers (a) will be auto-dropped when you drop the\n> constraint, and (b) can't be dropped without dropping the constraint,\n> this notice seems like it's just noise now.\n> \n> regression=# \\d bar\n> Table \"bar\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n> Triggers: RI_ConstraintTrigger_140127\n> \n> regression=# drop trigger \"RI_ConstraintTrigger_140127\" on bar;\n> ERROR: Cannot drop 
trigger RI_ConstraintTrigger_140127 on table bar because constraint $1 on table bar requires it\n> You may drop constraint $1 on table bar instead\n> regression=# alter table bar drop constraint \"$1\";\n> ALTER TABLE\n> regression=# \\d bar\n> Table \"bar\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n> \n> regression=#\n\nRemove.\n\n> * The ones about implicit indexes for primary key/unique constraints\n> and about implicit sequences for SERIAL columns also seem unnecessary\n> now --- as with the trigger case, you can't drop the implicit object\n> directly anymore. However, the messages do convey some useful\n> information, namely the exact name that was assigned to the index or\n> sequence. So I'm undecided about removing 'em. The sequence message\n> seems particularly useful since people do often want to refer directly\n> to the sequence in manual nextval/currval commands. OTOH psql's \\d is a\n> perfectly reasonable way to get the sequence and index names if you need\n> 'em. Moreover, that still works after the fact whereas a NOTICE soon\n> disappears from sight.\n\nI would remove them all. If people complain, we can add them back in. \nWhy not remove them and keep the diff on your machine somewhere. If\nwe get complaints, we can re-add them. We already get complaints about\npeople _not_ wanting to see them, and hence the request to disable\nNOTICE messages in psql, which will be possible in 7.3.\n\nNow that we have them auto-dropped, it is appropriate for them not to\nappear during creation. We mentioned them in the past specifically so\npeople would know they existed to drop them. Now, they don't need to\nknow that anymore.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 23:10:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we still need these NOTICEs?" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> \n>>I am considering removing the following notices/warnings, since they\n>>seem to be unnecessary in the brave new world of dependencies:\n\nI also agree with removing all of these.\n\n>>* The ones about implicit indexes for primary key/unique constraints\n>>and about implicit sequences for SERIAL columns also seem unnecessary\n>>now --- as with the trigger case, you can't drop the implicit object\n>>directly anymore.\n\nOne thing I wondered about here -- is it still possible to use a \nsequence, which is autogenerated by a SERIAL column, as the default \nvalue for another table? If so, does this create another dependency to \nprevent dropping the sequence, and hence the original (creating) table also?\n\nJoe\n\n\n", "msg_date": "Tue, 16 Jul 2002 20:19:32 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Do we still need these NOTICEs?" }, { "msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > \n> >>I am considering removing the following notices/warnings, since they\n> >>seem to be unnecessary in the brave new world of dependencies:\n> \n> I also agree with removing all of these.\n> \n> >>* The ones about implicit indexes for primary key/unique constraints\n> >>and about implicit sequences for SERIAL columns also seem unnecessary\n> >>now --- as with the trigger case, you can't drop the implicit object\n> >>directly anymore.\n> \n> One thing I wondered about here -- is it still possible to use a \n> sequence, which is autogenerated by a SERIAL column, as the default \n> value for another table? 
If so, does this create another dependency to \n> prevent dropping the sequence, and hence the original (creating) table also?\n\nMy guess is that the dependency code will now track it(?). A harder\nissue is if you use nextval() in the INSERT, there is no way for the\ndependency code to know it is used by that table, so it will be dropped\nif the parent table that created it is dropped. In such cases, the\nsequence should always be created manually or a DEFAULT defined, even if\nyou never use it as a default.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 23:24:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we still need these NOTICEs?" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> One thing I wondered about here -- is it still possible to use a \n> sequence, which is autogenerated by a SERIAL column, as the default \n> value for another table?\n\nSure, same as before.\n\n> If so, does this create another dependency to \n> prevent dropping the sequence, and hence the original (creating) table also?\n\nAs the code stands, no. The other table's default would look like\n\tnextval('first_table_col_seq')\nand the dependency deducer only sees nextval() and a string constant\nin this.\n\nSomeday I'd like to see us support the Oracle-ish syntax\n\tfirst_table_col_seq.nextval\nwhich would expose the sequence reference in a way that allows the\nsystem to understand it during static examination of a query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 00:10:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Do we still need these NOTICEs? 
" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > One thing I wondered about here -- is it still possible to use a \n> > sequence, which is autogenerated by a SERIAL column, as the default \n> > value for another table?\n> \n> Sure, same as before.\n> \n> > If so, does this create another dependency to \n> > prevent dropping the sequence, and hence the original (creating) table also?\n> \n> As the code stands, no. The other table's default would look like\n> \tnextval('first_table_col_seq')\n> and the dependency deducer only sees nextval() and a string constant\n> in this.\n> \n> Someday I'd like to see us support the Oracle-ish syntax\n> \tfirst_table_col_seq.nextval\n> which would expose the sequence reference in a way that allows the\n> system to understand it during static examination of a query.\n\nOK, so creator tracks it, and referencers, even in DEFAULT, don't. Good\nto know and probably something we need to point out in the release\nnotes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 01:29:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we still need these NOTICEs?" } ]
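The behavior this thread leans on — implicit objects that refuse a direct DROP but disappear automatically with the constraint that owns them — can be modeled as a tiny dependency graph. This is a toy sketch of the semantics, not pg_depend's implementation; the object names are taken from Tom's example session.

```python
# Toy "internal dependency" model: an internally dependent object can
# only go away when the object it depends on is dropped.

objects = {"bar", "constraint $1", "RI_ConstraintTrigger_140127"}
# dependent -> owning object (the trigger is internal to the constraint)
internal_dep = {"RI_ConstraintTrigger_140127": "constraint $1"}

def drop(name):
    if name in internal_dep:
        owner = internal_dep[name]
        raise ValueError(f"Cannot drop {name} because {owner} requires it")
    objects.discard(name)
    # auto-drop anything that was internal to the dropped object
    for dep, owner in list(internal_dep.items()):
        if owner == name:
            del internal_dep[dep]
            objects.discard(dep)

try:
    drop("RI_ConstraintTrigger_140127")   # direct drop is refused
except ValueError as e:
    print(e)

drop("constraint $1")                     # takes the trigger with it
print("RI_ConstraintTrigger_140127" in objects)
```

Once the drop protection and auto-drop both hold, the creation-time NOTICEs carry no actionable information, which is the argument for removing them.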
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Hiroshi Inoue [mailto:Inoue@tpf.co.jp] \n> Sent: 17 July 2002 05:12\n> To: Bruce Momjian\n> Cc: Christopher Kings-Lynne; Tom Lane; Rod Taylor; \n> PostgreSQL-development\n> Subject: Re: [HACKERS] DROP COLUMN\n> \n>\n> > >From my perspective, when client coders like Dave Page and \n> others say\n> > they would prefer the flag to the negative attno's, I don't have to \n> > understand. I just take their word for it.\n> \n> do they really love to check attisdropped everywhere ?\n> Isn't it the opposite of the encapsulation ?\n> I don't understand why we would do nothing for clients.\n\nIn pgAdmin's case, this involves one test (maybe 3 lines of code),\nbecause all access to column info is made through one class. The reason\nI voted for attisdropped is that the negative attnum's are assumed by\npgAdmin to be 'system columns', not 'any column that doesn't belong to\nthe user'. Coding around a change like that - whilst not necessarily\nharder - would certainly be messier.\n\nRegards, Dave.\n", "msg_date": "Wed, 17 Jul 2002 08:36:54 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN" } ]
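Dave's point — that clients like pgAdmin already assume negative attnum means "system column", so a separate attisdropped flag is one extra test rather than a repurposing of negative numbers — can be sketched with toy attribute rows. Column names and numbers here are invented for illustration.

```python
# Toy pg_attribute rows; attisdropped marks a logically dropped column.
attrs = [
    {"attname": "ctid", "attnum": -1, "attisdropped": False},  # system column
    {"attname": "a",    "attnum": 1,  "attisdropped": False},
    {"attname": "b",    "attnum": 2,  "attisdropped": True},   # dropped
]

def user_columns(rows):
    # One added check against attisdropped; the existing assumption that
    # user columns have attnum > 0 stays valid.
    return [r["attname"] for r in rows
            if r["attnum"] > 0 and not r["attisdropped"]]

print(user_columns(attrs))
```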
[ { "msg_contents": "Hi All,\n\nHas anyone committed something that would cause me to be getting doubles of\nall my ELOG messages? Or is it something I've changed in my local CVS?\n\nChris\n\n", "msg_date": "Wed, 17 Jul 2002 16:36:15 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "ELOGs doubled up" }, { "msg_contents": "On Wed, 2002-07-17 at 04:36, Christopher Kings-Lynne wrote:\n> Hi All,\n> \n> Has anyone committed something that would cause me to be getting doubles of\n> all my ELOG messages? Or is it something I've changed in my local CVS?\n\nYou probably started the daemon without redirecting the server logs to a\nfile.\n\nBoth the server and client were started from the same terminal and are\nprinting the output ;)\n\n", "msg_date": "17 Jul 2002 07:41:19 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: ELOGs doubled up" }, { "msg_contents": "> > Has anyone committed something that would cause me to be getting doubles of\n> > all my ELOG messages? Or is it something I've changed in my local CVS?\n>\n> You probably started the daemon without redirecting the server logs to a\n> file.\n>\n> Both the server and client were started from the same terminal and are\n> printing the output ;)\n\nDamn. You picked it!\n\nChris\n\n\n", "msg_date": "Wed, 17 Jul 2002 23:17:47 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: ELOGs doubled up" } ]
[ { "msg_contents": "I have a query and estimations and results don´t look similar, here is explain analyze:\n\n NOTICE: QUERY PLAN:\n\nSort (cost=12443.90..12443.90 rows=1 width=93) (actual time=505331.94..505332.67 rows=175 loops=1)\n -> Aggregate (cost=12443.88..12443.89 rows=1 width=93) (actual time=472520.29..505326.48 rows=175 loops=1)\n -> Group (cost=12443.88..12443.89 rows=1 width=93) (actual time=472307.31..485173.92 rows=325302 loops=1)\n -> Sort (cost=12443.88..12443.88 rows=1 width=93) (actual time=472307.24..473769.79 rows=325302 loops=1)\n -> Nested Loop (cost=12439.25..12443.87 rows=1 width=93) (actual time=103787.68..441614.43 rows=325302 loops=1)\n -> Hash Join (cost=12439.25..12440.64 rows=1 width=85) (actual time=103733.76..120916.86 rows=325302 loops=1)\n -> Seq Scan on nation (cost=0.00..1.25 rows=25 width=15) (actual time=7.81..8.72 rows=25 loops=1)\n -> Hash (cost=12439.25..12439.25 rows=1 width=70) (actual time=103722.25..103722.25 rows=0 loops=1)\n -> Nested Loop (cost=0.00..12439.25 rows=1 width=70) (actual time=95.43..100162.91 rows=325302 loops=1)\n -> Nested Loop (cost=0.00..12436.23 rows=1 width=62) (actual time=84.91..47502.93 rows=325302 loops=1)\n -> Nested Loop (cost=0.00..12412.93 rows=4 width=24) (actual time=66.86..8806.01 rows=43424 loops=1)\n -> Seq Scan on part (cost=0.00..12399.00 rows=1 width=4) (actual time=24.88..4076.81 rows=10856 loops=1)\n -> Index Scan using partsupp_pkey on partsupp (cost=0.00..13.89 rows=4 width=20) (actual time=0.20..0.34 rows=4 loops=10856)\n -> Index Scan using l_partsupp_index on lineitem (cost=0.00..6.02 rows=1 width=38) (actual time=0.20..0.61 rows=7 loops=43424)\n -> Index Scan using supplier_pkey on supplier (cost=0.00..3.01 rows=1 width=8) (actual time=0.08..0.10 rows=1 loops=325302)\n -> Index Scan using orders_pkey on orders (cost=0.00..3.22 rows=1 width=8) (actual time=0.85..0.87 rows=1 loops=325302)\nTotal runtime: 505563.85 msec\n\nestimated 12000msec\n\nhere is the query:\nSELECT\n 
 nation,\n o_year,\n CAST((sum(amount))AS NUMERIC(10,2))AS sum_profit\nFROM(\n SELECT\n nation.name AS nation,\n EXTRACT(year FROM orders.orderdate) AS o_year,\n lineitem.extendedprice*(1-lineitem.discount)-partsupp.supplycost*lineitem.quantity AS amount\n FROM\n part,\n supplier,\n lineitem,\n partsupp,\n orders,\n nation\n WHERE\n supplier.suppkey=lineitem.suppkey\n AND partsupp.suppkey=lineitem.suppkey\n AND partsupp.partkey=lineitem.partkey\n AND part.partkey=lineitem.partkey\n AND orders.orderkey=lineitem.orderkey\n AND supplier.nationkey=nation.nationkey\n AND part.name LIKE '%green%'\n ) AS profit\nGROUP BY\n nation,\n o_year\nORDER BY\n nation,\n o_year DESC;\n\nlineitem is about 6M rows\npartsupp 800K rows\npart 200K rows\n\nany advice?\nThanks and regards", "msg_date": "Wed, 17 Jul 2002 12:17:40 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "why is postgres estimating so badly?" }, { "msg_contents": "The first thing to point out is that the estimated cost is measured in\nterms of page reads while the actual time is measured in milliseconds. So\neven if the cost estimate is accurate it is unlikely that those numbers\nwill be the same.\n\n-N\n\n--\nNathan C. Burnett\nResearch Assistant, Wisconsin Network Disks\nDepartment of Computer Sciences\nUniversity of Wisconsin - Madison\nncb@cs.wisc.edu\n\nOn Wed, 17 Jul 2002, Luis Alberto Amigo Navarro wrote:\n\n> I have a query and estimations and results don�t look similar, here is explain analyze:\n> \n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=12443.90..12443.90 rows=1 width=93) (actual time=505331.94..505332.67 rows=175 loops=1)\n> -> Aggregate (cost=12443.88..12443.89 rows=1 width=93) (actual time=472520.29..505326.48 rows=175 loops=1)\n> -> Group (cost=12443.88..12443.89 rows=1 width=93) (actual time=472307.31..485173.92 rows=325302 loops=1)\n> -> Sort (cost=12443.88..12443.88 rows=1 width=93) (actual time=472307.24..473769.79 rows=325302 loops=1)\n> -> Nested Loop (cost=12439.25..12443.87 rows=1 width=93) (actual time=103787.68..441614.43 rows=325302 loops=1)\n> -> Hash Join (cost=12439.25..12440.64 rows=1 width=85) (actual time=103733.76..120916.86 rows=325302 loops=1)\n> -> Seq Scan on nation (cost=0.00..1.25 rows=25 width=15) (actual time=7.81..8.72 rows=25 loops=1)\n> -> Hash (cost=12439.25..12439.25 rows=1 width=70) (actual time=103722.25..103722.25 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..12439.25 rows=1 width=70) (actual time=95.43..100162.91 rows=325302 loops=1)\n> -> Nested Loop (cost=0.00..12436.23 rows=1 width=62) (actual time=84.91..47502.93 rows=325302 loops=1)\n> -> Nested 
Loop (cost=0.00..12412.93 rows=4 width=24) (actual time=66.86..8806.01 rows=43424 loops=1)\n> -> Seq Scan on part (cost=0.00..12399.00 rows=1 width=4) (actual time=24.88..4076.81 rows=10856 loops=1)\n> -> Index Scan using partsupp_pkey on partsupp (cost=0.00..13.89 rows=4 width=20) (actual time=0.20..0.34 rows=4 loops=10856)\n> -> Index Scan using l_partsupp_index on lineitem (cost=0.00..6.02 rows=1 width=38) (actual time=0.20..0.61 rows=7 loops=43424)\n> -> Index Scan using supplier_pkey on supplier (cost=0.00..3.01 rows=1 width=8) (actual time=0.08..0.10 rows=1 loops=325302)\n> -> Index Scan using orders_pkey on orders (cost=0.00..3.22 rows=1 width=8) (actual time=0.85..0.87 rows=1 loops=325302)\n> Total runtime: 505563.85 msec\n> \n> estimated 12000msec\n> \n> here is the query:\n> SELECT\n> nation,\n> o_year,\n> CAST((sum(amount))AS NUMERIC(10,2))AS sum_profit\n> FROM(\n> SELECT\n> nation.name AS nation,\n> EXTRACT(year FROM orders.orderdate) AS o_year,\n> lineitem.extendedprice*(1-lineitem.discount)-partsupp.supplycost*lineitem.quantity AS amount\n> FROM\n> part,\n> supplier,\n> lineitem,\n> partsupp,\n> orders,\n> nation\n> WHERE\n> supplier.suppkey=lineitem.suppkey\n> AND partsupp.suppkey=lineitem.suppkey\n> AND partsupp.partkey=lineitem.partkey\n> AND part.partkey=lineitem.partkey\n> AND orders.orderkey=lineitem.orderkey\n> AND supplier.nationkey=nation.nationkey\n> AND part.name LIKE '%green%'\n> ) AS profit\n> GROUP BY\n> nation,\n> o_year\n> ORDER BY\n> nation,\n> o_year DESC;\n> \n> lineitem is about 6M rows\n> partsupp 800K rows\n> part 200K rows\n> \n> any advice?\n> Thanks and regards\n> \n> \n> \n\n", "msg_date": "Wed, 17 Jul 2002 12:04:04 -0500 (CDT)", "msg_from": "\"Nathan C. Burnett\" <ncb@cs.wisc.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] why is postgres estimating so badly?" 
}, { "msg_contents": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> -> Seq Scan on part (cost=0.00..12399.00 rows=1 width=4) (actual time=24.88..4076.81 rows=10856 loops=1)\n\nSeems like the major misestimation is above: the LIKE clause on part is\nestimated to select just one row, but it selects 10856 of 'em. Had the\nplanner realized the number of returned rows would be in the thousands,\nit'd likely have used quite a different plan structure.\n\n> AND part.name LIKE '%green%'\n\nIt's difficult for the planner to produce a decent estimate for the\nselectivity of an unanchored LIKE clause, since there are no statistics\nit can use for the purpose. We recently changed FIXED_CHAR_SEL in\nsrc/backend/utils/adt/selfuncs.c from 0.04 to 0.20, which would make\nthis particular case come out better. (I believe the estimate would\nwork out to about 320, if part is 200K rows; that should be enough to\nproduce at least some change of plan.) You could try patching your\nlocal installation likewise.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 13:43:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] why is postgres estimating so badly? " }, { "msg_contents": "\n> > AND part.name LIKE '%green%'\n>\n> It's difficult for the planner to produce a decent estimate for the\n> selectivity of an unanchored LIKE clause, since there are no statistics\n> it can use for the purpose. We recently changed FIXED_CHAR_SEL in\n> src/backend/utils/adt/selfuncs.c from 0.04 to 0.20, which would make\n> this particular case come out better. (I believe the estimate would\n> work out to about 320, if part is 200K rows; that should be enough to\n> produce at least some change of plan.) 
You could try patching your\n> local installation likewise.\n\nHere are the results, worse than before:\nNOTICE: QUERY PLAN:\n\nSort (cost=25209.88..25209.88 rows=1 width=93) (actual\ntime=1836143.78..1836144.48 rows=175 loops=1)\n -> Aggregate (cost=25209.85..25209.87 rows=1 width=93) (actual\ntime=1803559.97..1836136.47 rows=175 loops=1)\n -> Group (cost=25209.85..25209.86 rows=2 width=93) (actual\ntime=1803348.04..1816093.89 rows=325302 loops=1)\n -> Sort (cost=25209.85..25209.85 rows=2 width=93) (actual\ntime=1803347.97..1804795.41 rows=325302 loops=1)\n -> Hash Join (cost=25208.43..25209.84 rows=2 width=93)\n(actual time=1744714.61..1772790.19 rows=325302 loops=1)\n -> Seq Scan on nation (cost=0.00..1.25 rows=25\nwidth=15) (actual time=13.92..14.84 rows=25 loops=1)\n -> Hash (cost=25208.42..25208.42 rows=2\nwidth=78) (actual time=1744603.74..1744603.74 rows=0 loops=1)\n -> Nested Loop (cost=0.00..25208.42 rows=2\nwidth=78) (actual time=139.21..1740110.04 rows=325302 loops=1)\n -> Nested Loop (cost=0.00..25201.19\nrows=2 width=70) (actual time=122.37..1687895.49 rows=325302 loops=1)\n -> Nested Loop\n(cost=0.00..25187.93 rows=4 width=62) (actual time=121.75..856097.27\nrows=325302 loops=1)\n -> Nested Loop\n(cost=0.00..17468.91 rows=1280 width=24) (actual time=78.43..19698.77\nrows=43424 loops=1)\n -> Seq Scan on part\n(cost=0.00..12399.00 rows=320 width=4) (actual time=29.57..4179.70\nrows=10856 loops=1)\n -> Index Scan using\npartsupp_pkey on partsupp (cost=0.00..15.79 rows=4 width=20) (actual\ntime=1.17..1.33 rows=4 loops=10856)\n -> Index Scan using\nl_partsupp_index on lineitem (cost=0.00..6.02 rows=1 width=38) (actual\ntime=2.83..18.97 rows=7 loops=43424)\n -> Index Scan using orders_pkey\non orders (cost=0.00..3.23 rows=1 width=8) (actual time=2.47..2.50 rows=1\nloops=325302)\n -> Index Scan using supplier_pkey on\nsupplier (cost=0.00..3.01 rows=1 width=8) (actual time=0.08..0.09 rows=1\nloops=325302)\nTotal runtime: 1836375.16 msec\n\n\nIt looks even 
worse. Any other advice, or maybe a query change? Here is the\nquery again:\nSELECT\n nation,\n o_year,\n CAST((sum(amount))AS NUMERIC(10,2))AS sum_profit\nFROM(\n SELECT\n nation.name AS nation,\n EXTRACT(year FROM orders.orderdate) AS o_year,\n\nlineitem.extendedprice*(1-lineitem.discount)-partsupp.supplycost*lineitem.quantity AS amount\n FROM\n part,\n supplier,\n lineitem,\n partsupp,\n orders,\n nation\n WHERE\n supplier.suppkey=lineitem.suppkey\n AND partsupp.suppkey=lineitem.suppkey\n AND partsupp.partkey=lineitem.partkey\n AND part.partkey=lineitem.partkey\n AND orders.orderkey=lineitem.orderkey\n AND supplier.nationkey=nation.nationkey\n AND part.name LIKE '%green%'\n ) AS profit\nGROUP BY\n nation,\n o_year\nORDER BY\n nation,\n o_year DESC;\n\n\nThanks and regards\n\n\n", "msg_date": "Thu, 18 Jul 2002 10:18:35 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] why is postgres estimating so badly? " } ]
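Tom's FIXED_CHAR_SEL figures in the thread above can be sanity-checked with a little arithmetic. The sketch below is a rough model, not the real formula in src/backend/utils/adt/selfuncs.c: it assumes the unanchored-LIKE selectivity is FIXED_CHAR_SEL per literal pattern character times a single wildcard factor of 5.0, with the part table at about 200K rows. Under those assumptions it reproduces both the old rows=1 and the new rows=320 estimates seen in the plans above:

```python
# Rough model of the planner's row estimate for an unanchored LIKE.
# Assumption (hypothetical, for illustration only): selectivity is
# FIXED_CHAR_SEL ** n_fixed_chars times one wildcard factor of 5.0;
# the exact formula in selfuncs.c differs in detail.

def like_rows(table_rows, n_fixed_chars, fixed_char_sel, wildcard_sel=5.0):
    sel = fixed_char_sel ** n_fixed_chars * wildcard_sel
    # The planner never estimates fewer than one matching row.
    return max(1, round(table_rows * sel))

PART_ROWS = 200_000                              # approximate size of part
old = like_rows(PART_ROWS, len("green"), 0.04)   # pre-change FIXED_CHAR_SEL
new = like_rows(PART_ROWS, len("green"), 0.20)   # post-change FIXED_CHAR_SEL
print(old, new)  # 1 320
```

Note that even the improved estimate of 320 is still a factor of roughly 30 below the actual 10856 matching rows, which is why the plan shape still suffers: without statistics on substring frequencies, any constant-based guess for '%green%' can be arbitrarily far off.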
[ { "msg_contents": "hi all. long time.\n\ni spoke w/jan some time ago (in a hurry now -- have to call salvation army \nto have them pick up my couch!).\n\ni need to jump in an discuss/get an assignment off the todo list. i am a cs \ndoctoral student at gmu in va.\n\ni am the best programmer in the world.\n\ni will spend some time soon emailing and reading the todo list.\n\ngreat to be back (rare time now where i own a computer!). kudos to person \nwho advised me earlier -- thank you!\n\nR. W. Kernell\nM. S. Computer Science\nDec. 1995 - ODU.\n\n\n\n\n_________________________________________________________________\nMSN Photos is the easiest way to share and print your photos: \nhttp://photos.msn.com/support/worldwide.aspx\n\n", "msg_date": "Wed, 17 Jul 2002 12:38:04 +0000", "msg_from": "\"Robert Kernell\" <kernell0000@hotmail.com>", "msg_from_op": true, "msg_subject": "need assignment" }, { "msg_contents": "> i spoke w/jan some time ago (in a hurry now -- have to call salvation army\n> to have them pick up my couch!).\n>\n> i need to jump in an discuss/get an assignment off the todo list. i am a cs\n> doctoral student at gmu in va.\n>\n> i am the best programmer in the world.\n\nWow.\n\nChris\n\n\n", "msg_date": "Wed, 17 Jul 2002 23:19:40 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: need assignment" }, { "msg_contents": "On Wed, Jul 17, 2002 at 11:19:40PM +0800, Christopher Kings-Lynne wrote:\n> > i spoke w/jan some time ago (in a hurry now -- have to call salvation army\n> > to have them pick up my couch!).\n> >\n> > i need to jump in an discuss/get an assignment off the todo list. i am a cs\n> > doctoral student at gmu in va.\n> >\n> > i am the best programmer in the world.\n> \n> Wow.\n\nYeah, my reaction as well. Here's hoping there's an invisible smiley\nafter that statement. 
Especially since there's no other content.\n\nRoss\n", "msg_date": "Wed, 17 Jul 2002 10:42:28 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: need assignment" } ]
[ { "msg_contents": "I'd like to implement error codes. I think they would be pretty useful,\nalthough there are a few difficulties in the implementation I'd like\nto get some input on.\n\nShould every elog() have an error code? I'm not sure -- there are many\nelog() calls that will never been seen by the user, since the error\nthey represent will be caught before control reaches the elog (e.g.\nparse errors, internal consistency checks, multiple elog(ERROR)\nfor the same user error, etc.) Perhaps for those error messages\nthat don't have numbers, we could just give them ERRNO_UNKNOWN or\na similar constant.\n\nHow should the backend code signal an error with an error number?\nPerhaps we could report errors with error numbers via a separate\nfunction, which would take the error number as its only param.\nFor example:\n\nerror(ERRNO_REF_INT_VIOLATION);\n\nThe problem here is that many errors will require more information\nthat that, in order for the client to handle them properly. For\nexample, how should a COPY indicate that an RI violation has\noccured? Ideally, we'd like to allow the client application to\nknow that in line a, column b, the literal value 'c' was\nnot found in the referenced column d of the referenced table d.\n\nHow should the error number be sent to the client, and would this\nrequire an FE/BE change? I think we can avoid that: including the\nerror number in the error message itself, make PQgetErrorMessage()\n(and similar funcs) smart enough to remove the error number, and\nadd a separate function (such as PQgetErrorNumber() ) to report\nthe error number, if there is one.\n\nOn a related note, it would be nice to get a consistent style of\npunctuation for elog() messages -- currently, some of them end\nin a period (e.g. \"Database xxx does not exist in the system\ncatalog.\"), while the majority do not. 
Which is better?\n\nAlso, I think it was Bruce who mentioned that it would be nice\nto add function names, source files, and/or line numbers to error\nmessages, instead of the current inconsistent method of sometimes\nspecifying the function name in the elog() message. Would this\nbe a good idea? Should all errors include this information?\n\nIt would be relatively easy to replace elog() with a macro of\nthe form:\n\n#define elog(...) real_elog(__FILE__, __LINE__, __VA_ARGS__)\n\nAnd then adjust real_elog() to report that information as\nappropriate.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 17 Jul 2002 14:27:39 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "error codes" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> Should every elog() have an error code?\n\nI believe we decided that it'd be okay to use one or two codes defined\nlike \"internal error\", \"corrupted data\", etc for all the elogs that are\nnot-supposed-to-happen conditions. What error codes are really for is\ndistinguishing different kinds of user mistakes, and so that's where\nyou need specificness.\n\n> How should the backend code signal an error with an error number?\n\nPlease read some of the archived discussions about this. 
All the points\nyou mention have been brought up before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 15:57:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: error codes " }, { "msg_contents": "On Wed, Jul 17, 2002 at 03:57:56PM -0400, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > Should every elog() have an error code?\n> \n> I believe we decided that it'd be okay to use one or two codes defined\n> like \"internal error\", \"corrupted data\", etc for all the elogs that are\n> not-supposed-to-happen conditions.\n\nOk, makes sense to me.\n\n> > How should the backend code signal an error with an error number?\n> \n> Please read some of the archived discussions about this. All the points\n> you mention have been brought up before.\n\nWoops -- I wasn't aware that this had been discussed before, my apologies.\nI'm reading the archives now...\n\nPeter: are you planning to implement this?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 17 Jul 2002 16:17:58 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: error codes" }, { "msg_contents": "Neil, attached are three email messages dealing with error message\nwording.\n\nI like Tom's idea of coding only the messages that are common/user\nerrors and leaving the others with a catch-all code.\n\nWe now have more elog levels in 7.3, so it should be easier to classify\nthe messages.\n\nI can see this job as several parts:\n\n---------------------------------------------------------------------------\n\nCleanup of error wording, removal of function names. See attached\nemails for possible standard.\n\n---------------------------------------------------------------------------\n\nReporting of file, line, function reporting using GUC/SET variable. 
For\nfunction names I see in the gcc 3.1 docs at\nhttp://gcc.gnu.org/onlinedocs/gcc-3.1/cpp/Standard-Predefined-Macros.html:\n\n\tC99 introduces __func__, and GCC has provided __FUNCTION__ for a long\n\ttime. Both of these are strings containing the name of the current\n\tfunction (there are slight semantic differences; see the GCC manual).\n\tNeither of them is a macro; the preprocessor does not know the name of\n\tthe current function. They tend to be useful in conjunction with\n\t__FILE__ and __LINE__, though.\n\nMy gcc 2.95 (gcc version 2.95.2 19991024) supports both __FUNCTION__ and\n__func__, even though they are not documented in the info manual pages I\nhave. I think we will need a configure.in test for this because it\nisn't a macro you can #ifdef.\n\n---------------------------------------------------------------------------\n\nActual error code numbers/letters. I think the new elog levels will\nhelp with this. We have to decide if we want error numbers, or some\npneumonic like NOATTR or CONSTVIOL. I suggest the latter.\n\n---------------------------------------------------------------------------\n\nI think we have plenty of time to get this done for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Wed, 17 Jul 2002 18:04:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: error codes" }, { "msg_contents": "Neil Conway writes:\n\n> I'd like to implement error codes. I think they would be pretty useful,\n> although there are a few difficulties in the implementation I'd like\n> to get some input on.\n\nOK, allow me to pass on to you the accumulated wisdom on this topic. 
:-)\n\n> Should every elog() have an error code?\n\nThe set of error codes should primarily be that defined by SQL99 part 2\nclause 22 \"Status codes\" and PostgreSQL extensions that follow that\nspirit. That means that all those \"can't happen\" or \"all is lost anyway\"\ntypes should be lumped (perhaps in some implicit way) under \"internal\nerror\". That means, yes, an error code should be returned in every case\nof an error, but it doesn't have to be a distinct error code for every\ncondition.\n\n> How should the backend code signal an error with an error number?\n\n> The problem here is that many errors will require more information\n> that that, in order for the client to handle them properly. For\n> example, how should a COPY indicate that an RI violation has\n> occured? Ideally, we'd like to allow the client application to\n> know that in line a, column b, the literal value 'c' was\n> not found in the referenced column d of the referenced table d.\n\nPrecisely. You will find that SQL99 part 2 clause 19 \"Diagnostics\nmanagement\" defines all the fields that form part of a diagnostic (i.e.,\nerror or notice). This includes for example, fields that contain the name\nand schema of the table that was involved, if appropriate. (Again,\nappropriate PostgreSQL extensions could be made.) It is recommendable\nthat this scheme be followed, so PL/pgSQL and ECPG, to name some\ncandidates, could implement the GET DIAGNOSTICS statement as in the\nstandard. (Notice that, for example, a diagnostic still contains a text\nmessage in addition to a code.)\n\n> How should the error number be sent to the client, and would this\n> require an FE/BE change? 
I think we can avoid that: including the\n> error number in the error message itself, make PQgetErrorMessage()\n> (and similar funcs) smart enough to remove the error number, and\n> add a separate function (such as PQgetErrorNumber() ) to report\n> the error number, if there is one.\n\nI would advise against trying to cram everything into a string. Consider\nthe extra fields explained above. Consider being nice to old clients.\nAlso, libpq allows that more than one error message might arrive per query\ncycle. Currently, the error messages are concatenated. That won't work\nanymore. You need a new API anyway. You need a new API for notices, too.\n\nOne possiblity to justify a protocol change is to break something else\nwith it, like a new copy protocol.\n\n> On a related note, it would be nice to get a consistent style of\n> punctuation for elog() messages -- currently, some of them end\n> in a period (e.g. \"Database xxx does not exist in the system\n> catalog.\"), while the majority do not. Which is better?\n\nYup, we've talked about that some time ago. I have a style guide mostly\nworked out for discussion.\n\n> It would be relatively easy to replace elog() with a macro of\n> the form:\n>\n> #define elog(...) real_elog(__FILE__, __LINE__, __VA_ARGS__)\n>\n> And then adjust real_elog() to report that information as\n> appropriate.\n\nAnd it would be relatively easy to break every old compiler that way...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 18 Jul 2002 00:33:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: error codes" }, { "msg_contents": "> Should every elog() have an error code? I'm not sure -- there are many\n> elog() calls that will never been seen by the user, since the error\n> they represent will be caught before control reaches the elog (e.g.\n> parse errors, internal consistency checks, multiple elog(ERROR)\n> for the same user error, etc.) 
Perhaps for those error messages\n> that don't have numbers, we could just give them ERRNO_UNKNOWN or\n> a similar constant.\n\nIt might be cool to have a little command utility \"pg_error\" or whatever that you\npass an error code to and it prints out a very detailed description of the\nproblem...\n\nChris\n\n", "msg_date": "Thu, 18 Jul 2002 09:57:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: error codes" } ]
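Chris's "pg_error" idea combines naturally with the mnemonic codes (NOATTR, CONSTVIOL) Bruce suggests earlier in the thread. A minimal sketch of such a lookup utility follows; the code names and their descriptions are invented for illustration, and anything not individually catalogued falls into the catch-all "internal error" bucket that Tom proposed for can't-happen conditions:

```python
# Hypothetical sketch of the "pg_error" lookup utility discussed above.
# The mnemonic codes and descriptions are made up for illustration;
# only the catch-all "internal error" bucket follows the plan in the
# thread (one lumped code for not-supposed-to-happen conditions).

ERROR_CATALOG = {
    "NOATTR": "The named column does not exist in the target table.",
    "CONSTVIOL": "A constraint (e.g. referential integrity) was violated.",
}

def describe(code):
    # Anything without its own entry is lumped under "internal error".
    return ERROR_CATALOG.get(code, "internal error: see the server log for details")

print(describe("CONSTVIOL"))
print(describe("XYZZY"))
```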
[ { "msg_contents": "\nOne of my machines has two CPUs, and in some cases I build a pair\nof indexes in parallel to take advantage of this. (I can't seem\nto do an ALTER TABLE ADD PRIMARY KEY in parallel with an index\nbuild, though.) Recently, though, I received the message \"ERROR:\nsimple_heap_update: tuple concurrently updated.\" Can anybody tell\nme more about this error and its consequences?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 18 Jul 2002 09:12:10 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "tuple concurrently updated" }, { "msg_contents": "I guess two transactions updated a tuple concurrently.\nBecause versioning scheme allows old versions can be\nread by another transaction, the old version can be updated too.\n\nFor example,\n\nWe have a row whose value is 1\ncreate table t1 (i1 integer);\ninsert into t1 values(1);\n\nAnd,\n\nTx1 executes update t1 set i1=2 where i1=1\nTx2 executes update t1 set i1=10 where i1=1\n\nNow suppose that\nTx1 read i1 with value 1\nTx2 read i1 with value 1\nTx1 updates i1 to 2 by inserting 2 into t1 according to versioning scheme.\nTx2 tries to update i1 to 10 but fails because it is already updated by Tx1.\n\nBecause you created different index with different transaction,\nThe concurrently updated tuple cannot be any of index node.\n\nIf the index on the same class,\ntwo concurrent CREATE INDEX command can update pg_class.relpages\nat the same time.\n\nI guess that is not a bug of pgsql, but a weak point of\nMVCC DBMS.\n\n\"Curt Sampson\" <cjs@cynic.net> wrote in message\nnews:Pine.NEB.4.44.0207180848400.681-100000@angelic.cynic.net...\n>\n> One of my machines has two CPUs, and in some cases I build a pair\n> of indexes in parallel to take advantage of this. 
(I can't seem\n> to do an ALTER TABLE ADD PRIMARY KEY in parallel with an index\n> build, though.) Recently, though, I received the message \"ERROR:\n> simple_heap_update: tuple concurrently updated.\" Can anybody tell\n> me more about this error and its consequences?\n>\n> cjs\n> --\n> Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n> Don't you know, in this new Dark Age, we're all light. --XTC\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n", "msg_date": "Fri, 26 Jul 2002 05:01:08 +0900", "msg_from": "\"Kangmo, Kim\" <ilvsusie@hanafos.com>", "msg_from_op": false, "msg_subject": "Re: tuple concurrently updated" }, { "msg_contents": "A solution to this problem is not versioning catalog tables but in-place\nupdating them.\nOf course anther transaction that wants to update the same row in the\ncatalog table should wait,\nwhich leads to bad concurrency.\nBut this problem can be solved by commiting every DDL right after its\nexecution successfully ends.\nNote that catalog table updators are DDL, not DML.\nI think that's why Oracle commits on execution of every DDL.\n\n\"Kangmo, Kim\" <ilvsusie@hanafos.com> wrote in message\nnews:ahple2$1rgh$1@news.hub.org...\n> I guess two transactions updated a tuple concurrently.\n> Because versioning scheme allows old versions can be\n> read by another transaction, the old version can be updated too.\n>\n> For example,\n>\n> We have a row whose value is 1\n> create table t1 (i1 integer);\n> insert into t1 values(1);\n>\n> And,\n>\n> Tx1 executes update t1 set i1=2 where i1=1\n> Tx2 executes update t1 set i1=10 where i1=1\n>\n> Now suppose that\n> Tx1 read i1 with value 1\n> Tx2 read i1 with value 1\n> Tx1 updates i1 to 2 by inserting 2 into t1 according to versioning scheme.\n> Tx2 tries to update i1 to 10 but fails because it is already updated by\nTx1.\n>\n> Because 
you created different index with different transaction,\n> The concurrently updated tuple cannot be any of index node.\n>\n> If the index on the same class,\n> two concurrent CREATE INDEX command can update pg_class.relpages\n> at the same time.\n>\n> I guess that is not a bug of pgsql, but a weak point of\n> MVCC DBMS.\n>\n> \"Curt Sampson\" <cjs@cynic.net> wrote in message\n> news:Pine.NEB.4.44.0207180848400.681-100000@angelic.cynic.net...\n> >\n> > One of my machines has two CPUs, and in some cases I build a pair\n> > of indexes in parallel to take advantage of this. (I can't seem\n> > to do an ALTER TABLE ADD PRIMARY KEY in parallel with an index\n> > build, though.) Recently, though, I received the message \"ERROR:\n> > simple_heap_update: tuple concurrently updated.\" Can anybody tell\n> > me more about this error and its consequences?\n> >\n> > cjs\n> > --\n> > Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n> > Don't you know, in this new Dark Age, we're all light. --XTC\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n\n\n", "msg_date": "Fri, 26 Jul 2002 05:19:24 +0900", "msg_from": "\"Kangmo, Kim\" <ilvsusie@hanafos.com>", "msg_from_op": false, "msg_subject": "Re: tuple concurrently updated" }, { "msg_contents": "\"Kangmo, Kim\" <ilvsusie@hanafos.com> writes:\n> If the index on the same class,\n> two concurrent CREATE INDEX command can update pg_class.relpages\n> at the same time.\n\nOr try to, anyway. The problem here is that the code that updates\nsystem catalogs is not prepared to cope with concurrent updates\nto the same tuple.\n\n> I guess that is not a bug of pgsql, but a weak point of\n> MVCC DBMS.\n\nNo, it's not a limitation of MVCC per se, it's only an implementation\nshortcut for catalog updates. 
Fixing this across all system catalog\nupdates seems more trouble than it's worth. It'd be nice if the\nconcurrent-CREATE-INDEX case worked, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Jul 2002 16:25:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuple concurrently updated " }, { "msg_contents": "How do you think about my suggestion to not versioning system catalogs?\n\np.s. It's unbelivable that I got a reply from legendary Tom Lane. :)\n\nBest,\nKim.\n\n\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote in message\nnews:20755.1027628748@sss.pgh.pa.us...\n> \"Kangmo, Kim\" <ilvsusie@hanafos.com> writes:\n> > If the index on the same class,\n> > two concurrent CREATE INDEX command can update pg_class.relpages\n> > at the same time.\n>\n> Or try to, anyway. The problem here is that the code that updates\n> system catalogs is not prepared to cope with concurrent updates\n> to the same tuple.\n>\n> > I guess that is not a bug of pgsql, but a weak point of\n> > MVCC DBMS.\n>\n> No, it's not a limitation of MVCC per se, it's only an implementation\n> shortcut for catalog updates. Fixing this across all system catalog\n> updates seems more trouble than it's worth. It'd be nice if the\n> concurrent-CREATE-INDEX case worked, though.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n", "msg_date": "Fri, 26 Jul 2002 05:34:25 +0900", "msg_from": "\"Kangmo, Kim\" <ilvsusie@hanafos.com>", "msg_from_op": false, "msg_subject": "Re: tuple concurrently updated" }, { "msg_contents": "\"Kangmo, Kim\" wrote:\n> \n> How do you think about my suggestion to not versioning system catalogs?\n> \n> p.s. It's unbelivable that I got a reply from legendary Tom Lane. :)\n> \n> Best,\n> Kim.\n\nI can guess what Tom's going to say, since I argued your position\napprox. 3 years ago. 
Implicitly committing transactions would not allow\nfor rollback of DDL statements. This is a great feature that PostgreSQL\nhas striven for that Oracle lacks. 3 years ago, it seemed to be a\nproblem, when a table created in an aborted transaction was not being\ncorrectly cleaned up. It gets in the way of implementing an ALTER TABLE\nDROP COLUMN easily, since rollback of a dropped column means that the\nunderlying data must be preserved. However, the developers have made\nsuch great progress in properly rolling back DDL statements that it\nwould be a real shame to lose such a great feature. Additionally, it\ncould seriously break a lot of applications out there that do not\nexpect:\n\nINSERT INTO foo VALUES (1);\nCREATE TABLE bar (key integer);\nROLLBACK;\n\nto fail to rollback the INSERT into foo. When the original discussion\ncame up, there were even a few of Oracle developers that didn't know\nOracle was committing their transactions behind their back.\n\nMike Mascari\nmascarm@mascari.com\n", "msg_date": "Thu, 25 Jul 2002 17:00:32 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: tuple concurrently updated" }, { "msg_contents": "On Thu, 25 Jul 2002, Tom Lane wrote:\n\n> \"Kangmo, Kim\" <ilvsusie@hanafos.com> writes:\n> > If the index on the same class,\n> > two concurrent CREATE INDEX command can update pg_class.relpages\n> > at the same time.\n>\n> Or try to, anyway. The problem here is that the code that updates\n> system catalogs is not prepared to cope with concurrent updates\n> to the same tuple.\n\nI see. So the error is basically harmless, and can be rectified anyway\nwith an ANALYZE, right?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Sun, 28 Jul 2002 18:02:44 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Re: tuple concurrently updated " }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Thu, 25 Jul 2002, Tom Lane wrote:\n>> \"Kangmo, Kim\" <ilvsusie@hanafos.com> writes:\n> If the index on the same class,\n> two concurrent CREATE INDEX command can update pg_class.relpages\n> at the same time.\n>> \n>> Or try to, anyway. The problem here is that the code that updates\n>> system catalogs is not prepared to cope with concurrent updates\n>> to the same tuple.\n\n> I see. So the error is basically harmless, and can be rectified anyway\n> with an ANALYZE, right?\n\nOther than the fact that the second CREATE INDEX fails and rolls back,\nthere's no problem ;-)\n\nI was thinking that it might help to have UpdateStats not try to update\nthe pg_class tuple if the correct value is already present. This will\njust narrow the window for failure, not eliminate it; but it'd be a\nsimple change and would probably help some. Another thing to look at\nis merging that routine with setRelhasindex so that a CREATE INDEX\ninvolves only one update to the table's pg_class row, not two.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jul 2002 12:32:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuple concurrently updated " }, { "msg_contents": "On Sun, 28 Jul 2002, Tom Lane wrote:\n\n> Other than the fact that the second CREATE INDEX fails and rolls back,\n> there's no problem ;-)\n\nAgh!\n\nSo what, in the current version of postgres, are my options for\ndoing parallel index builds?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Mon, 29 Jul 2002 01:39:09 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Re: tuple concurrently updated " } ]
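Kim's Tx1/Tx2 walkthrough above can be condensed into a toy model. This is a deliberately simplified sketch: the real backend tracks tuple visibility with xmin/xmax and snapshots rather than a single version counter, but the failure mode is the same, namely that the second updater discovers the tuple version it read is no longer current:

```python
# Toy model of the "tuple concurrently updated" scenario from the thread.
# Greatly simplified: one version counter stands in for xmin/xmax and
# snapshot visibility checks in the real backend.

class Tuple:
    def __init__(self, value):
        self.value = value
        self.version = 1  # bumped on every successful update

def update(tup, seen_version, new_value):
    # An updater may only replace the version it actually read.
    if tup.version != seen_version:
        raise RuntimeError("simple_heap_update: tuple concurrently updated")
    tup.value = new_value
    tup.version += 1

t = Tuple(1)               # create table t1 (i1 integer); insert 1
tx1_snapshot = t.version   # Tx1 reads i1 = 1
tx2_snapshot = t.version   # Tx2 reads i1 = 1, same version
update(t, tx1_snapshot, 2)       # Tx1 succeeds, version moves on
try:
    update(t, tx2_snapshot, 10)  # Tx2 hits the concurrent-update error
except RuntimeError as e:
    print(e)
```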
[ { "msg_contents": "I'm using Postgres 7.2.1 on a dual-Athlon running RedHat 7.3bigmem\nwith 2 Gig of RAM and a 240 Gig RAID 5 (3ware IDE RAID). I just did a\n'vacuum analyze' on the database, however the same query to two\nsimilar tables is coming up quite different. The two tables only\ndiffer in that one (\"center_out_cell\") has an extra int2 field called\n\"target\" which can take up to 8 different values.\n\nHere are the queries:\n\ndb02=# explain select distinct area from center_out_cell where subject\n= 'M' and arm = 'R' and rep = 10 and success = 1 and direction = 1;\nNOTICE: QUERY PLAN:\n\nUnique (cost=87795.47..87795.80 rows=13 width=5)\n -> Sort (cost=87795.47..87795.47 rows=131 width=5)\n -> Seq Scan on center_out_cell (cost=0.00..87790.87 rows=131\nwidth=5)\n\nEXPLAIN\ndb02=# explain select distinct area from circles_cell where subject =\n'M' and arm = 'R' and rep = 10 and success = 1 and direction = 1;\nNOTICE: QUERY PLAN:\n\nUnique (cost=258.36..258.52 rows=6 width=5)\n -> Sort (cost=258.36..258.36 rows=64 width=5)\n -> Index Scan using pk1circles_cell on circles_cell \n(cost=0.00..256.43 rows=64 width=5)\n\nEXPLAIN\n\n\nHere are the definitions for the 2 tables:\n\ndb02=# \\d center_out_cell\n Table \"center_out_cell\"\n Column | Type | Modifiers\n------------+--------------------+-----------\n subject | text |\n arm | character(1) |\n target | smallint |\n rep | integer |\n direction | smallint |\n success | smallint |\n hemisphere | character(1) |\n area | text |\n filenumber | integer |\n dsp_chan | text |\n num_spikes | integer |\n spike_data | double precision[] |\nUnique keys: pk0center_out_cell,\n pk1center_out_cell\n\nwhere: \ndb02=# \\d pk1center_out_cell\nIndex \"pk1center_out_cell\"\n Column | Type\n------------+--------------\n subject | text\n arm | character(1)\n target | smallint\n rep | integer\n hemisphere | character(1)\n area | text\n filenumber | integer\n dsp_chan | text\n direction | smallint\nunique btree\nIndex predicate: 
(success = 1)\n\n\nand\n \ndb02=# \\d pk0center_out_cell\nIndex \"pk0center_out_cell\"\n Column | Type\n------------+--------------\n subject | text\n arm | character(1)\n target | smallint\n rep | integer\n hemisphere | character(1)\n area | text\n filenumber | integer\n dsp_chan | text\n direction | smallint\nunique btree\nIndex predicate: (success = 0)\n\n\ndb02=# \\d circles_cell\n Table \"circles_cell\"\n Column | Type | Modifiers\n------------+--------------------+-----------\n subject | text |\n arm | character(1) |\n rep | integer |\n direction | smallint |\n success | smallint |\n hemisphere | character(1) |\n area | text |\n filenumber | integer |\n dsp_chan | text |\n num_spikes | integer |\n spike_data | double precision[] |\nUnique keys: pk0circles_cell,\n pk1circles_cell\n\nwhere:\n\ndb02=# \\d pk1circles_cell\n Index \"pk1circles_cell\"\n Column | Type\n------------+--------------\n subject | text\n arm | character(1)\n rep | integer\n direction | smallint\n hemisphere | character(1)\n area | text\n filenumber | integer\n dsp_chan | text\nunique btree\nIndex predicate: (success = 1)\n\ndb02=# \\d pk0circles_cell\n Index \"pk0circles_cell\"\n Column | Type\n------------+--------------\n subject | text\n arm | character(1)\n rep | integer\n direction | smallint\n hemisphere | character(1)\n area | text\n filenumber | integer\n dsp_chan | text\nunique btree\nIndex predicate: (success = 0)\n\n\nNow I know that, due to the extra field \"target\", \"center_out_cell\"\ncan be as large as 8 times \"circles_cell\", but according to the cost\nof the planner, the statement is 340 times more costly. I think this\nis because the planner is using the index in the circles_cell case and\nnot in the center_out_cell case. However, I don't pretend to\nunderstand the intricasies of the planner to make an intelligent\nguess. 
I've been trying random changes to postgresql.conf like\nincreasing the shared memory size, changing the random_page_cost size,\netc., but would like some help in trying to speed things up.\n\nHere are some relevant settings from my postgresql.conf (made in an\nattempt to max out buffers):\n\nshared_buffers = 9000 # 2*max_connections, min 16 \nwal_buffers = 32 # min 4\nsort_mem = 64000 # min 32 \nvacuum_mem = 16384 # min 1024\nwal_files = 32\neffective_cache_size = 1000 # default in 8k pages\n\n\nThanks in advance.\n-Tony\n", "msg_date": "17 Jul 2002 17:56:12 -0700", "msg_from": "reina@nsi.edu (Tony Reina)", "msg_from_op": true, "msg_subject": "Planner very slow on same query to slightly different tables" }, { "msg_contents": "reina@nsi.edu (Tony Reina) writes:\n> db02=# explain select distinct area from center_out_cell where subject\n> = 'M' and arm = 'R' and rep = 10 and success = 1 and direction = 1;\n> NOTICE: QUERY PLAN:\n\n> Unique (cost=87795.47..87795.80 rows=13 width=5)\n> -> Sort (cost=87795.47..87795.47 rows=131 width=5)\n> -> Seq Scan on center_out_cell (cost=0.00..87790.87 rows=131\n> width=5)\n\n> Index \"pk1center_out_cell\"\n> Column | Type\n> ------------+--------------\n> subject | text\n> arm | character(1)\n> target | smallint\n> rep | integer\n> hemisphere | character(1)\n> area | text\n> filenumber | integer\n> dsp_chan | text\n> direction | smallint\n> unique btree\n> Index predicate: (success = 1)\n\nI imagine the problem with this index is that there's no constraint for\n\"target\" in the query; so the planner could only use the first two index\ncolumns (subject and arm), which probably isn't very selective. 
The\nindex used in the other query is defined differently:\n\n> db02=# \\d pk1circles_cell\n> Index \"pk1circles_cell\"\n> Column | Type\n> ------------+--------------\n> subject | text\n> arm | character(1)\n> rep | integer\n> direction | smallint\n> hemisphere | character(1)\n> area | text\n> filenumber | integer\n> dsp_chan | text\n> unique btree\n> Index predicate: (success = 1)\n\nThis allows \"rep\" to be used in the indexscan too (and if you were to\ncast properly, viz \"direction = 1::smallint\", then that column could be\nused as well).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 21:47:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Planner very slow on same query to slightly different tables " }, { "msg_contents": "If I understand correctly, I tried specifying the target and even casting \nall of the smallint's, but it still is a slow estimate. Perhaps, this is \njust due to a large amount of data, but my gut is telling me that I have \nsomething wrong here.\n\n\ndb02=# explain select distinct area from center_out_cell where subject = \n'M' and arm = 'R' and rep = 10 and success = 1::smallint and direction = \n1::smallint and target = 3::smallint;\nNOTICE: QUERY PLAN:\nUnique (cost=100105115.88..100105115.93 rows=2 width=5)\n -> Sort (cost=100105115.88..100105115.88 rows=19 width=5)\n -> Seq Scan on center_out_cell (cost=100000000.00..100105115.47 \nrows=19 width=5)\nEXPLAIN\ndb02=# explain select distinct area from center_out_cell where subject = \n'M' and arm = 'R' and rep = 10::int and success = 1::smallint and direction \n= 1::smallint and target = 3::smallint;\nNOTICE: QUERY PLAN:\nUnique (cost=100105115.88..100105115.93 rows=2 width=5)\n -> Sort (cost=100105115.88..100105115.88 rows=19 width=5)\n -> Seq Scan on center_out_cell (cost=100000000.00..100105115.47 \nrows=19 width=5)\nEXPLAIN\ndb02=#\n\n\n-Tony\n\n\n\n\n\n\nAt 09:47 PM 7/17/02 -0400, Tom Lane wrote:\n>reina@nsi.edu (Tony 
Reina) writes:\n> > db02=# explain select distinct area from center_out_cell where subject\n> > = 'M' and arm = 'R' and rep = 10 and success = 1 and direction = 1;\n> > NOTICE: QUERY PLAN:\n>\n> > Unique (cost=87795.47..87795.80 rows=13 width=5)\n> > -> Sort (cost=87795.47..87795.47 rows=131 width=5)\n> > -> Seq Scan on center_out_cell (cost=0.00..87790.87 rows=131\n> > width=5)\n>\n> > Index \"pk1center_out_cell\"\n> > Column | Type\n> > ------------+--------------\n> > subject | text\n> > arm | character(1)\n> > target | smallint\n> > rep | integer\n> > hemisphere | character(1)\n> > area | text\n> > filenumber | integer\n> > dsp_chan | text\n> > direction | smallint\n> > unique btree\n> > Index predicate: (success = 1)\n>\n>I imagine the problem with this index is that there's no constraint for\n>\"target\" in the query; so the planner could only use the first two index\n>columns (subject and arm), which probably isn't very selective. The\n>index used in the other query is defined differently:\n>\n> > db02=# \\d pk1circles_cell\n> > Index \"pk1circles_cell\"\n> > Column | Type\n> > ------------+--------------\n> > subject | text\n> > arm | character(1)\n> > rep | integer\n> > direction | smallint\n> > hemisphere | character(1)\n> > area | text\n> > filenumber | integer\n> > dsp_chan | text\n> > unique btree\n> > Index predicate: (success = 1)\n>\n>This allows \"rep\" to be used in the indexscan too (and if you were to\n>cast properly, viz \"direction = 1::smallint\", then that column could be\n>used as well).\n>\n> regards, tom lane\n\n", "msg_date": "Thu, 18 Jul 2002 09:56:45 -0700", "msg_from": "Tony Reina <reina@nsi.edu>", "msg_from_op": false, "msg_subject": "Re: Planner very slow on same query to slightly" }, { "msg_contents": "As a followup to my slow query plans:\n\nI've experimented with removing the inheritance schema to see how it\naffects my database calls. 
Originally, because several fields of the\nprimary key (subject, arm, rep, direction, success) were common to\nevery table, I made those fields a separate table and had subsequent\ntables inherit the fields. Just out of curiosity, I built a new\ndatabase with the same data but didn't use the inheritance (i.e. each\ntable had its own copy of those common fields). It looks like about a\n20% increase in execution speed when I run my programs side by side.\nI'm not sure if that kind of performance hit should be expected.\nAnyone have an idea about this?\n\n-Tony\n\n\nreina@nsi.edu (Tony Reina) wrote in message news:<5.1.1.6.0.20020718095319.009ecec0@schubert.nsi.edu>...\n> If I understand correctly, I tried specifying the target and even casting \n> all of the smallint's, but it still is a slow estimate. Perhaps, this is \n> just due to a large amount of data, but my gut is telling me that I have \n> something wrong here.\n>\n", "msg_date": "19 Jul 2002 14:33:08 -0700", "msg_from": "reina@nsi.edu (Tony Reina)", "msg_from_op": true, "msg_subject": "Inheritance a burden?" }, { "msg_contents": "On 19 Jul 2002, Tony Reina wrote:\n\n> Just out of curiosity, I built a new\n> database with the same data but didn't use the inheritance (i.e. each\n> table had its own copy of those common fields). It looks like about a\n> 20% increase in execution speed when I run my programs side by side.\n\nHave you tried it using the standard relational method of doing this?\n(I.e., you put the common fields in one table, and the extra fields in\nother tables, along with a foreign key relating the extra fields back\nto the main table.) That would more accurately replicate what you were\ndoing with inheritance.\n\nI have a suspicion, in fact, that inheritance may just be syntactic sugar\nfor doing this and adding a couple of views. 
:-)\n\nAnyway, it could be that by denormalizing the data (copying it to the\nother tables), you reduced the number of joins you do, and so you got a\nperformance increase.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Sat, 20 Jul 2002 14:17:08 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Inheritance a burden?" }, { "msg_contents": "At 02:17 PM 7/20/02 +0900, Curt Sampson wrote:\n\n>Have you tried it using the standard relational method of doing this?\n>(I.e., you put the common fields in one table, and the extra fields in\n>other tables, along with a foreign key relating the extra fields back\n>to the main table.) That would more accurately replicate what you were\n>doing with inheritance.\n>\n>I have a suspicion, in fact, that inheritance may just be syntactic sugar\n>for doing this and adding a couple of views. :-)\n\nYes, I thought this was the case too. I haven't specifically set up foreign \nkeys, but I was under the impression that the \"INHERITS\" command would do this.\n\n>Anyway, it could be that by denormalizing the data (copying it to the\n>other tables), you reduced the number of joins you do, and so you got a\n>performance increase.\n\nYes, I guess this is probably the case although it speaks against \nnormalizing too much. I guess too much of a good thing is bad.\n\n-Tony\n\n", "msg_date": "Mon, 22 Jul 2002 09:17:09 -0700", "msg_from": "Tony Reina <reina@nsi.edu>", "msg_from_op": false, "msg_subject": "Re: Inheritance a burden?" } ]
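The "standard relational method" Curt describes in the thread above — common fields in one table, per-cell extras in another, tied back with a foreign key — can be sketched outside PostgreSQL. The snippet below is a minimal illustration using Python's bundled sqlite3 module as a stand-in for the database; the table and column names are adapted from the thread, not Tony's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Curt's "standard relational method": the common primary-key fields live
# in one table, and each experiment table keeps only its extra fields plus
# a foreign key pointing back at the common row.
cur.executescript("""
CREATE TABLE trial (
    trial_id  INTEGER PRIMARY KEY,
    subject   TEXT, arm TEXT, rep INTEGER,
    direction INTEGER, success INTEGER
);
CREATE TABLE center_out_cell (
    trial_id INTEGER REFERENCES trial(trial_id),
    area     TEXT, filenumber INTEGER
);
""")
cur.execute("INSERT INTO trial VALUES (1, 'M', 'R', 10, 1, 1)")
cur.execute("INSERT INTO center_out_cell VALUES (1, 'M1', 42)")

# Reassembling a full row now costs one join -- the cost the denormalized
# copy (every table carrying its own subject/arm/rep/... fields) avoids.
cur.execute("""
    SELECT t.subject, t.arm, c.area
    FROM trial t JOIN center_out_cell c USING (trial_id)
    WHERE t.subject = 'M' AND t.rep = 10
""")
print(cur.fetchall())
```

The join here is one plausible source of the ~20% runtime difference Tony measured between the inherited and the flattened schemas.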
[ { "msg_contents": "I have committed many support files for CREATE CONVERSION. Default\nconversion procs and conversions are added in initdb. Currently\nsupported conversions are:\n\nUTF-8(UNICODE) <--> SQL_ASCII, ISO-8859-1 to 16, EUC_JP, EUC_KR,\n\t\t EUC_CN, EUC_TW, SJIS, BIG5, GBK, GB18030, UHC,\n\t\t JOHAB, TCVN\n\nEUC_JP <--> SJIS\nEUC_TW <--> BIG5\nMULE_INTERNAL <--> EUC_JP, SJIS, EUC_TW, BIG5\n\nNote that the initial contents of the pg_conversion system catalog are created\nin the initdb process. So doing an initdb is ideal; it's\npossible to add them to your databases by hand, however. To accomplish\nthis:\n\npsql -f your_postgresql_install_path/share/conversion_create.sql your_database\n\nSo I did not bump up the version in catversion.h.\n\nTODO:\nAdd more conversion procs\nAdd [CASCADE|RESTRICT] to DROP CONVERSION\nAdd tuples to pg_depend\nAdd regression tests\nWrite docs\nAdd SQL99 CONVERT command?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 18 Jul 2002 11:04:29 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "CREATE CONVERSION mostly works now" } ]
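One of the conversions Tatsuo lists above (UTF-8 <--> EUC_JP) can be illustrated outside the server with Python's codec machinery. This is only a sketch of what such a conversion does to the bytes, not of the backend's actual conversion procs:

```python
text = "日本語"                      # three kanji characters

utf8 = text.encode("utf-8")         # UTF-8 uses 3 bytes per character here
eucjp = text.encode("euc_jp")       # EUC_JP uses 2 bytes per character here
print(len(utf8), len(eucjp))

# Round trip: decode the EUC_JP bytes and re-encode as UTF-8 --
# conceptually what the server does between storage and client encodings.
assert eucjp.decode("euc_jp").encode("utf-8") == utf8
```

Because the byte lengths differ, a conversion is a real transformation of the message body, which is why a broken lookup path for the conversion proc (as in the startup problem discussed below) matters.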
[ { "msg_contents": "I have faced a problem with encoding conversion while the database is\nstarting up. If postmaster accepts a connection request while in the\nstate, it issues a fatal message \"The database system is starting\nup\". Then the encoding conversion system tries to convert the message\nto client encoding if neccessary (e.g. PGCLIENTENCODING is set for\npostmaster process). Then it calles recomputeNamespacePath() which\ncalls GetUserId(), it ends up with an assersion error (see below).\n\nTo prevent this, I would like to add a public function to postmaster.c\nto know we are in the database starting up phase:\n\nCAC_state BackendState()\n\nComments?\n--\nTatsuo Ishii\n\n#0 0x40103841 in __kill () from /lib/libc.so.6\n#1 0x40103594 in raise (sig=6) at ../sysdeps/posix/raise.c:27\n#2 0x40104c81 in abort () at ../sysdeps/generic/abort.c:88\n#3 0x0817932b in Letext () at excabort.c:27\n#4 0x08179292 in ExcUnCaught (excP=0x821813c, detail=0, data=0x0, \n message=0x820dd40 \"!(((bool) ((CurrentUserId) != ((Oid) 0))))\")\n at exc.c:168\n#5 0x081792d9 in ExcRaise (excP=0x821813c, detail=0, data=0x0, \n message=0x820dd40 \"!(((bool) ((CurrentUserId) != ((Oid) 0))))\")\n at exc.c:185\n#6 0x08177fa2 in ExceptionalCondition (\n conditionName=0x820dd40 \"!(((bool) ((CurrentUserId) != ((Oid) 0))))\", \n exceptionP=0x821813c, detail=0x0, fileName=0x820dcc0 \"miscinit.c\", \n lineNumber=502) at assert.c:70\n#7 0x0817c8b2 in GetUserId () at miscinit.c:502\n#8 0x080a2726 in recomputeNamespacePath () at namespace.c:1301\n#9 0x080a26e8 in FindDefaultConversionProc (for_encoding=0, to_encoding=4)\n at namespace.c:1280\n#10 0x0818930e in pg_do_encoding_conversion (\n src=0xbfffe510 \"FATAL: The database system is starting up\\n\", len=43, \n src_encoding=0, dest_encoding=4) at mbutils.c:108\n#11 0x081896a3 in pg_server_to_client (\n s=0xbfffe510 \"FATAL: The database system is starting up\\n\", len=43)\n at mbutils.c:243\n#12 0x080f2d76 in pq_sendstring (buf=0xbfffe474, \n 
str=0xbfffe510 \"FATAL: The database system is starting up\\n\")\n at pqformat.c:162\n#13 0x08178c16 in send_message_to_frontend (type=20, \n msg=0xbfffe510 \"FATAL: The database system is starting up\\n\")\n at elog.c:750\n#14 0x08178562 in elog (lev=21, \n fmt=0x81ecc60 \"The database system is starting up\") at elog.c:427\n#15 0x0811755f in ProcessStartupPacket (port=0x826f4e0, SSLdone=0)\n at postmaster.c:1176\n#16 0x0811838a in DoBackend (port=0x826f4e0) at postmaster.c:2115\n#17 0x08117f6f in BackendStartup (port=0x826f4e0) at postmaster.c:1863\n#18 0x081171dc in ServerLoop () at postmaster.c:972\n#19 0x08116d46 in PostmasterMain (argc=4, argv=0x8254bf0) at postmaster.c:754\n#20 0x080f33ff in main (argc=4, argv=0xbffff1a4) at main.c:204\n#21 0x400f1fff in __libc_start_main (main=0x80f3230 <main>, argc=4, \n ubp_av=0xbffff1a4, init=0x8069b30 <_init>, fini=0x818a4e0 <_fini>, \n rtld_fini=0x4000c420 <_dl_fini>, stack_end=0xbffff19c)\n at ../sysdeps/generic/libc-start.c:129\n\n", "msg_date": "Thu, 18 Jul 2002 13:55:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "preventing encoding conversion while starting up" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have faced a problem with encoding conversion while the database is\n> starting up. If postmaster accepts a connection request while in the\n> state, it issues a fatal message \"The database system is starting\n> up\". Then the encoding conversion system tries to convert the message\n> to client encoding if neccessary (e.g. PGCLIENTENCODING is set for\n> postmaster process). Then it calles recomputeNamespacePath() which\n> calls GetUserId(), it ends up with an assersion error (see below).\n\nThis seems to me to be a fatal objection to the entire concept of\nstoring encoding status in the database. 
If the low-level client\ncommunication code needs to access that information, then the postmaster\nis broken and is not repairable.\n\n> To prevent this, I would like to add a public function to postmaster.c\n> to know we are in the database starting up phase:\n\nHow will that help?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 09:53:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preventing encoding conversion while starting up " }, { "msg_contents": "On Thu, 2002-07-18 at 06:55, Tatsuo Ishii wrote:\n> I have faced a problem with encoding conversion while the database is\n> starting up. If postmaster accepts a connection request while in the\n> state, it issues a fatal message \"The database system is starting\n> up\". Then the encoding conversion system tries to convert the message\n> to client encoding if neccessary (e.g. PGCLIENTENCODING is set for\n> postmaster process). Then it calles recomputeNamespacePath() which\n> calls GetUserId(), it ends up with an assersion error (see below).\n> \n> To prevent this, I would like to add a public function to postmaster.c\n> to know we are in the database starting up phase:\n\nWhy can't we just open the listening socket _after_ the database has\ncompleted starting up phase ?\n\n------------------\nHannu\n\n", "msg_date": "18 Jul 2002 18:18:26 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: preventing encoding conversion while starting up" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Why can't we just open the listening socket _after_ the database has\n> completed starting up phase ?\n\nThe problem is not just there. The real problem is that with this patch\ninstalled, it is impossible to report startup errors of any kind,\nbecause the client communication mechanism now depends on having working\ndatabase access. 
I regard this as a fatal problem :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 12:57:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preventing encoding conversion while starting up " }, { "msg_contents": "On Thu, 2002-07-18 at 18:57, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Why can't we just open the listening socket _after_ the database has\n> > completed starting up phase ?\n> \n> The problem is not just there. The real problem is that with this patch\n> installed, it is impossible to report startup errors of any kind,\n> because the client communication mechanism now depends on having working\n> database access. I regard this as a fatal problem :-(\n\nSo the right way would be to always start up in us-ascii (7-bit) and\nre-negotiate encodings later ?\n\n-----------\nHannu\n\n", "msg_date": "18 Jul 2002 20:09:43 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: preventing encoding conversion while starting up" }, { "msg_contents": "On Fri, 2002-07-19 at 03:21, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Thu, 2002-07-18 at 18:57, Tom Lane wrote:\n> >> The problem is not just there. The real problem is that with this patch\n> >> installed, it is impossible to report startup errors of any kind,\n> >> because the client communication mechanism now depends on having working\n> >> database access. I regard this as a fatal problem :-(\n> \n> > So the right way would be to always start up in us-ascii (7-bit) and\n> > re-negotiate encodings later ?\n> \n> That might be one way out ... but doesn't it mean breaking the wire\n> protocol? Existing clients aren't likely to know to do that.\n\nIt may be possible to make it compatible with old clients by\n\n1) starting with the same encodings as we always did\n\n2) change the encoding only if both parties agree to do so. 
I think that\nwe could use listen/notify for that\n\nSo client must first ask for certain encoding by (mis)using listen and\nwill then be confirmed by notify\n\nhannu=# listen \"pg_encoding ISO-8859-15\"; \nLISTEN\nhannu=# notify \"pg_encoding ISO-8859-15\"; \nNOTIFY\nAsynchronous NOTIFY 'pg_encoding ISO-8859-15' from backend with pid 2319\nreceived.\nhannu=# \n\nIt would allow us to do it without protocol changes.\n\nNot that i like it though ;(\n\n> It seems like we've collected enough reasons for a protocol change that\n> one might happen for 7.4. I'd rather not have it happen in 7.3, though,\n> because we don't have enough time left to address all the issues I'd\n> like to see addressed...\n\nBut we could start making a list of issues/proposed solution, or we will\nnot have enough time in 7.4 cycle either.\n\n--------------\nHannu\n\n\n", "msg_date": "19 Jul 2002 02:15:11 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: preventing encoding conversion while starting up" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Thu, 2002-07-18 at 18:57, Tom Lane wrote:\n>> The problem is not just there. The real problem is that with this patch\n>> installed, it is impossible to report startup errors of any kind,\n>> because the client communication mechanism now depends on having working\n>> database access. I regard this as a fatal problem :-(\n\n> So the right way would be to always start up in us-ascii (7-bit) and\n> re-negotiate encodings later ?\n\nThat might be one way out ... but doesn't it mean breaking the wire\nprotocol? Existing clients aren't likely to know to do that.\n\nIt seems like we've collected enough reasons for a protocol change that\none might happen for 7.4. 
I'd rather not have it happen in 7.3, though,\nbecause we don't have enough time left to address all the issues I'd\nlike to see addressed...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 18:21:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preventing encoding conversion while starting up " }, { "msg_contents": "> Hannu Krosing <hannu@tm.ee> writes:\n> > On Thu, 2002-07-18 at 18:57, Tom Lane wrote:\n> >> The problem is not just there. The real problem is that with this patch\n> >> installed, it is impossible to report startup errors of any kind,\n> >> because the client communication mechanism now depends on having working\n> >> database access. I regard this as a fatal problem :-(\n> \n> > So the right way would be to always start up in us-ascii (7-bit) and\n> > re-negotiate encodings later ?\n> \n> That might be one way out ... but doesn't it mean breaking the wire\n> protocol? Existing clients aren't likely to know to do that.\n\nNo. We have been doing the encoding negotiation outside the existing\nprotocol. 
After backend goes into the normal idle loop, the client sends a\n\"set client_encoding\" query if it wishes.\n\nBTW, for the problem I reported, what about checking\nIsTransactionState returns true before accessing database to find out\nconversions?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 19 Jul 2002 09:10:53 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: preventing encoding conversion while starting up " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> BTW, for the problem I reported, what about checking\n> IsTransactionState returns true before accessing database to find out\n> conversions?\n\nThe $64 problem here is *what do you do before you can access the database*.\nDetecting whether you can access the database yet is irrelevant unless\nyou can say what you're going to do when the answer is \"no\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 23:33:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preventing encoding conversion while starting up " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > BTW, for the problem I reported, what about checking\n> > IsTransactionState returns true before accessing database to find out\n> > conversions?\n> \n> The $64 problem here is *what do you do before you can access the database*.\n> Detecting whether you can access the database yet is irrelevant unless\n> you can say what you're going to do when the answer is \"no\".\n\nOf course we could do no encoding conversion if the answer is \"no\".\nWhat's wrong with this?\n\nAlso I'm thinking about treating SQL_ASCII encoding as \"special\": if\ndatabase or client encoding is SQL_ASCII, then we could always avoid\nencoding conversion. 
Currently guc assumes the default encoding for\nclient is SQL_ASCII until the conversion system finds the requested client\nencoding (actually the conversion system itself regards SQL_ASCII as\ndefault). This is actually unnecessary right now, but it would minimize\npossible problems in the future. Ideally there should be a special\nencoding \"NO_CONVERSION\"; people seem to treat SQL_ASCII as almost\nidentical to it anyway (remember the days when multibyte was optional).\n--\nTatsuo Ishii\n", "msg_date": "Fri, 19 Jul 2002 12:58:19 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: preventing encoding conversion while starting up " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> The $64 problem here is *what do you do before you can access the database*.\n>> Detecting whether you can access the database yet is irrelevant unless\n>> you can say what you're going to do when the answer is \"no\".\n\n> Of course we could do no encoding conversion if the answer is \"no\".\n> What's wrong with this?\n\nMaybe nothing, if that's what the clients will expect.\n\nI don't actually much care for using IsTransactionState() to try to\ndetermine whether it's safe to look at the database. I'd suggest that\nthe conversion system start up in the no-conversions state, and change\nover to doing a conversion only when explicitly told to --- which would\nhappen in the same late phase of startup that it used to (near the\nSetConfigOption(\"client_encoding\") calls), or at subsequent SET\ncommands. In other words, keep the database lookups firmly out of the\nconversion system proper, and have it do only what it's told. This\nseems much safer than doing a lookup at whatever random time a message\nis generated.\n\n> Also I'm thinking about treating SQL_ASCII encoding as \"special\": if\n> database or client encoding is SQL_ASCII, then we could always avoid\n> encoding conversion.\n\nGood idea. 
I was somewhat troubled by the prospect of added overhead\nfor people who didn't need multibyte at all --- wasn't there a\ncommitment that there wouldn't be any noticeable slowdown from enabling\nmultibyte all the time?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jul 2002 17:07:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preventing encoding conversion while starting up " }, { "msg_contents": "Hi,\n\nI am a new postgresql developer. needed some help with wal/PITR. Can\nsomeone working in this area answer my question?\n(the email looks long but the question is simple :) )\n\nI have been trying to implement undo of transactions using wal. i.e. given\na xid x, postgres can undo all operations of x. For starters, I\nwant to do this in very simple cases i.e. assume x only\ninserts/updates/deletes tuples and does not change database schema. also I\nassume that all of x's wal entries are in one segment.\n\nThe code for this is quite simple if database supports undo or rollback to\na point in time. There is a lot of discussion on the mailing list about\nPITR. I am eagerly waiting for the PITR code to be available on cvs. so\nmy questions are....\n\n1. once PITR has been implemented, infinite play forward will work. Will\nundo also be supported? i.e. can we recover to the past from a \"current\"\nwal log?\nas a very simple scenario---\nxid 1 \" insert record y in relation r\" commit\nxid 2 \" update record x in relation r\" commit\nshutdown\n---now we take database back to start of xid 1.\n\nif answer to qn 1 is no...\n2. my approach is something like this,\nscan log back until start of transaction record\nscan forward until commit record\n\tif record is for transaction x\n\t\tundo(record)\nto undo,\nuse preimage in record and everything else is pretty much same as redo.\ni.e. 
we open the relation, get the desired block and work on it, etc.\ncan someone tell me if this will work?\n\n\nhoping someone currently working on wal/pitr can help me with these\nissues....\n\nthanks,\nDhruv\n\n\nPS.\n\ntransaction dependency tracking\n-------------------------------\nI added support in postgres to do transaction dependency tracking.\nbasically, x depends on y if x reads something written by y. I maintain a\ndependency graph and also a corresponding disk based log that is accessed\nonly at transaction commit. there is a tool which can be used to query\nthis graph. the time overheads are pretty low (< 1%).\nwith a dependency graph a DBA can say \" I want to undo transaction x and\nall transactions that depend on x\".\n\nso now in the second phase, I am looking at undo of transactions. any\nthoughts on this are very welcome....\n\n\n", "msg_date": "Sat, 20 Jul 2002 23:36:56 -0400 (EDT)", "msg_from": "Dhruv Pilania <dhruv@cs.sunysb.edu>", "msg_from_op": false, "msg_subject": "PITR and rollback" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Dhruv Pilania [mailto:dhruv@cs.sunysb.edu]\n> Sent: Saturday, July 20, 2002 11:37 PM\n> To: pgsql-hackers@postgresql.org\n> Cc: pgman@candle.pha.pa.us; richt@multera.com\n> Subject: PITR and rollback\n>\n>\n> Hi,\n>\n> I am a new postgresql developer. needed some help with wal/PITR. Can\n> someone working in this area answer my question?\n> (the email looks long but the question is simple :) )\n>\n> I have been trying to implement undo of transactions using wal. i.e. given\n> a xid x, postgres can undo all operations of x. For starters, I\n> want to do this in very simple cases i.e. assume x only\n> inserts/updates/deletes tuples and does not change database schema. also I\n> assume that all of x's wal entries are in one segment.\n\nStrictly speaking Postgres does not undo transactions but only aborts them.\nIt does not roll back through the log undoing the effects of the\ntransaction. 
It merely sets the state of the transaction in the commit log\n(clog) to aborted. Then application of the tuple visibility rules prevents\nother transactions from seeing any tuples changed by the aborted\ntransaction.\n\n>\n> The code for this is quite simple if database supports undo or rollback to\n> a point in time. There is a lot of discussion on the mailing list about\n> PITR. I am eagerly waiting for the PITR code to be available on cvs. so\n> my questions are....\n\nWhat I implemented was a roll forward to a point in time. That is restore a\nbackup and roll forward through the wal files until you reach a transaction\nthat committed at or after the specified time.\nI should have a context diff for my roll forward implementation available\nagainst current cvs HEAD by the end of the week.\n\n>\n> 1. once PITR has been implemented, infinite play forward will work. Will\n> undo also be supported? i.e. can we recover to the past from a \"current\"\n> wal log?\n> as a very simple scenario---\n> xid 1 \" insert record y in relation r\" commit\n> xid 2 \" update record x in relation r\" commit\n> shutdown\n> ---now we take database back to start of xid 1.\n>\n> if answer to qn 1 is no...\n> 2. my approach is something like this,\n> scan log back until start of transaction record\n> scan forward until commit record\n> \tif record is for transaction x\n> \t\tundo(record)\n> to undo,\n> use preimage in record and everything else is pretty much same as redo.\n\nThe wal file does not contain \"preimages\" only after images.\n\n> i.e. 
we open relation, get desired block and work on it etc.\n> can someone tell me if this will work?\n>\nI did a roll forward to a point in time but I think a roll back to a point\nin time would work like:\nRoll back through the wal files looking for transaction commit records and\nchange the status in the clog to aborted until you reach the first commit\nrecord that aborted before the specified roll back time.\nThe main thing that needs to be implemented here is reading backward through\nthe log files which I'm not sure is possible since the wal records do not\nhave a length suffix (they have a length prefix for reading forward but I\ndon't think they have a length suffix as well).\n\n>\n> hoping someone currently working on wal/pitr can help me on this\n> issues....\n>\n> thanks,\n> Dhruv\n>\n>\n> PS.\n>\n> transaction dependency tracking\n> -------------------------------\n> I added support in postgres to do transaction dependency tracking.\n> basically, x depends on y if x reads something written by y. I maintain a\n> dependency graph and also a corresponding disk based log that is accessed\n> only at transaction commit. there is a tool which can be used to query\n> this graph. the time over heads are pretty low (< 1%).\n> with a dependency graph a DBA can say \" I want to undo transaction x and\n> all transactions that depend on x\".\n>\n> so now in the second phase, I am looking at undo of a transactions. any\n> thoughts on this are very welcome....\n>\n>\n>\n\n", "msg_date": "Mon, 22 Jul 2002 12:52:19 -0400", "msg_from": "Richard Tucker <richt@multera.com>", "msg_from_op": false, "msg_subject": "Re: PITR and rollback" }, { "msg_contents": "\nAny chance you can work on save points/nested transactions? See\ndoc/TODO.detail/transactions for info. I can help explaining the ideas\nin there.\n\n---------------------------------------------------------------------------\n\nDhruv Pilania wrote:\n> Hi,\n> \n> I am a new postgresql developer. 
needed some help with wal/PITR. Can\n> someone working in this area answer my question?\n> (the email looks long but the question is simple :) )\n> \n> I have been trying to implement undo of transactions using wal. i.e. given\n> a xid x, postgres can undo all operations of x. For starters, I\n> want to do this in very simple cases i.e. assume x only\n> inserts/updates/deletes tuples and does not change database schema. also I\n> assume that all of x's wal entries are in one segment.\n> \n> The code for this is quite simple if database supports undo or rollback to\n> a point in time. There is a lot of discussion on the mailing list about\n> PITR. I am eagerly waiting for the PITR code to be available on cvs. so\n> my questions are....\n> \n> 1. once PITR has been implemented, infinite play forward will work. Will\n> undo also be supported? i.e. can we recover to the past from a \"current\"\n> wal log?\n> as a very simple scenario---\n> xid 1 \" insert record y in relation r\" commit\n> xid 2 \" update record x in relation r\" commit\n> shutdown\n> ---now we take database back to start of xid 1.\n> \n> if answer to qn 1 is no...\n> 2. my approach is something like this,\n> scan log back until start of transaction record\n> scan forward until commit record\n> \tif record is for transaction x\n> \t\tundo(record)\n> to undo,\n> use preimage in record and everything else is pretty much same as redo.\n> i.e. we open relation, get desired block and work on it etc.\n> can someone tell me if this will work?\n> \n> \n> hoping someone currently working on wal/pitr can help me on this\n> issues....\n> \n> thanks,\n> Dhruv\n> \n> \n> PS.\n> \n> transaction dependency tracking\n> -------------------------------\n> I added support in postgres to do transaction dependency tracking.\n> basically, x depends on y if x reads something written by y. I maintain a\n> dependency graph and also a corresponding disk based log that is accessed\n> only at transaction commit. 
there is a tool which can be used to query\n> this graph. the time over heads are pretty low (< 1%).\n> with a dependency graph a DBA can say \" I want to undo transaction x and\n> all transactions that depend on x\".\n> \n> so now in the second phase, I am looking at undo of a transactions. any\n> thoughts on this are very welcome....\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jul 2002 18:15:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR and rollback" } ]
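Richard's point in the thread above — that WAL records carry a length prefix for forward reading but no length suffix, so walking the log backward is hard — can be made concrete with a toy record framing. This is an illustrative sketch only, not PostgreSQL's actual WAL record format:

```python
import struct

def append_record(log: bytearray, payload: bytes) -> None:
    # Length prefix (for forward scans) ... payload ... length suffix
    # (for backward scans).  The real WAL records carry only the prefix,
    # which is exactly why stepping backward through them is hard.
    n = struct.pack("<I", len(payload))
    log += n + payload + n

def read_backward(log: bytes):
    # Start at the end; each record's suffix tells us where it begins.
    pos = len(log)
    while pos > 0:
        (n,) = struct.unpack_from("<I", log, pos - 4)
        yield log[pos - 4 - n : pos - 4]
        pos -= n + 8  # payload plus both 4-byte length fields

log = bytearray()
for rec in (b"xid1 insert", b"xid2 update", b"xid2 commit"):
    append_record(log, rec)

print([r.decode() for r in read_backward(log)])
# ['xid2 commit', 'xid2 update', 'xid1 insert']
```

With the suffix present, a recovery routine can start at the end of the log and step record by record toward the beginning — the scan that Dhruv's undo approach and Richard's sketched roll-back-to-a-point-in-time both need.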
[ { "msg_contents": "Does anyone know how I should modify MergeAttributes to support dropped\ncolumns?\n\nIf the parent column is dropped, should I perhaps just instead of going:\n\ndef = makeNode(ColumnDef);\n\nI could go something like:\n\ndef = makeNullNode(); (or whatever the correct function is)\n\nOr should I modify or remove this sort of thing?:\n\ninhSchema = lappend(inhSchema, def);\n\nThis is to stop a new child table from inheriting dropped columns by\ndefault...\n\nAlso, the last thing after that on my checklist is fixing these two:\n\nCREATE CONSTRAINT TRIGGER\nALTER TABLE / ADD FOREIGN KEY\n\nWhere should I do the check for these? For the alter table case, I can\ncheck that the foreign keys and primary keys actually exist in\ncreateForeignKeyConstraint, but that means that it's already done the table\nscan to validate the foreign key.\n\nAnd I can't for the life of me actually find where a CREATE CONSTRAINT\nTRIGGER statement is processed...\n\nChris\n\n", "msg_date": "Thu, 18 Jul 2002 13:48:51 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Bright ideas required for drop column..." }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Does anyone know how I should modify MergeAttributes to support dropped\n> columns?\n\nI think you could get away with just ignoring dropped columns from the\nparent. There's no assumption that column numbers are the same in\nparent and child.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 09:55:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bright ideas required for drop column... " } ]
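Tom's suggestion above — that MergeAttributes can simply ignore dropped parent columns, because child column numbers need not match the parent's — can be modeled in a few lines. This is a toy model in Python, not the backend's actual C code:

```python
def merge_attributes(parent_cols, child_cols):
    """Toy model of inheritance column merging: non-dropped parent
    columns come first, dropped parent columns are skipped entirely,
    and child columns that duplicate a parent name are merged away.
    Column positions are assigned fresh in the result, so parent and
    child numbering need not agree."""
    schema = []
    for name, typ, dropped in parent_cols:
        if dropped:                 # ignore dropped columns from the parent
            continue
        schema.append((name, typ))
    inherited = {name for name, _ in schema}
    for name, typ in child_cols:
        if name not in inherited:   # the real code also reconciles types
            schema.append((name, typ))
    return schema

parent = [("id", "int", False), ("old", "text", True), ("name", "text", False)]
child = [("name", "text"), ("extra", "int")]
print(merge_attributes(parent, child))
# [('id', 'int'), ('name', 'text'), ('extra', 'int')]
```

The dropped column "old" never reaches the child, which is the behavior Christopher is after: new child tables should not inherit dropped columns.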
[ { "msg_contents": "\nBruce wrote:\n> Actual error code numbers/letters. I think the new elog levels will\n> help with this. We have to decide if we want error numbers, or some\n> mnemonic like NOATTR or CONSTVIOL. I suggest the latter.\n\nSince there is an actual standard for error codes, I would strongly suggest\nthat we adhere to it. The standardized codes are SQLSTATE, a char(5) (well standardized\nfor many classes of db errors). Also common, but not so standardized, is SQLCODE,\na long (only a very few are standardized, like 100 = 'no data found').\nAnd also sqlca. Also look at ecpg for sqlcode and sqlca.\n\nA quote from DEC Rdb:\n--------------------------------------------------------------------\n o SQLCODE-This is the original SQL error handling mechanism.\n It is an integer value. SQLCODE differentiates among errors\n (negative numbers), warnings (positive numbers), successful\n completion (0), and a special code of 100, which means no\n data. SQLCODE is a deprecated feature of the ANSI/ISO SQL\n standard.\n\n o SQLCA-This is an extension of the SQLCODE error handling\n mechanism. It contains other context information that\n supplements the SQLCODE value. SQLCA is not part of the\n ANSI/ISO SQL standard. However, many foreign databases such\n as DB2 and ORACLE RDBMS have defined proprietary semantics and\n syntax to implement it.\n\n o SQLSTATE-This is the error handling mechanism for the ANSI/ISO\n SQL standard. The SQLSTATE value is a character string that is\n associated with diagnostic information. 
To use the SQLSTATE\n status parameter, you must specify the SQL92 dialect and\n compile your module using DEC Rdb Version 6.0.\n--------------------------------------------------------------------\n\nAndreas\n", "msg_date": "Thu, 18 Jul 2002 10:01:20 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: error codes" }, { "msg_contents": "Insisting on Andreas' suggestion, why can't we just prefix all error message \nstrings with the SQLState code? So all error messages would have the format\n\nCCSSS - xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n\nWhere CCSSS is the standard SQLState code and the message text is a more \nspecific description.\n\nNote that the standard allows for implementation-defined codes, so we can have \nour own CC classes and all the SSS subclasses that we need.\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n\n", "msg_date": "Tue, 26 Nov 2002 18:57:22 -0500", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: error codes" } ]
[ { "msg_contents": "Hello, i noticed that win32 native stopped working/compiling after the SSL merge.\r\nSo i took the opportunity to fix some stuff:\r\n\r\n1. Made the thing compile (typos & needed definitions) with the new pqsecure_* stuff, and added fe-secure.c to the win32.mak makefile.\r\n2. Fixed some MULTIBYTE compile errors (when building without MB support).\r\n3. Made it so that you can build with debug info: \"nmake -f win32.mak DEBUG=1\".\r\n4. Misc small compiler speedup changes.\r\n\r\nThe resulting .dll has been tested in production, and everything seems ok.\r\nI CC:ed -hackers because i'm not sure about two things:\r\n\r\n1. In libpq-int.h I typedef ssize_t as an int because Visual C (v6.0) doesn't define ssize_t. Is that ok, or is there any standard about what type should be used for ssize_t? \r\n\r\n2. To keep the .dll api consistent regarding MULTIBYTE I just return -1 in fe-connect.c:PQsetClientEncoding() instead of taking away the whole function. I wonder if i should do any compares with the conn->client_encoding and return 0 if nothing would have changed (if so how do i check that?).\r\n\r\nRegards\r\n\r\nMagnus Naeslund\r\n\r\n-- \r\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\r\n Programmer/Networker [|] Magnus Naeslund\r\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-", "msg_date": "Thu, 18 Jul 2002 11:41:51 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "[PATCH] Win32 native fixes after SSL updates (+more)" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nMagnus Naeslund(f) wrote:\n> Hello, i noticed that win32 native stopped working/compiling after the SSL merge.\n> So i took the opportunity to fix some stuff:\n> \n> 1. 
Made the thing compile (typos & needed definitions) with the new pqsecure_* stuff, and added fe-secure.c to the win32.mak makefile.\n> 2. Fixed some MULTIBYTE compile errors (when building without MB support).\n> 3. Made it so that you can build with debug info: \"nmake -f win32.mak DEBUG=1\".\n> 4. Misc small compiler speedup changes.\n> \n> The resulting .dll has been tested in production, and everything seems ok.\n> I CC:ed -hackers because i'm not sure about two things:\n> \n> 1. In libpq-int.h I typedef ssize_t as an int because Visual C (v6.0) doesn't define ssize_t. Is that ok, or is there any standard about what type should be used for ssize_t? \n> \n> 2. To keep the .dll api consistent regarding MULTIBYTE I just return -1 in fe-connect.c:PQsetClientEncoding() instead of taking away the whole function. I wonder if i should do any compares with the conn->client_encoding and return 0 if nothing would have changed (if so how do i check that?).\n> \n> Regards\n> \n> Magnus Naeslund\n> \n> -- \n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> Programmer/Networker [|] Magnus Naeslund\n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> \n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jul 2002 16:24:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [PATCH] Win32 native fixes after SSL updates (+more)" }, { "msg_contents": "\nPatch applied. 
Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nMagnus Naeslund(f) wrote:\n> Hello, i noticed that win32 native stopped working/compiling after the SSL merge.\n> So i took the opportunity to fix some stuff:\n> \n> 1. Made the thing compile (typos & needed definitions) with the new pqsecure_* stuff, and added fe-secure.c to the win32.mak makefile.\n> 2. Fixed some MULTIBYTE compile errors (when building without MB support).\n> 3. Made it so that you can build with debug info: \"nmake -f win32.mak DEBUG=1\".\n> 4. Misc small compiler speedup changes.\n> \n> The resulting .dll has been tested in production, and everything seems ok.\n> I CC:ed -hackers because i'm not sure about two things:\n> \n> 1. In libpq-int.h I typedef ssize_t as an int because Visual C (v6.0) doesn't define ssize_t. Is that ok, or is there any standard about what type should be used for ssize_t? \n> \n> 2. To keep the .dll api consistent regarding MULTIBYTE I just return -1 in fe-connect.c:PQsetClientEncoding() instead of taking away the whole function. I wonder if i should do any compares with the conn->client_encoding and return 0 if nothing would have changed (if so how do i check that?).\n> \n> Regards\n> \n> Magnus Naeslund\n> \n> -- \n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> Programmer/Networker [|] Magnus Naeslund\n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> \n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Jul 2002 01:43:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [PATCH] Win32 native fixes after SSL updates (+more)" } ]
[ { "msg_contents": "Hi,\n\nWhile implementing DROP CONVERSION [CASCADE|RESTRICT], I'm facing\na strange problem. When I try to delete a tuple from pg_depend, I\nget an assertion failure:\n\nTRAP: Failed Assertion(\"!(!((tp.t_data)->t_infomask & (0x4000 |\n0x8000))):\", File: \"heapam.c\", Line: 1315)\n\nThis means that the HEAP_MOVED_IN or HEAP_MOVED_OFF flag is set. Is this\nnormal? I noticed that DELETE, INSERT and VACUUM FULL have been\nperformed over pg_depend in initdb. Is this related to the problem?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 18 Jul 2002 23:05:05 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "HEAP_MOVED_IN or HEAP_MOVED_OFF?" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> While implementing DROP CONVERSION [CASCADE|RESTRICT], I'm facing\n> a strange problem. When I try to delete a tuple from pg_depend, I\n> get an assertion failure:\n\nThis is the bug I complained of in Manfred's HeapTupleHeader patches.\nHe's sent in a patch (not yet reviewed or applied) to fix it. In the\nmeantime, I'd counsel avoiding VACUUM FULL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 10:24:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEAP_MOVED_IN or HEAP_MOVED_OFF? " }, { "msg_contents": "On Thu, 18 Jul 2002 23:05:05 +0900 (JST), Tatsuo Ishii\n<t-ishii@sra.co.jp> wrote:\n>TRAP: Failed Assertion(\"!(!((tp.t_data)->t_infomask & (0x4000 |\n>0x8000))):\", File: \"heapam.c\", Line: 1315)\n\nTatsuo, this is unrelated to *your* work. It is a bug introduced with\nmy heap tuple header changes. Tom Lane reported this bug three\ndays ago: \"[HACKERS] HeapTuple header changes cause core dumps in CVS\ntip\".\n\nA workaround has been posted in reply to this message. And a (IMHO)\nbetter fix is waiting on -patches, making the workaround obsolete. 
If\nyou try the fix: any feedback is much appreciated.\n\nServus\n Manfred\n", "msg_date": "Thu, 18 Jul 2002 16:56:35 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: HEAP_MOVED_IN or HEAP_MOVED_OFF?" } ]
[ { "msg_contents": "Running CVS HEAD from this morning, 'make check' fails due to an\ninitdb failure when initializing pg_depend:\n\ninitializing pg_depend...\n/home/nconway/pgsql/src/test/regress/./tmp_check/install//data/pgsql/bin/initdb:\nline 718: 31128 Segmentation fault (core dumped) \"$PGPATH\"/postgres\n$PGSQL_OPT template1 >/dev/null <<EOF\n\nThat corresponds to the following lines in initdb, which lead to the\ncore dump:\n\nDELETE FROM pg_depend;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_class;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_proc;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_type;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_constraint;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_attrdef;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_language;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_operator;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_rewrite;\nINSERT INTO pg_depend SELECT 0,0,0, tableoid,oid,0, 'p' FROM pg_trigger;\n\nHere's the backtrace from the core dump:\n\n(gdb) bt\n#0 0x0812ff4d in expression_tree_walker (node=0xffffffff, \n walker=0x812b0ac <fix_opids_walker>, context=0x0) at clauses.c:1814\n#1 0x081300cf in expression_tree_walker (node=0x82cb308, \n walker=0x812b0ac <fix_opids_walker>, context=0x0) at clauses.c:1832\n#2 0x0812b0f7 in fix_opids_walker (node=0x82cb308, context=0x0) at setrefs.c:579\n#3 0x081303c6 in expression_tree_walker (node=0x82cb3a0, \n walker=0x812b0ac <fix_opids_walker>, context=0x0) at clauses.c:1930\n#4 0x0812b0f7 in fix_opids_walker (node=0x82cb3a0, context=0x0) at setrefs.c:579\n#5 0x08130397 in expression_tree_walker (node=0x82cb3e0, \n walker=0x812b0ac <fix_opids_walker>, context=0x0) at clauses.c:1925\n#6 0x0812b0f7 in fix_opids_walker (node=0x82cb3e0, context=0x0) at setrefs.c:579\n#7 0x0812b0a7 in fix_opids (node=0x82cb3e0) at 
setrefs.c:569\n#8 0x0812ac0e in fix_expr_references (plan=0x82cbb38, node=0x82cb3e0)\n at setrefs.c:258\n#9 0x0812a805 in set_plan_references (plan=0x82cbb38, rtable=0x82be5d8)\n at setrefs.c:97\n#10 0x08129057 in planner (parse=0x82bdf18) at planner.c:104\n#11 0x08155f81 in pg_plan_query (querytree=0x82bdf18) at postgres.c:493\n#12 0x081561fd in pg_exec_query_string (query_string=0x82bdc90, dest=Debug, \n parse_context=0x8291d60) at postgres.c:759\n#13 0x0815745a in PostgresMain (argc=9, argv=0x827b388, \n username=0x827b9f0 \"nconway\") at postgres.c:1924\n#14 0x081070cd in main (argc=9, argv=0xbffff9a4) at main.c:229\n\nCan someone fix this please?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 18 Jul 2002 11:05:32 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "regression in CVS HEAD" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> Running CVS HEAD from this morning, 'make check' fails due to an\n> initdb failure when initializing pg_depend:\n\nDid you do a full recompile after your last update? I notice Bruce\nhas arbitrarily renumbered expression nodes ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 11:12:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression in CVS HEAD " }, { "msg_contents": "Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > Running CVS HEAD from this morning, 'make check' fails due to an\n> > initdb failure when initializing pg_depend:\n> \n> Did you do a full recompile after your last update? I notice Bruce\n> has arbitrarily renumbered expression nodes ...\n\nYes, I am seeing a failure too. I didn't have time to run a full\nregression after applying those patches last night. I was working on my\nown patch at the same time. 
I will investigate.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jul 2002 11:33:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression in CVS HEAD" }, { "msg_contents": "Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > Running CVS HEAD from this morning, 'make check' fails due to an\n> > initdb failure when initializing pg_depend:\n> \n> Did you do a full recompile after your last update? I notice Bruce\n> has arbitrarily renumbered expression nodes ...\n\nOK, it was the BETWEEN patch that was causing the failure. I asked Rod\nto retest and resubmit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jul 2002 13:02:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression in CVS HEAD" } ]
[ { "msg_contents": "I have been looking at an example of the \"no one parent tuple found\"\nVACUUM error provided by Mario Weilguni. It appears to me that VACUUM\nis getting confused by a tuple that looks like so in pg_filedump:\n\n Item 4 -- Length: 249 Offset: 31616 (0x7b80) Flags: USED\n OID: 0 CID: min(240) max(18) XID: min(5691267) max(6484551)\n Block Id: 1 linp Index: 1 Attributes: 38 Size: 40\n infomask: 0x3503 (HASNULL|HASVARLENA|XMIN_COMMITTED|XMAX_COMMITTED|MARKED_FOR_UPDATE|UPDATED)\n\nNotice that the t_ctid field is not pointing to this tuple, but to a\ndifferent item on the same page (which in fact is an unused item).\nThis causes VACUUM to believe that the tuple is part of an update chain.\nBut in point of fact it is not part of a chain (indeed there are *no*\nchains in the test relation, thus leading to the observed failure).\n\nAs near as I can tell, the sequence of events was:\n\n1. this row was updated by a transaction that stored the updated version\nin lineindex 1, but later aborted. t_ctid is left pointing to linp 1.\n\n2. Some other transaction came along, marked the row FOR UPDATE, and\ncommitted (with no actual update).\n\nSo we now have XMAX_COMMITTED and t_ctid != t_self, which looks way too\nmuch like a tuple that's been updated, when in fact it is the latest\ngood version of its row.\n\nI think an appropriate fix would be to reset t_ctid to equal t_self\nwhenever we clear XMAX_INVALID, which in practice means heap_delete and\nheap_mark4update need to do this. (heap_update also clears\nXMAX_INVALID, but of course it's setting t_ctid to point to the updated\ntuple.)\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 14:01:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "heap_delete, heap_mark4update must reset t_ctid" }, { "msg_contents": "Tom,\n\nWhen/if you have a patch for this, I would like to test it. 
I still\nhave a copy of a database showing the same problem that I would like to\ntest this on when it is ready.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n >I have been looking at an example of the \"no one parent tuple found\"\n >VACUUM error provided by Mario Weilguni. It appears to me that VACUUM\n >is getting confused by a tuple that looks like so in pg_filedump:\n >\n > Item 4 -- Length: 249 Offset: 31616 (0x7b80) Flags: USED\n > OID: 0 CID: min(240) max(18) XID: min(5691267) max(6484551)\n > Block Id: 1 linp Index: 1 Attributes: 38 Size: 40\n > infomask: 0x3503 \n(HASNULL|HASVARLENA|XMIN_COMMITTED|XMAX_COMMITTED|MARKED_FOR_UPDATE|UPDATED)\n >\n >Notice that the t_ctid field is not pointing to this tuple, but to a\n >different item on the same page (which in fact is an unused item).\n >This causes VACUUM to believe that the tuple is part of an update chain.\n >But in point of fact it is not part of a chain (indeed there are *no*\n >chains in the test relation, thus leading to the observed failure).\n >\n >As near as I can tell, the sequence of events was:\n >\n >1. this row was updated by a transaction that stored the updated version\n >in lineindex 1, but later aborted. t_ctid is left pointing to linp 1.\n >\n >2. Some other transaction came along, marked the row FOR UPDATE, and\n >committed (with no actual update).\n >\n >So we now have XMAX_COMMITTED and t_ctid != t_self, which looks way too\n >much like a tuple that's been updated, when in fact it is the latest\n >good version of its row.\n >\n >I think an appropriate fix would be to reset t_ctid to equal t_self\n >whenever we clear XMAX_INVALID, which in practice means heap_delete and\n >heap_mark4update need to do this. 
(heap_update also clears\n >XMAX_INVALID, but of course it's setting t_ctid to point to the updated\n >tuple.)\n >\n >Comments?\n >\n > \n\t\tregards, tom lane\n >\n >---------------------------(end of broadcast)---------------------------\n >TIP 6: Have you searched our list archives?\n >\n >http://archives.postgresql.org\n >\n >\n >\n\n\n\n", "msg_date": "Fri, 19 Jul 2002 10:03:04 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: heap_delete, heap_mark4update must reset t_ctid" }, { "msg_contents": "\nHas this been fixed? I think we did.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I have been looking at an example of the \"no one parent tuple found\"\n> VACUUM error provided by Mario Weilguni. It appears to me that VACUUM\n> is getting confused by a tuple that looks like so in pg_filedump:\n> \n> Item 4 -- Length: 249 Offset: 31616 (0x7b80) Flags: USED\n> OID: 0 CID: min(240) max(18) XID: min(5691267) max(6484551)\n> Block Id: 1 linp Index: 1 Attributes: 38 Size: 40\n> infomask: 0x3503 (HASNULL|HASVARLENA|XMIN_COMMITTED|XMAX_COMMITTED|MARKED_FOR_UPDATE|UPDATED)\n> \n> Notice that the t_ctid field is not pointing to this tuple, but to a\n> different item on the same page (which in fact is an unused item).\n> This causes VACUUM to believe that the tuple is part of an update chain.\n> But in point of fact it is not part of a chain (indeed there are *no*\n> chains in the test relation, thus leading to the observed failure).\n> \n> As near as I can tell, the sequence of events was:\n> \n> 1. this row was updated by a transaction that stored the updated version\n> in lineindex 1, but later aborted. t_ctid is left pointing to linp 1.\n> \n> 2. 
Some other transaction came along, marked the row FOR UPDATE, and\n> committed (with no actual update).\n> \n> So we now have XMAX_COMMITTED and t_ctid != t_self, which looks way too\n> much like a tuple that's been updated, when in fact it is the latest\n> good version of its row.\n> \n> I think an appropriate fix would be to reset t_ctid to equal t_self\n> whenever we clear XMAX_INVALID, which in practice means heap_delete and\n> heap_mark4update need to do this. (heap_update also clears\n> XMAX_INVALID, but of course it's setting t_ctid to point to the updated\n> tuple.)\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Aug 2002 11:50:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: heap_delete, heap_mark4update must reset t_ctid" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Has this been fixed? I think we did.\n\nYes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Aug 2002 11:53:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: heap_delete, heap_mark4update must reset t_ctid " }, { "msg_contents": "\nAs you can see, there is a lot of cruft left in my mailbox, but there\nare some items that we left behind that may be fixable before 7.3.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Has this been fixed? 
I think we did.\n> \n> Yes.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Aug 2002 11:54:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: heap_delete, heap_mark4update must reset t_ctid" } ]
[ { "msg_contents": "Richard:\n\nI can't quite follow this; maybe you sent a draft by accident. If you\nwant to post a patch against 7.2.1, or even better against HEAD in CVS,\nthat would be great. Or if you'd rather point me to your source online,\nthat would be good too.\n\nI just want to clarify though: is this work released to the PostgreSQL\nDevelopment group by Progress and Multera, or do they still claim\ncopyright interest in it?\n\nRegards,\n\tJ.R. Nield\n\n\nOn Thu, 2002-07-18 at 12:56, Richard Tucker wrote:\n> \n> \n> -----Original Message-----\n> From: J. R. Nield [mailto:jrnield@usol.com]\n> Sent: Wednesday, July 17, 2002 8:13 PM\n> To: richt@multera.com\n> Cc: Bruce Momjian\n> Subject: RE: [HACKERS] Issues Outstanding for Point In Time Recovery\n> (PITR)\n> \n> \n> On Wed, 2002-07-17 at 19:25, Richard Tucker wrote:\n> > Regarding hot backup. Our implementation of \"pg_copy\" does a hot backup.\n> > It turns off database checkpointing for the duration of the backup.\n> It backs up\n> > all the files of the database cluster up to the wal file currently being\n> > logged to. It then acquires the WalInsertLock lock long enough to back up\n> > the current wal file.\n> \n> Does it then allow more writes to that WAL file? It would seem like you\n> want to advance the log to the next file, so the sysadmin wouldn't have\n> to choose which one of log-file number 3 he wants to use at restore.\n> \n> > Writes to the wal file are allowed during the backup except for the\n> backing up of the wal file that is current when the\n> > backup completes. That is, the pg_xlog directory is the last directory to\n> be backed up. The wal_files are backed\n> > up in the order they were used. Continued wal file logging is allowed\n> until the backup reaches the current wal\n> > file being written to. To back up this last wal file the WalInsertLock is\n> held until the copy of the wal file\n> > is complete. 
So the backup stops update activity only long enough to copy\n> this last 16mb file.\n> \n> Also, what do you mean by 'turns off checkpointing'? You have to do a\n> checkpoint, or at least flush the buffers, when you start the backup.\n> Otherwise how do you know what LSN to start from at restore?\n> \n> > The pg_control file also gets backed up. It contains the point in the log\n> at which to begin\n> > the redo/roll forward.\n> > Not allowing the redo point to advance while the backup goes on means\n> that the startup processes' crash\n> recovery code will capture all the changes made to the database cluster\n> while the backup was running.\n> \n> \n> Anyway: Yes we'd love to see the code.\n> \n> > In what form would you like me to send the code to you e.g. as a patch,\n> copy our whole source ...\n> \n> Since I've pretty-much got create/drop and index stuff working, if your\n> code does the rest then we should be good to go.\n> \n> ;jrnield\n> \n> \n> --\n> J. R. Nield\n> jrnield@usol.com\n> \n> \n> \n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "18 Jul 2002 14:33:55 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Progress/Multera would release the hot backup/roll forward work to the\nPostgreSQL Development group.\n-regards\nricht\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of J. R. Nield\nSent: Thursday, July 18, 2002 2:34 PM\nTo: richt@multera.com\nCc: Bruce Momjian; PostgreSQL Hacker\nSubject: Re: [HACKERS] Issues Outstanding for Point In Time Recovery\n(PITR)\n\n\nRichard:\n\nI can't quite follow this; maybe you sent a draft by accident. If you\nwant to post a patch against 7.2.1, or even better against HEAD in CVS,\nthat would be great. 
Or if you'd rather point me to your source online,\nthat would be good too.\n\nI just want to clarify though: is this work released to the PostgreSQL\nDevelopment group by Progress and Multera, or do they still claim\ncopyright interest in it?\n\nRegards,\n\tJ.R. Nield\n\n\nOn Thu, 2002-07-18 at 12:56, Richard Tucker wrote:\n>\n>\n> -----Original Message-----\n> From: J. R. Nield [mailto:jrnield@usol.com]\n> Sent: Wednesday, July 17, 2002 8:13 PM\n> To: richt@multera.com\n> Cc: Bruce Momjian\n> Subject: RE: [HACKERS] Issues Outstanding for Point In Time Recovery\n> (PITR)\n>\n>\n> On Wed, 2002-07-17 at 19:25, Richard Tucker wrote:\n> > Regarding hot backup. Our implementation of \"pg_copy\" does a hot\nbackup.\n> > It turns off database checkpointing for the duration of the backup.\n> Backups\n> > all the files of the database cluster up to the wal file currently being\n> > logged to. It then acquires the WalInsertLock lock long enough to\nbackup\n> > the current wal file.\n>\n> Does it then allow more writes to that WAL file? It would seem like you\n> want to advance the log to the next file, so the sysadmin wouldn't have\n> to choose which one of log-file number 3 he wants to use at restore.\n>\n> > Writes to the wal file are allowed during the backup except for the\n> backing of the wal file current when the\n> > backup completes. That is the pg_xlog directory is the last directory\nto\n> be backed up. The wal_files are backed\n> > up in the order they were used. Continued wal file logging is allowed\n> until the backup reaches the current wal\n> > file being written to. To back up this last wal file the WalInsertLock\nis\n> held until the copy of the wal file\n> > is complete. So the backup stops update activity only long enough to\ncopy\n> this last 16mb file.\n>\n> Also, what do you mean by 'turns off checkpointing'. 
You have to do a\n> checkpoint, or at least flush the buffers, when you start the backup.\n> Otherwise how do you know what LSN to start from at restore?\n>\n> > The pg_control file also gets backed up. It contains the point in the\nlog\n> at which to begin\n> > the redo/roll forward.\n> > By not allowing the redo point to advance while the backup goes on means\n> that the startup processes' crash\n> > recovery code will capture all the changes made to the database cluster\n> while the backup was running.\n>\n>\n> Anyway: Yes we'd love to see the code.\n>\n> > In what form would you like me to send the code to you e.g. as a patch,\n> copy our whole source ...\n>\n> Since I've pretty-much got create/drop and index stuff working, if your\n> code does the rest then we should be good to go.\n>\n> ;jrnield\n>\n>\n> --\n> J. R. Nield\n> jrnield@usol.com\n>\n>\n>\n>\n--\nJ. R. Nield\njrnield@usol.com\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Thu, 18 Jul 2002 15:56:20 -0400", "msg_from": "Richard Tucker <richt@multera.com>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" } ]
[ { "msg_contents": "I've been working on the TODO list item \"Add SHOW command to display locks\". The\ncode is basically finished, but I'd like to make sure the user interface is okay\nwith everyone before I send it in to -patches (if you're interested, the patch\nis attached).\n\nRather than adding another SHOW command, I think using a table function\nis a better idea. That's because the information returned by the lock\nlisting code will often need to be correlated with other information in\nthe system catalogs, or sorted/aggregated in various ways (e.g. \"show me\nthe names of all locked relations\", or \"show me the relation with the most\nAccessShareLocks'\"). Written as a table function, the lock listing code\nitself can be fairly simple, and the DBA can write the necessary SQL\nqueries to produce the information he needs. It also makes it easier to\nparse the lock status information, if you're writing (for example) a\nGUI admin tool.\n\nUsage examples:\n\nBasic information returned from function:\n\nnconway=# select * from show_locks();\n relation | database | backendpid | mode | isgranted \n----------+----------+------------+-----------------+-----------\n 16575 | 16689 | 13091 | AccessShareLock | t\n 376 | 0 | 13091 | ExclusiveLock | t\n\nAfter creating a simple relation and starting 2 transactions, one\nof which has acquired the lock and one which is waiting on it:\n\nnconway=# select l.backendpid, l.mode, l.isgranted from show_locks() l,\npg_class c where l.relation = c.oid and c.relname = 'a';\n\n backendpid | mode | isgranted \n------------+-----------------------+-----------\n 13098 | RowExclusiveLock | t\n 13108 | ShareRowExclusiveLock | f\n\nDuring a 128 client pgbench run:\n\npgbench1=# select c.relname, count(l.isgranted) from show_locks() l,\n pg_class c where c.oid = l.relation group by c.relname\n order by count desc;\n relname | count\n---------------------+-------\n accounts | 1081\n tellers | 718\n pg_xactlock | 337\n branches | 208\n 
history | 4\n pg_class | 3\n __show_locks_result | 1\n\nAnd so on -- I think you get the idea.\n\nRegarding performance, the only performance-critical aspect of the patch\nis the place where we need to acquire the LockMgrLock, to ensure that\nwe get a consistent view of data from the lock manager's hash tables.\nThe patch is designed so that this lock is held for as short a period\nas possible: the lock is acquired, the data is copied from shared memory\nto local memory, the lock is released, and then the data is processed.\nAny suggestions on how to optimize performance any further would be\nwelcome.\n\nLet me know if there are any objections or suggestions for improvement.\nIn particular, should we provide some pre-defined views that correlate\nthe show_locks() data with data from the system catalogs? And if so,\nwhich views should be pre-defined?\n\nAlso, should locks on special relations (e.g. pg_xactlock) or on\nsystem catalogs be shown?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Thu, 18 Jul 2002 14:35:42 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "RFC: listing lock status" }, { "msg_contents": "On Thu, Jul 18, 2002 at 02:35:42PM -0400, Neil Conway wrote:\n> I've been working on the TODO list item \"Add SHOW command to display locks\". The\n> code is basically finished, but I'd like to make sure the user interface is okay\n> with everyone before I send it in to -patches (if you're interested, the patch\n> is attached).\n\nWoops, forgot to 'cvs add' a newly created file. 
(Thanks to Joe Conway\nfor letting me know.)\n\nA fixed patch is attached.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Thu, 18 Jul 2002 16:31:30 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: RFC: listing lock status" }, { "msg_contents": "Neil Conway wrote:\n > I've been working on the TODO list item \"Add SHOW command to display\n > locks\". The code is basically finished, but I'd like to make sure the\n > user interface is okay with everyone before I send it in to -patches\n > (if you're interested, the patch is attached).\n >\n > Rather than adding another SHOW command, I think using a table\n > function is a better idea. That's because the information returned by\n > the lock listing code will often need to be correlated with other\n > information in the system catalogs, or sorted/aggregated in various\n > ways (e.g. \"show me the names of all locked relations\", or \"show me\n > the relation with the most AccessShareLocks'\"). Written as a table\n > function, the lock listing code itself can be fairly simple, and the\n > DBA can write the necessary SQL queries to produce the information he\n > needs. It also makes it easier to parse the lock status information,\n > if you're writing (for example) a GUI admin tool.\n\nI'm undoubtedly biased ;-), but I like your approach. Applies and works \nfine here.\n\n > Also, should locks on special relations (e.g. pg_xactlock) or on\n > system catalogs be shown?\n\nMaybe the function should take a boolean parameter to indicate whether\nor not to show locks on objects in pg_* schema?\n\nJoe\n\n", "msg_date": "Thu, 18 Jul 2002 15:12:53 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: RFC: listing lock status" }, { "msg_contents": "On Thu, Jul 18, 2002 at 03:12:53PM -0700, Joe Conway wrote:\n> Neil Conway wrote:\n> > Also, should locks on special relations (e.g. 
pg_xactlock) or on\n> > system catalogs be shown?\n> \n> Maybe the function should take a boolean parameter to indicate whether\n> or not to show locks on objects in pg_* schema?\n\nI had thought about that, but it occurs to me that the DBA can\neffectively choose this for himself using the relID and databaseID\nreturned by the SRF, in combination with pg_database.datlastsysoid.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 18 Jul 2002 19:37:04 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: RFC: listing lock status" }, { "msg_contents": "> > Rather than adding another SHOW command, I think using a table\n> > function is a better idea. That's because the information returned by\n> > the lock listing code will often need to be correlated with other\n> > information in the system catalogs, or sorted/aggregated in various\n> > ways (e.g. \"show me the names of all locked relations\", or \"show me\n> > the relation with the most AccessShareLocks'\"). Written as a table\n> > function, the lock listing code itself can be fairly simple, and the\n> > DBA can write the necessary SQL queries to produce the information he\n> > needs. It also makes it easier to parse the lock status information,\n> > if you're writing (for example) a GUI admin tool.\n\nOut of interest - why do SRFs need to have a table or view defined that\nmatches their return type? Why can't you just create the type for the\nfunction and set it up as a dependency?\n\nChris\n\n", "msg_date": "Fri, 19 Jul 2002 10:02:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: RFC: listing lock status" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Out of interest - why do SRFs need to have a table or view defined that\n> matches their return type? 
Why can't you just create the type for the\n> function and set it up as a dependency?\n> \n\nThe only current way to create a composite type (and hence have it for \nthe function to reference) is to define a table or view.\n\nWe have discussed the need for a stand-alone composite type, but I think \nTom favors doing that as part of a larger project, namely changing the \nassociation of pg_attributes to pg_type instead of pg_class (if I \nunderstand/remember it correctly).\n\nJoe\n\n", "msg_date": "Thu, 18 Jul 2002 19:31:29 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: RFC: listing lock status" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Christopher Kings-Lynne wrote:\n>> Out of interest - why do SRFs need to have a table or view defined that\n>> matches their return type? Why can't you just create the type for the\n>> function and set it up as a dependency?\n\n> The only current way to create a composite type (and hence have it for \n> the function to reference) is to define a table or view.\n\n> We have discussed the need for a stand-alone composite type, but I think \n> Tom favors doing that as part of a larger project, namely changing the \n> association of pg_attributes to pg_type instead of pg_class (if I \n> understand/remember it correctly).\n\nWell, it's not an optional larger project: there just isn't any way ATM\nto define a composite type that's not linked to a pg_class entry. The\nonly way to show fields of a composite type is through pg_attribute\nentries, and pg_attribute entries are bound to pg_class entries not\npg_type entries.\n\nThe clean way to restructure this would be to link pg_attribute entries\nto pg_type not pg_class. 
But that would break approximately every\nclient that looks at the system catalogs.\n\nAn alternative that just now occurred to me is to invent a new \"dummy\"\nrelkind for a pg_class entry that isn't a real relation, but merely a\nfront for a composite type in pg_type. Not sure of all the\nimplications, but it might be worth pursuing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 23:08:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: listing lock status " }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> On Thu, Jul 18, 2002 at 03:12:53PM -0700, Joe Conway wrote:\n>> Maybe the function should take a boolean parameter to indicate whether\n>> or not to show locks on objects in pg_* schema?\n\n> I had thought about that, but it occurs to me that the DBA can\n> effectively choose this for himself using the relID and databaseID\n> returned by the SRF, in combination with pg_database.datlastsysoid.\n\ndatlastsysoid is obsolete IMHO --- it was never trustworthy when one\nconsiders the possibility of OID wraparound.\n\nMy opinion on this point is (a) pgxactlock locks are special and should\nbe shown specially --- in the form of \"xact a waits for xact b\";\n(b) locks on other system catalogs are normal locks and should NOT be\ndiscriminated against. If you have a deadlock condition, the fact that\none of the elements of the lock cycle is on a system catalog isn't going\nto magically get you out of the deadlock; nor can you avoid waiting just\nbecause the lock you need is on a system catalog. 
Since AFAICS the\nonly value of a lock status displayer is to investigate problems of one\nof those two forms, I can fathom no reason at all that anyone would have\nthe slightest use for a displayer that arbitrarily omits some locks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 23:30:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: listing lock status " }, { "msg_contents": "Tom Lane wrote:\n> Well, it's not an optional larger project: there just isn't any way ATM\n> to define a composite type that's not linked to a pg_class entry. The\n> only way to show fields of a composite type is through pg_attribute\n> entries, and pg_attribute entries are bound to pg_class entries not\n> pg_type entries.\n> \n> The clean way to restructure this would be to link pg_attribute entries\n> to pg_type not pg_class. But that would break approximately every\n> client that looks at the system catalogs.\n> \n> An alternative that just now occurred to me is to invent a new \"dummy\"\n> relkind for a pg_class entry that isn't a real relation, but merely a\n> front for a composite type in pg_type. Not sure of all the\n> implications, but it might be worth pursuing.\n> \n\nI was originally thinking the same thing, but I guess I didn't think it \nwould fly. Could we steal the needed parts from CREATE and DROP VIEW, \nexcept make a new relkind 'f' and skip the RULEs? Something like:\n\nCREATE TYPE typename AS ( column_name data_type [, ... ])\n\nFWIW, you can see an example of Oracle's CREATE TYPE here:\nhttp://download-west.oracle.com/otndoc/oracle9i/901_doc/appdev.901/a89856/08_subs.htm#19677\n\nAnd perhaps we could do:\n\nCREATE [ OR REPLACE ] FUNCTION name ( [ argtype [, ...] ] )\nRETURNS [setof] { data_type | (column_name data_type [, ... ]) } . . .\n\nto automatically create a composite type with a system generated name \nfor a function. 
Someone reported a similar syntax for InterBase here:\nhttp://archives.postgresql.org/pgsql-sql/2002-07/msg00011.php\n\nThoughts?\n\nJoe\n\n\n\n", "msg_date": "Thu, 18 Jul 2002 21:39:39 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: RFC: listing lock status" }, { "msg_contents": "On Thu, Jul 18, 2002 at 11:30:46PM -0400, Tom Lane wrote:\n> My opinion on this point is (a) pgxactlock locks are special and should\n> be shown specially --- in the form of \"xact a waits for xact b\";\n\nNot sure how that would fit into a UI based on returning sets of tuples.\n\n> I can fathom no reason at all that anyone would have\n> the slightest use for a displayer that arbitrarily omits some locks.\n\nI agree. I think a reasonable solution is to have the low-level SRF\nreturn data on both pg_xactlock locks and locks on system catalogs.\nIf the DBA wants to disregard one or the other, it should be pretty\neasy to do (particularly pg_xactlock).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 19 Jul 2002 10:11:05 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: RFC: listing lock status" } ]
[ { "msg_contents": "On Thu, 2002-07-18 at 15:36, Richard Tucker wrote:\n> Sorry, I didn't delimit my comments correctly before.\n> I'm offering our pg_copy/roll forward implementation to PostgreSQL.org if it\n> finds it an acceptable contribution. Progress/Multera would hand over all\n> rights to any code accepted.\n> \n> I'd be willing to post a patch to HEAD -- where can I find instructions on\n> how to do this?\n> \n\nInstructions for how to get the latest development source are at:\n http://developer.postgresql.org/TODO/docs/cvs.html\n\nIf you want to post a context diff against 7.2.1 to pgsql-hackers first,\nthat would let us see what it does.\n\nLet me know if there is anything else I can help you with.\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "18 Jul 2002 16:25:07 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" } ]
[ { "msg_contents": "I'm getting the warnings below from current cvs. My configure line looks \nlike:\n\n./configure --enable-integer-datetimes --enable-locale --enable-debug \n--enable-cassert --enable-multibyte --enable-syslog --enable-nls \n--enable-depend\n\nJoe\n\n============================================================\ngcc -O2 -g -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../../src/include -c -o conv.o conv.c -MMD\nconv.c:41: warning: `euc_kr2mic' defined but not used\nconv.c:67: warning: `mic2euc_kr' defined but not used\nconv.c:97: warning: `euc_cn2mic' defined but not used\nconv.c:123: warning: `mic2euc_cn' defined but not used\nconv.c:288: warning: `latin12mic' defined but not used\nconv.c:293: warning: `mic2latin1' defined but not used\nconv.c:298: warning: `latin22mic' defined but not used\nconv.c:303: warning: `mic2latin2' defined but not used\nconv.c:308: warning: `latin32mic' defined but not used\nconv.c:313: warning: `mic2latin3' defined but not used\nconv.c:318: warning: `latin42mic' defined but not used\nconv.c:323: warning: `mic2latin4' defined but not used\nconv.c:375: warning: `koi8r2mic' defined but not used\nconv.c:382: warning: `mic2koi8r' defined but not used\nconv.c:478: warning: `iso2mic' defined but not used\nconv.c:504: warning: `mic2iso' defined but not used\nconv.c:530: warning: `win12512mic' defined but not used\nconv.c:556: warning: `mic2win1251' defined but not used\nconv.c:582: warning: `alt2mic' defined but not used\nconv.c:608: warning: `mic2alt' defined but not used\nconv.c:642: warning: `win12502mic' defined but not used\nconv.c:666: warning: `mic2win1250' defined but not used\n\n", "msg_date": "Thu, 18 Jul 2002 13:56:31 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "compiler warnings from cvs tip" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I'm getting the warnings below from current cvs.\n\nI see the same.\n\n> gcc -O2 -g -Wall 
-Wmissing-prototypes -Wmissing-declarations \n> -I../../../../src/include -c -o conv.o conv.c -MMD\n> conv.c:41: warning: `euc_kr2mic' defined but not used\n> conv.c:67: warning: `mic2euc_kr' defined but not used\n> conv.c:97: warning: `euc_cn2mic' defined but not used\n> conv.c:123: warning: `mic2euc_cn' defined but not used\n> conv.c:288: warning: `latin12mic' defined but not used\n> conv.c:293: warning: `mic2latin1' defined but not used\n> conv.c:298: warning: `latin22mic' defined but not used\n> conv.c:303: warning: `mic2latin2' defined but not used\n> conv.c:308: warning: `latin32mic' defined but not used\n> conv.c:313: warning: `mic2latin3' defined but not used\n> conv.c:318: warning: `latin42mic' defined but not used\n> conv.c:323: warning: `mic2latin4' defined but not used\n> conv.c:375: warning: `koi8r2mic' defined but not used\n> conv.c:382: warning: `mic2koi8r' defined but not used\n> conv.c:478: warning: `iso2mic' defined but not used\n> conv.c:504: warning: `mic2iso' defined but not used\n> conv.c:530: warning: `win12512mic' defined but not used\n> conv.c:556: warning: `mic2win1251' defined but not used\n> conv.c:582: warning: `alt2mic' defined but not used\n> conv.c:608: warning: `mic2alt' defined but not used\n> conv.c:642: warning: `win12502mic' defined but not used\n> conv.c:666: warning: `mic2win1250' defined but not used\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 18:35:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compiler warnings from cvs tip " }, { "msg_contents": "> I'm getting the warnings below from current cvs. My configure line looks \n> like:\n\nDon't worry. As I already posted, there are some encodings that need to be\naddressed for CREATE CONVERSION. These functions are in the process\nof migration. 
I will #ifdef them out to avoid confusion.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 19 Jul 2002 09:14:05 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: compiler warnings from cvs tip" } ]
[ { "msg_contents": "I have completed this TODO item with the following patch, patch applied:\n\n * Merge LockMethodCtl and LockMethodTable into one shared structure\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/storage/lmgr/deadlock.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/storage/lmgr/deadlock.c,v\nretrieving revision 1.10\ndiff -c -r1.10 deadlock.c\n*** src/backend/storage/lmgr/deadlock.c\t20 Jun 2002 20:29:35 -0000\t1.10\n--- src/backend/storage/lmgr/deadlock.c\t18 Jul 2002 22:13:13 -0000\n***************\n*** 170,179 ****\n * only look at regular locks.\n *\n * We must have already locked the master lock before being called.\n- * NOTE: although the lockctl structure appears to allow each lock\n- * table to have a different LWLock, all locks that can block had\n- * better use the same LWLock, else this code will not be adequately\n- * interlocked!\n */\n bool\n DeadLockCheck(PGPROC *proc)\n--- 170,175 ----\n***************\n*** 384,390 ****\n \tHOLDER\t *holder;\n \tSHM_QUEUE *lockHolders;\n \tLOCKMETHODTABLE *lockMethodTable;\n- \tLOCKMETHODCTL *lockctl;\n \tPROC_QUEUE *waitQueue;\n \tint\t\t\tqueue_size;\n \tint\t\t\tconflictMask;\n--- 380,385 ----\n***************\n*** 423,431 ****\n \tif (lock == NULL)\n \t\treturn false;\n \tlockMethodTable = GetLocksMethodTable(lock);\n! \tlockctl = lockMethodTable->ctl;\n! \tnumLockModes = lockctl->numLockModes;\n! \tconflictMask = lockctl->conflictTab[checkProc->waitLockMode];\n \n \t/*\n \t * Scan for procs that already hold conflicting locks.\tThese are\n--- 418,425 ----\n \tif (lock == NULL)\n \t\treturn false;\n \tlockMethodTable = GetLocksMethodTable(lock);\n! \tnumLockModes = lockMethodTable->numLockModes;\n! 
\tconflictMask = lockMethodTable->conflictTab[checkProc->waitLockMode];\n \n \t/*\n \t * Scan for procs that already hold conflicting locks.\tThese are\nIndex: src/backend/storage/lmgr/lock.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/storage/lmgr/lock.c,v\nretrieving revision 1.108\ndiff -c -r1.108 lock.c\n*** src/backend/storage/lmgr/lock.c\t20 Jun 2002 20:29:35 -0000\t1.108\n--- src/backend/storage/lmgr/lock.c\t18 Jul 2002 22:13:19 -0000\n***************\n*** 213,224 ****\n {\n \tint\t\t\ti;\n \n! \tlockMethodTable->ctl->numLockModes = numModes;\n \tnumModes++;\n \tfor (i = 0; i < numModes; i++, prioP++, conflictsP++)\n \t{\n! \t\tlockMethodTable->ctl->conflictTab[i] = *conflictsP;\n! \t\tlockMethodTable->ctl->prio[i] = *prioP;\n \t}\n }\n \n--- 213,224 ----\n {\n \tint\t\t\ti;\n \n! \tlockMethodTable->numLockModes = numModes;\n \tnumModes++;\n \tfor (i = 0; i < numModes; i++, prioP++, conflictsP++)\n \t{\n! \t\tlockMethodTable->conflictTab[i] = *conflictsP;\n! \t\tlockMethodTable->prio[i] = *prioP;\n \t}\n }\n \n***************\n*** 263,269 ****\n \n \t/* each lock table has a non-shared, permanent header */\n \tlockMethodTable = (LOCKMETHODTABLE *)\n! \t\tMemoryContextAlloc(TopMemoryContext, sizeof(LOCKMETHODTABLE));\n \n \t/*\n \t * Lock the LWLock for the table (probably not necessary here)\n--- 263,272 ----\n \n \t/* each lock table has a non-shared, permanent header */\n \tlockMethodTable = (LOCKMETHODTABLE *)\n! \t\tShmemInitStruct(shmemName, sizeof(LOCKMETHODTABLE), &found);\n! \n! \tif (!lockMethodTable)\n! 
\t\telog(FATAL, \"LockMethodTableInit: couldn't initialize %s\", tabName);\n \n \t/*\n \t * Lock the LWLock for the table (probably not necessary here)\n***************\n*** 271,287 ****\n \tLWLockAcquire(LockMgrLock, LW_EXCLUSIVE);\n \n \t/*\n- \t * allocate a control structure from shared memory or attach to it if\n- \t * it already exists.\n- \t */\n- \tsprintf(shmemName, \"%s (ctl)\", tabName);\n- \tlockMethodTable->ctl = (LOCKMETHODCTL *)\n- \t\tShmemInitStruct(shmemName, sizeof(LOCKMETHODCTL), &found);\n- \n- \tif (!lockMethodTable->ctl)\n- \t\telog(FATAL, \"LockMethodTableInit: couldn't initialize %s\", tabName);\n- \n- \t/*\n \t * no zero-th table\n \t */\n \tNumLockMethods = 1;\n--- 274,279 ----\n***************\n*** 291,299 ****\n \t */\n \tif (!found)\n \t{\n! \t\tMemSet(lockMethodTable->ctl, 0, sizeof(LOCKMETHODCTL));\n! \t\tlockMethodTable->ctl->masterLock = LockMgrLock;\n! \t\tlockMethodTable->ctl->lockmethod = NumLockMethods;\n \t}\n \n \t/*\n--- 283,291 ----\n \t */\n \tif (!found)\n \t{\n! \t\tMemSet(lockMethodTable, 0, sizeof(LOCKMETHODTABLE));\n! \t\tlockMethodTable->masterLock = LockMgrLock;\n! \t\tlockMethodTable->lockmethod = NumLockMethods;\n \t}\n \n \t/*\n***************\n*** 342,355 ****\n \tif (!lockMethodTable->holderHash)\n \t\telog(FATAL, \"LockMethodTableInit: couldn't initialize %s\", tabName);\n \n! \t/* init ctl data structures */\n \tLockMethodInit(lockMethodTable, conflictsP, prioP, numModes);\n \n \tLWLockRelease(LockMgrLock);\n \n \tpfree(shmemName);\n \n! \treturn lockMethodTable->ctl->lockmethod;\n }\n \n /*\n--- 334,347 ----\n \tif (!lockMethodTable->holderHash)\n \t\telog(FATAL, \"LockMethodTableInit: couldn't initialize %s\", tabName);\n \n! \t/* init data structures */\n \tLockMethodInit(lockMethodTable, conflictsP, prioP, numModes);\n \n \tLWLockRelease(LockMgrLock);\n \n \tpfree(shmemName);\n \n! \treturn lockMethodTable->lockmethod;\n }\n \n /*\n***************\n*** 476,482 ****\n \t\treturn FALSE;\n \t}\n \n! 
\tmasterLock = lockMethodTable->ctl->masterLock;\n \n \tLWLockAcquire(masterLock, LW_EXCLUSIVE);\n \n--- 468,474 ----\n \t\treturn FALSE;\n \t}\n \n! \tmasterLock = lockMethodTable->masterLock;\n \n \tLWLockAcquire(masterLock, LW_EXCLUSIVE);\n \n***************\n*** 576,582 ****\n \t\t * XXX Doing numeric comparison on the lockmodes is a hack; it'd be\n \t\t * better to use a table. For now, though, this works.\n \t\t */\n! \t\tfor (i = lockMethodTable->ctl->numLockModes; i > 0; i--)\n \t\t{\n \t\t\tif (holder->holding[i] > 0)\n \t\t\t{\n--- 568,574 ----\n \t\t * XXX Doing numeric comparison on the lockmodes is a hack; it'd be\n \t\t * better to use a table. For now, though, this works.\n \t\t */\n! \t\tfor (i = lockMethodTable->numLockModes; i > 0; i--)\n \t\t{\n \t\t\tif (holder->holding[i] > 0)\n \t\t\t{\n***************\n*** 631,637 ****\n \t * join wait queue. Otherwise, check for conflict with already-held\n \t * locks. (That's last because most complex check.)\n \t */\n! \tif (lockMethodTable->ctl->conflictTab[lockmode] & lock->waitMask)\n \t\tstatus = STATUS_FOUND;\n \telse\n \t\tstatus = LockCheckConflicts(lockMethodTable, lockmode,\n--- 623,629 ----\n \t * join wait queue. Otherwise, check for conflict with already-held\n \t * locks. (That's last because most complex check.)\n \t */\n! \tif (lockMethodTable->conflictTab[lockmode] & lock->waitMask)\n \t\tstatus = STATUS_FOUND;\n \telse\n \t\tstatus = LockCheckConflicts(lockMethodTable, lockmode,\n***************\n*** 683,689 ****\n \t\t\tint\t\t\ttmpMask;\n \n \t\t\tfor (i = 1, tmpMask = 2;\n! \t\t\t\t i <= lockMethodTable->ctl->numLockModes;\n \t\t\t\t i++, tmpMask <<= 1)\n \t\t\t{\n \t\t\t\tif (myHolding[i] > 0)\n--- 675,681 ----\n \t\t\tint\t\t\ttmpMask;\n \n \t\t\tfor (i = 1, tmpMask = 2;\n! 
\t\t\t\t i <= lockMethodTable->numLockModes;\n \t\t\t\t i++, tmpMask <<= 1)\n \t\t\t{\n \t\t\t\tif (myHolding[i] > 0)\n***************\n*** 749,756 ****\n \t\t\t\t PGPROC *proc,\n \t\t\t\t int *myHolding)\t\t/* myHolding[] array or NULL */\n {\n! \tLOCKMETHODCTL *lockctl = lockMethodTable->ctl;\n! \tint\t\t\tnumLockModes = lockctl->numLockModes;\n \tint\t\t\tbitmask;\n \tint\t\t\ti,\n \t\t\t\ttmpMask;\n--- 741,747 ----\n \t\t\t\t PGPROC *proc,\n \t\t\t\t int *myHolding)\t\t/* myHolding[] array or NULL */\n {\n! \tint\t\t\tnumLockModes = lockMethodTable->numLockModes;\n \tint\t\t\tbitmask;\n \tint\t\t\ti,\n \t\t\t\ttmpMask;\n***************\n*** 765,771 ****\n \t * each type of lock that conflicts with request.\tBitwise compare\n \t * tells if there is a conflict.\n \t */\n! \tif (!(lockctl->conflictTab[lockmode] & lock->grantMask))\n \t{\n \t\tHOLDER_PRINT(\"LockCheckConflicts: no conflict\", holder);\n \t\treturn STATUS_OK;\n--- 756,762 ----\n \t * each type of lock that conflicts with request.\tBitwise compare\n \t * tells if there is a conflict.\n \t */\n! \tif (!(lockMethodTable->conflictTab[lockmode] & lock->grantMask))\n \t{\n \t\tHOLDER_PRINT(\"LockCheckConflicts: no conflict\", holder);\n \t\treturn STATUS_OK;\n***************\n*** 798,804 ****\n \t * locks held by other processes. If one of these conflicts with the\n \t * kind of lock that I want, there is a conflict and I have to sleep.\n \t */\n! \tif (!(lockctl->conflictTab[lockmode] & bitmask))\n \t{\n \t\t/* no conflict. OK to get the lock */\n \t\tHOLDER_PRINT(\"LockCheckConflicts: resolved\", holder);\n--- 789,795 ----\n \t * locks held by other processes. If one of these conflicts with the\n \t * kind of lock that I want, there is a conflict and I have to sleep.\n \t */\n! \tif (!(lockMethodTable->conflictTab[lockmode] & bitmask))\n \t{\n \t\t/* no conflict. 
OK to get the lock */\n \t\tHOLDER_PRINT(\"LockCheckConflicts: resolved\", holder);\n***************\n*** 918,924 ****\n \t\t * needed, will happen in xact cleanup (see above for motivation).\n \t\t */\n \t\tLOCK_PRINT(\"WaitOnLock: aborting on lock\", lock, lockmode);\n! \t\tLWLockRelease(lockMethodTable->ctl->masterLock);\n \t\telog(ERROR, \"deadlock detected\");\n \t\t/* not reached */\n \t}\n--- 909,915 ----\n \t\t * needed, will happen in xact cleanup (see above for motivation).\n \t\t */\n \t\tLOCK_PRINT(\"WaitOnLock: aborting on lock\", lock, lockmode);\n! \t\tLWLockRelease(lockMethodTable->masterLock);\n \t\telog(ERROR, \"deadlock detected\");\n \t\t/* not reached */\n \t}\n***************\n*** 1014,1020 ****\n \t\treturn FALSE;\n \t}\n \n! \tmasterLock = lockMethodTable->ctl->masterLock;\n \tLWLockAcquire(masterLock, LW_EXCLUSIVE);\n \n \t/*\n--- 1005,1011 ----\n \t\treturn FALSE;\n \t}\n \n! \tmasterLock = lockMethodTable->masterLock;\n \tLWLockAcquire(masterLock, LW_EXCLUSIVE);\n \n \t/*\n***************\n*** 1109,1115 ****\n \t * granted locks might belong to some waiter, who could now be\n \t * awakened because he doesn't conflict with his own locks.\n \t */\n! \tif (lockMethodTable->ctl->conflictTab[lockmode] & lock->waitMask)\n \t\twakeupNeeded = true;\n \n \tif (lock->nRequested == 0)\n--- 1100,1106 ----\n \t * granted locks might belong to some waiter, who could now be\n \t * awakened because he doesn't conflict with his own locks.\n \t */\n! \tif (lockMethodTable->conflictTab[lockmode] & lock->waitMask)\n \t\twakeupNeeded = true;\n \n \tif (lock->nRequested == 0)\n***************\n*** 1208,1215 ****\n \t\treturn FALSE;\n \t}\n \n! \tnumLockModes = lockMethodTable->ctl->numLockModes;\n! \tmasterLock = lockMethodTable->ctl->masterLock;\n \n \tLWLockAcquire(masterLock, LW_EXCLUSIVE);\n \n--- 1199,1206 ----\n \t\treturn FALSE;\n \t}\n \n! \tnumLockModes = lockMethodTable->numLockModes;\n! 
\tmasterLock = lockMethodTable->masterLock;\n \n \tLWLockAcquire(masterLock, LW_EXCLUSIVE);\n \n***************\n*** 1264,1270 ****\n \t\t\t\t\t * Read comments in LockRelease\n \t\t\t\t\t */\n \t\t\t\t\tif (!wakeupNeeded &&\n! \t\t\t\t\tlockMethodTable->ctl->conflictTab[i] & lock->waitMask)\n \t\t\t\t\t\twakeupNeeded = true;\n \t\t\t\t}\n \t\t\t}\n--- 1255,1261 ----\n \t\t\t\t\t * Read comments in LockRelease\n \t\t\t\t\t */\n \t\t\t\t\tif (!wakeupNeeded &&\n! \t\t\t\t\tlockMethodTable->conflictTab[i] & lock->waitMask)\n \t\t\t\t\t\twakeupNeeded = true;\n \t\t\t\t}\n \t\t\t}\n***************\n*** 1355,1362 ****\n \n \tsize += MAXALIGN(sizeof(PROC_HDR)); /* ProcGlobal */\n \tsize += maxBackends * MAXALIGN(sizeof(PGPROC));\t\t/* each MyProc */\n! \tsize += MAX_LOCK_METHODS * MAXALIGN(sizeof(LOCKMETHODCTL)); /* each\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t * lockMethodTable->ctl */\n \n \t/* lockHash table */\n \tsize += hash_estimate_size(max_table_size, sizeof(LOCK));\n--- 1346,1353 ----\n \n \tsize += MAXALIGN(sizeof(PROC_HDR)); /* ProcGlobal */\n \tsize += maxBackends * MAXALIGN(sizeof(PGPROC));\t\t/* each MyProc */\n! \tsize += MAX_LOCK_METHODS * MAXALIGN(sizeof(LOCKMETHODTABLE)); /* each\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t * lockMethodTable */\n \n \t/* lockHash table */\n \tsize += hash_estimate_size(max_table_size, sizeof(LOCK));\nIndex: src/backend/storage/lmgr/proc.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/storage/lmgr/proc.c,v\nretrieving revision 1.122\ndiff -c -r1.122 proc.c\n*** src/backend/storage/lmgr/proc.c\t13 Jul 2002 01:02:14 -0000\t1.122\n--- src/backend/storage/lmgr/proc.c\t18 Jul 2002 22:13:20 -0000\n***************\n*** 503,510 ****\n \t\t LOCK *lock,\n \t\t HOLDER *holder)\n {\n! \tLOCKMETHODCTL *lockctl = lockMethodTable->ctl;\n! 
\tLWLockId\tmasterLock = lockctl->masterLock;\n \tPROC_QUEUE *waitQueue = &(lock->waitProcs);\n \tint\t\t\tmyHeldLocks = MyProc->heldLocks;\n \tbool\t\tearly_deadlock = false;\n--- 503,509 ----\n \t\t LOCK *lock,\n \t\t HOLDER *holder)\n {\n! \tLWLockId\tmasterLock = lockMethodTable->masterLock;\n \tPROC_QUEUE *waitQueue = &(lock->waitProcs);\n \tint\t\t\tmyHeldLocks = MyProc->heldLocks;\n \tbool\t\tearly_deadlock = false;\n***************\n*** 537,546 ****\n \t\tfor (i = 0; i < waitQueue->size; i++)\n \t\t{\n \t\t\t/* Must he wait for me? */\n! \t\t\tif (lockctl->conflictTab[proc->waitLockMode] & myHeldLocks)\n \t\t\t{\n \t\t\t\t/* Must I wait for him ? */\n! \t\t\t\tif (lockctl->conflictTab[lockmode] & proc->heldLocks)\n \t\t\t\t{\n \t\t\t\t\t/*\n \t\t\t\t\t * Yes, so we have a deadlock.\tEasiest way to clean\n--- 536,545 ----\n \t\tfor (i = 0; i < waitQueue->size; i++)\n \t\t{\n \t\t\t/* Must he wait for me? */\n! \t\t\tif (lockMethodTable->conflictTab[proc->waitLockMode] & myHeldLocks)\n \t\t\t{\n \t\t\t\t/* Must I wait for him ? */\n! \t\t\t\tif (lockMethodTable->conflictTab[lockmode] & proc->heldLocks)\n \t\t\t\t{\n \t\t\t\t\t/*\n \t\t\t\t\t * Yes, so we have a deadlock.\tEasiest way to clean\n***************\n*** 553,559 ****\n \t\t\t\t\tbreak;\n \t\t\t\t}\n \t\t\t\t/* I must go before this waiter. Check special case. */\n! \t\t\t\tif ((lockctl->conflictTab[lockmode] & aheadRequests) == 0 &&\n \t\t\t\t\tLockCheckConflicts(lockMethodTable,\n \t\t\t\t\t\t\t\t\t lockmode,\n \t\t\t\t\t\t\t\t\t lock,\n--- 552,558 ----\n \t\t\t\t\tbreak;\n \t\t\t\t}\n \t\t\t\t/* I must go before this waiter. Check special case. */\n! 
\t\t\t\tif ((lockMethodTable->conflictTab[lockmode] & aheadRequests) == 0 &&\n \t\t\t\t\tLockCheckConflicts(lockMethodTable,\n \t\t\t\t\t\t\t\t\t lockmode,\n \t\t\t\t\t\t\t\t\t lock,\n***************\n*** 725,731 ****\n void\n ProcLockWakeup(LOCKMETHODTABLE *lockMethodTable, LOCK *lock)\n {\n- \tLOCKMETHODCTL *lockctl = lockMethodTable->ctl;\n \tPROC_QUEUE *waitQueue = &(lock->waitProcs);\n \tint\t\t\tqueue_size = waitQueue->size;\n \tPGPROC\t *proc;\n--- 724,729 ----\n***************\n*** 746,752 ****\n \t\t * Waken if (a) doesn't conflict with requests of earlier waiters,\n \t\t * and (b) doesn't conflict with already-held locks.\n \t\t */\n! \t\tif ((lockctl->conflictTab[lockmode] & aheadRequests) == 0 &&\n \t\t\tLockCheckConflicts(lockMethodTable,\n \t\t\t\t\t\t\t lockmode,\n \t\t\t\t\t\t\t lock,\n--- 744,750 ----\n \t\t * Waken if (a) doesn't conflict with requests of earlier waiters,\n \t\t * and (b) doesn't conflict with already-held locks.\n \t\t */\n! \t\tif ((lockMethodTable->conflictTab[lockmode] & aheadRequests) == 0 &&\n \t\t\tLockCheckConflicts(lockMethodTable,\n \t\t\t\t\t\t\t lockmode,\n \t\t\t\t\t\t\t lock,\nIndex: src/include/storage/lock.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/storage/lock.h,v\nretrieving revision 1.61\ndiff -c -r1.61 lock.h\n*** src/include/storage/lock.h\t20 Jun 2002 20:29:52 -0000\t1.61\n--- src/include/storage/lock.h\t18 Jul 2002 22:13:22 -0000\n***************\n*** 62,78 ****\n * There is normally only one lock method, the default one.\n * If user locks are enabled, an additional lock method is present.\n *\n- * LOCKMETHODCTL and LOCKMETHODTABLE are split because the first lives\n- * in shared memory. (There isn't any really good reason for the split.)\n- * LOCKMETHODTABLE exists in private memory. 
Both are created by the\n- * postmaster and should be the same in all backends.\n- */\n- \n- /*\n * This is the control structure for a lock table.\tIt\n * lives in shared memory.\tThis information is the same\n * for all backends.\n *\n * lockmethod -- the handle used by the lock table's clients to\n *\t\trefer to the type of lock table being used.\n *\n--- 62,75 ----\n * There is normally only one lock method, the default one.\n * If user locks are enabled, an additional lock method is present.\n *\n * This is the control structure for a lock table.\tIt\n * lives in shared memory.\tThis information is the same\n * for all backends.\n *\n+ * lockHash -- hash table holding per-locked-object lock information\n+ *\n+ * holderHash -- hash table holding per-lock-holder lock information\n+ *\n * lockmethod -- the handle used by the lock table's clients to\n *\t\trefer to the type of lock table being used.\n *\n***************\n*** 88,115 ****\n *\t\tstarvation). XXX this field is not actually used at present!\n *\n * masterLock -- synchronizes access to the table\n */\n! typedef struct LOCKMETHODCTL\n {\n \tLOCKMETHOD\tlockmethod;\n \tint\t\t\tnumLockModes;\n \tint\t\t\tconflictTab[MAX_LOCKMODES];\n \tint\t\t\tprio[MAX_LOCKMODES];\n \tLWLockId\tmasterLock;\n- } LOCKMETHODCTL;\n- \n- /*\n- * Eack backend has a non-shared lock table header.\n- *\n- * lockHash -- hash table holding per-locked-object lock information\n- * holderHash -- hash table holding per-lock-holder lock information\n- * ctl - shared control structure described above.\n- */\n- typedef struct LOCKMETHODTABLE\n- {\n- \tHTAB\t *lockHash;\n- \tHTAB\t *holderHash;\n- \tLOCKMETHODCTL *ctl;\n } LOCKMETHODTABLE;\n \n \n--- 85,101 ----\n *\t\tstarvation). XXX this field is not actually used at present!\n *\n * masterLock -- synchronizes access to the table\n+ *\n */\n! 
typedef struct LOCKMETHODTABLE\n {\n+ \tHTAB\t *lockHash;\n+ \tHTAB\t *holderHash;\n \tLOCKMETHOD\tlockmethod;\n \tint\t\t\tnumLockModes;\n \tint\t\t\tconflictTab[MAX_LOCKMODES];\n \tint\t\t\tprio[MAX_LOCKMODES];\n \tLWLockId\tmasterLock;\n } LOCKMETHODTABLE;", "msg_date": "Thu, 18 Jul 2002 19:07:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "lock item completed" } ]
[ { "msg_contents": "I have completed the trivial TODO item:\n\n\t* -HOLDER/HOLDERTAG rename to PROCLOCK/PROCLOCKTAG (Bruce)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jul 2002 20:19:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "TODO lock item" } ]
[ { "msg_contents": "DON'T APPLY THIS PATCH!\n\nHi All,\n\nThis is the patch of what I've done so far. I'd value any feedback!\n\nThe only thing that I'm not checking (as far as I know) is ALTER TABLE/ADD\nFOREIGN KEY and CREATE CONSTRAINT TRIGGER, as I'm still trying to figure out\nthe best place to make the change.\n\nEvery other command is updated.\n\nDependency support is commented out until I figure out how to do it...\n\nIt includes the most detailed regression test I've ever written, but no\nexpected results until I fix foreign keys.\n\nPlease have a look and see if there's anything I've overlooked...\n\nThanks,\n\nChris", "msg_date": "Fri, 19 Jul 2002 16:50:24 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Demo patch for DROP COLUMN" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> This is the patch of what I've done so far. I'd value any feedback!\n\nA few comments ---\n\nYou really really should put in the wrapper routines that were suggested\nfor SearchSysCache, as this would avoid a whole lot of duplicated code;\nmost of the explicit tests for COLUMN_IS_DROPPED that are in your patch\ncould be replaced with that technique, reducing code size and increasing\nreadability.\n\nThe implementation routine for DROP COLUMN is wrong; it needs to be\nsplit into two parts, one that calls performDeletion and one that's\ncalled by same. AlterTableDropConstraint might be a useful model,\nalthough looking at it I wonder whether it has the permissions checks\nright. (Should the owner of a table be allowed to drop constraints on\nchild tables he doesn't own? If not, we should recurse to\nAlterTableDropConstraint rather than calling RemoveRelChecks directly.)\nAlso, you have\n\n! \t * Propagate to children if desired\n! \t * @@ THIS BEHAVIOR IS BROKEN, HOWEVER IT MATCHES RENAME COLUMN @@\n\nWhat's broken about it? 
RENAME probably is broken, in that it should\nnever allow non-recursion, but DROP doesn't have that issue.\n\n\n! \tfor (;;) {\n! \t\tif (SearchSysCacheExists(ATTNAME,\n! \t\t\t\t\t\t\t\t ObjectIdGetDatum(myrelid),\n! \t\t\t\t\t\t\t\t PointerGetDatum(newattname),\n! \t\t\t\t\t\t\t\t 0, 0)) {\n! \t\t\tsnprintf(newattname, NAMEDATALEN, \"___pg_dropped%d %d\", ++n, attnum);\n! \t\t}\n! \t\telse break;\n! \t}\n\nWhy not just\n\n\twhile (SearchSysCacheExists(...))\n\t{\n\t\tsnprintf(... ++n ...);\n\t}\n\nMore to the point, though, this is a waste of time. Since you're using\na physical column number there can be no conflict against the names of\nother dropped columns. There might be a conflict against a user column\nname, but the answer to that is to choose a weird name in the first\nplace; the hack above only avoids half of the problem, namely when the\nuser column already exists. You're still wrong if he tries to do ALTER\nTABLE ADD COLUMN ___pg_dropped1 later on. Therefore what you want to\ndo is use a name that is (a) very unlikely to be used in practice, and\n(b) predictable so that we can document it and say \"tough, you can't use\nthat name\". I'd suggest something like\n\t........pg.dropped.%d........\nNote the use of dots instead of underscores --- we may as well pick a\nname that's not valid as an unquoted identifier. (Dashes or other\nchoices would work as well. I'm not in favor of control characters\nor whitespace in the name though.)\n\n foreach(c, rte->eref->colnames)\n {\n attnum++;\n! if (strcmp(strVal(lfirst(c)), colname) == 0)\n {\n if (result)\n elog(ERROR, \"Column reference \\\"%s\\\" is ambiguous\", colname);\n--- 271,278 ----\n foreach(c, rte->eref->colnames)\n {\n attnum++;\n! /* Column name must be non-NULL and be a string match */\n! 
if (lfirst(c) != NULL && strcmp(strVal(lfirst(c)), colname) == 0)\n {\n if (result)\n elog(ERROR, \"Column reference \\\"%s\\\" is ambiguous\", colname);\n\nAlthough I think it was my suggestion to put nulls in the eref lists,\nit sure looks messy. What happens if you just allow the dropped-column\nnames to be used in eref? After getting a match you'll need to check if\nthe column is actually valid, but I think maybe this can be combined\nwith lookups that will occur anyway. Probably it'd net out being fewer\nchanges in total.\n\nThe proposed pg_dump change will NOT work, since it will cause pg_dump's\nidea of column numbers not to agree with reality, thus breaking\npg_attrdef lookups and probably other places. Probably best bet is to\nactually retrieve attisdropped (or constant 'f' for older releases),\nstore dropped attrs in the pg_dump datastructures anyway, and then omit\nthem at the time of listing out the table definition.\n\ntab-complete query is wrong (try it)...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jul 2002 17:51:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Demo patch for DROP COLUMN " }, { "msg_contents": "> You really really should put in the wrapper routines that were suggested\n> for SearchSysCache, as this would avoid a whole lot of duplicated code;\n> most of the explicit tests for COLUMN_IS_DROPPED that are in your patch\n> could be replaced with that technique, reducing code size and increasing\n> readability.\n\nThing is that not all the commands use SearchSysCache (some use get_attnum\nfor instance), and you'd still need to go thru all the commands and change\nthem to use SearchSysCacheNotDropped(blah). I'll do it tho, if you think\nit's a better way.\n\n> The implementation routine for DROP COLUMN is wrong; it needs to be\n> split into two parts, one that calls performDeletion and one that's\n> called by same. 
AlterTableDropConstraint might be a useful model,\n> although looking at it I wonder whether it has the permissions checks\n> right. (Should the owner of a table be allowed to drop constraints on\n> child tables he doesn't own? If not, we should recurse to\n> AlterTableDropConstraint rather than calling RemoveRelChecks directly.)\n\nYeah - I hadn't gotten around to researching how to split it up yet. That's\nwhy I commented out the performDeletion. I'll do it soon.\n\n> Also, you have\n>\n> ! \t * Propagate to children if desired\n> ! \t * @@ THIS BEHAVIOR IS BROKEN, HOWEVER IT MATCHES RENAME COLUMN @@\n>\n> What's broken about it? RENAME probably is broken, in that it should\n> never allow non-recursion, but DROP doesn't have that issue.\n\nHrm. Yeah I guess the real problem is dropping a column that still exists\nin an ancestor. I got mixed up there. Is there a problem with this,\nthough:\n\nALTER TABLE ONLY ancestor DROP a;\n\nSeems a little dodgy to me...\n\n> Why not just\n>\n> \twhile (SearchSysCacheExists(...))\n> \t{\n> \t\tsnprintf(... ++n ...);\n> \t}\n\nAh yes.\n\n> More to the point, though, this is a waste of time. Since you're using\n> a physical column number there can be no conflict against the names of\n> other dropped columns. There might be a conflict against a user column\n> name, but the answer to that is to choose a weird name in the first\n> place; the hack above only avoids half of the problem, namely when the\n> user column already exists.\n\nProblem was, I couldn't find a generated name that was _impossible_ to enter\nas a user. So, I made sure it would never be a problem.\n\n> You're still wrong if he tries to do ALTER\n> TABLE ADD COLUMN ___pg_dropped1 later on. Therefore what you want to\n> do is use a name that is (a) very unlikely to be used in practice, and\n> (b) predictable so that we can document it and say \"tough, you can't use\n> that name\". 
I'd suggest something like\n> \t........pg.dropped.%d........\n> Note the use of dots instead of underscores --- we may as well pick a\n> name that's not valid as an unquoted identifier.\n\nNote that I already use spaces in my generated names, for this exact reason.\nBut I'll change it to your example above...\n\n> (Dashes or other\n> choices would work as well. I'm not in favor of control characters\n> or whitespace in the name though.)\n\nOK, no whitespace.\n\n> Although I think it was my suggestion to put nulls in the eref lists,\n> it sure looks messy. What happens if you just allow the dropped-column\n> names to be used in eref? After getting a match you'll need to check if\n> the column is actually valid, but I think maybe this can be combined\n> with lookups that will occur anyway. Probably it'd net out being fewer\n> changes in total.\n\nWell, that change was actually trivial after I thought it was going to be\ndifficult... When you say \"after getting a match need to check if actually\nvalid\", wasn't that what I proposed originally, but we decided it'd be\nfaster if the columns never even made it in? Where would you propose doing\nthese post hoc checks?\n\n> The proposed pg_dump change will NOT work, since it will cause pg_dump's\n> idea of column numbers not to agree with reality, thus breaking\n> pg_attrdef lookups and probably other places. Probably best bet is to\n> actually retrieve attisdropped (or constant 'f' for older releases),\n> store dropped attrs in the pg_dump datastructures anyway, and then omit\n> them at the time of listing out the table definition.\n\nYes, I'm actually aware of this. In fact, I was planning to make exactly\nthe change you have suggested here.\n\n> tab-complete query is wrong (try it)...\n\nOK, hadn't noticed that one. BTW, how do I make a dropped column disappear\nfrom the column-name-tab-complete-cache thingy in psql? 
I guess I'll see\nhow DROP TABLE works...\n\nChris\n\n", "msg_date": "Mon, 22 Jul 2002 10:22:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Demo patch for DROP COLUMN " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> What's broken about it? RENAME probably is broken, in that it should\n>> never allow non-recursion, but DROP doesn't have that issue.\n\n> Hrm. Yeah I guess the real problem is dropping a column that still exists\n> in an ancestor. I got mixed up there.\n\nYup, we need to figure out a way of preventing that. I've been thinking\nabout adding an attisinherited column to pg_attribute, to mark columns\nthat came from a parent table. Such a column could not be renamed or\ndropped except in a command that's recursed from the parent. (But what\nabout multiply-inherited columns?)\n\n> Is there a problem with this,\n> though:\n> ALTER TABLE ONLY ancestor DROP a;\n> Seems a little dodgy to me...\n\nSeems okay to me. The columns in the children would cease to be\ninherited and would become plain columns. That might mean going\naround and marking them so, if we adopt an explicit marking.\n\n>> the hack above only avoids half of the problem, namely when the\n>> user column already exists.\n\n> Problem was, I couldn't find a generated name that was _impossible_ to enter\n> as a user. So, I made sure it would never be a problem.\n\nBut you *didn't* make sure it would never be a problem.\n\n> Well, that change was actually trivial after I thought it was going to be\n> difficult... When you say \"after getting a match need to check if actually\n> valid\", wasn't that what I proposed originally,\n\nNo, your original code tested for validity before the strcmp, which is\nsurely the slowest possible way to do things.\n\n> Where would you propose doing these post hoc checks?\n\nNot sure yet. 
I'm just wondering whether you've found all the places\nthat will need to be tweaked to not dump core on nulls in the eref\nlists...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jul 2002 11:18:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Demo patch for DROP COLUMN " }, { "msg_contents": "> Yup, we need to figure out a way of preventing that. I've been thinking\n> about adding an attisinherited column to pg_attribute, to mark columns\n> that came from a parent table. Such a column could not be renamed or\n> dropped except in a command that's recursed from the parent. (But what\n> about multiply-inherited columns?)\n\nMany-to-many...\n\n> But you *didn't* make sure it would never be a problem.\n\nWasn't I looping until I found a unique name?? Dropping a column would\nnever fail in this case? Adding a column might, but I don't think that's\n_impossible_ to avoid.\n\n> > Where would you propose doing these post hoc checks?\n>\n> Not sure yet. I'm just wondering whether you've found all the places\n> that will need to be tweaked to not dump core on nulls in the eref\n> lists...\n\nWell have a squiz at the regression test I submitted and see if you can spot\nanything. I've attached the latest version of the patch where I've changed\nnaming to be like you suggested and improved code. Haven't looked at fixing\ndependencies yet. I've also fixed foreign keys and the copy command as well\nas pg_dump. 
The only command left is CREATE CONSTRAINT TRIGGER which I have\nto hunt down where the heck it actually is implemented.\n\nEven if you decide to change how the commands detect dropped columns (which\nI don't think there's terribly much point in doing), it is easy to see from\nmy patch all the places that need the change.\n\nChris", "msg_date": "Tue, 23 Jul 2002 09:35:48 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Demo patch for DROP COLUMN " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> But you *didn't* make sure it would never be a problem.\n\n> Wasn't I looping until I found a unique name?\n\nMy point was that there could still be a conflict against a user column\nthat the user tries to create *later*. So it's illusory to think that\nmaking the name of a dropped column less predictable will improve\nmatters.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jul 2002 11:42:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Demo patch for DROP COLUMN " }, { "msg_contents": "On Tue, 2002-07-23 at 20:42, Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> >> But you *didn't* make sure it would never be a problem.\n> \n> > Wasn't I looping until I found a unique name?\n> \n> My point was that there could still be a conflict against a user column\n> that the user tries to create *later*. 
So it's illusory to think that\n> making the name of a dropped column less predictable will improve\n> matters.\n\nThe simple (to describe, perhaps not to implement ;) way to resolve it\nwould be for the ADD COLUMN (or CREATE TABLE INHERITS) rename the\noffending deleted column once more.\n\n--------------\nHannu\n", "msg_date": "24 Jul 2002 00:56:39 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Demo patch for DROP COLUMN" }, { "msg_contents": "> > My point was that there could still be a conflict against a user column\n> > that the user tries to create *later*. So it's illusory to think that\n> > making the name of a dropped column less predictable will improve\n> > matters.\n> \n> The simple (to describe, perhaps not to implement ;) way to resolve it\n> would be for the ADD COLUMN (or CREATE TABLE INHERITS) rename the\n> offending deleted column once more.\n\nHah! What a wonderful idea! Now why didn't I think of that!\n\nChris\n\n", "msg_date": "Wed, 24 Jul 2002 09:45:06 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Demo patch for DROP COLUMN" } ]
[ { "msg_contents": "Hello!\n\ni've got a question on my problem..\ni'm using contrib/fulltextindex to index my tables to do a fulltext search\nbut i'm using non-default locale settings on my database.\ni found out that if the locale is the default 'C' then fulltextindex tables are\nscanned very fast using btree indices. for sure, i'm using a prefix match\n(string ~ '^blabla') which is supposed to scan the table in this way.. so\nit's ok!\nbut if i change to a different locale (i.e. sk_SK or cs_CZ) the fti table is\nscanned sequentially and it's very slow.. so, the more words there are in my\nfti table and the more words i'm trying to search at once using a table\njoin, the slower the search is..\n\nso my question is.. is this a bug, or does the prefix match have no support to\nperform a table scan using indices under a non-default locale?\nplease, give me a hint..\n\nregards\n\nTomas Lehuta\n\n\n", "msg_date": "Fri, 19 Jul 2002 15:14:15 +0200", "msg_from": "\"Tomas Lehuta\" <lharp@aurius.sk>", "msg_from_op": true, "msg_subject": "contrib/fulltextindex" } ]
[ { "msg_contents": "Folks,\n\nI've recently started a new series of articles at TechDocs (\nhttp://techdocs.postgresql.org ). This will be a biweekly (or\ntriweekly if I'm busy) column, with articles covering an entire range\nof issues around my professional support of PostgreSQL.\n\nCurrently there are two articles up, both oriented toward\nbeginning-intermediate PostgreSQL users:\n\n1. Restoring a corrupted Template 1 Database\n2. Part I of \"The Joy Of Index\"\n\nThere will be a few articles oriented toward advanced users, but most\nwill keep the focus of the first two. Please take a look, and check\nback regularly for new columns.\n\n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\nP.S. I'm posting this because the articles were inspired by questions I\nwas asked on these lists.\n\nP.P.S. I still need book review submissions for the PostgreSQL Book\nReview page at TechDocs!\n", "msg_date": "Fri, 19 Jul 2002 08:26:37 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Adventures in PostgreSQL" }, { "msg_contents": "\nI'm trying to create an index from a timestamp+tz field and want the index\nto be date_trunc'd down to just the date\n\nwhen i try to do a\n\ncreate index idxfoo on foo (date(footime));\n\ni get a \n\nERROR: DefineIndex: index function must be marked IMMUTABLE\n\nand it chokes when i try to use the date_trunc() function as well\n\ncreate index idxfoo on foo (date_trunc('day',footime));\n\nERROR: parser: parse error at or near \"'day'\" at character 53\n\nAny suggestions/workarounds (other than creating additional date-only\ncolumns in the schema and indexing those???)\n\n-d\n\n", "msg_date": "Mon, 27 Sep 2004 19:14:09 -0500 (CDT)", "msg_from": "\"D. Duccini\" <duccini@backpack.com>", "msg_from_op": false, "msg_subject": "date_trunc'd timestamp index possible?" }, { "msg_contents": "On Mon, Sep 27, 2004 at 19:14:09 -0500,\n \"D. 
Duccini\" <duccini@backpack.com> wrote:\n> \n> I'm trying to create an index from a timestamp+tz field and want the index\n> to be date_trunc'd down to just the date\n> \n> when i try to do a\n> \n> create index idxfoo on foo (date(footime));\n> \n> i get a \n> \n> ERROR: DefineIndex: index function must be marked IMMUTABLE\n> \n> and it chokes when i try to use the date_trunc() function as well\n> \n> create index idxfoo on foo (date_trunc('day',footime));\n> \n> ERROR: parser: parse error at or near \"'day'\" at character 53\n> \n> Any suggestions/workarounds (other than creating additional date-only\n> columns in the schema and indexing those???)\n\nThe reason this doesn't work is that the timestamp to date conversion\ndepends on the time zone setting. In theory you should be able to avoid\nthis by specifying the time zone to check the date in. I tried something\nlike the following which I think should work, but doesn't:\ncreate index idxfoo on foo (date(timezone('UTC',footime)));\n\nThe conversion of the timestamp stored in footime should be immutable\nand then taking the date should work. I did find that date of a timestamp\nwithout time zone is treated as immutable.\n\nI am not sure how to check if the supplied function for converting\na timestamp with time zone to a timestamp without timezone using a\nspecified time zone is immutable. I think this function should be\nimmutable, but that it probably isn't.\n", "msg_date": "Fri, 1 Oct 2004 13:28:30 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: date_trunc'd timestamp index possible?" }, { "msg_contents": "\n> The reason this doesn't work is that the timestamp to date conversion\n> depends on the time zone setting. In theory you should be able to avoid\n> this by specifying the time zone to check the date in. 
I tried something\n> like the following which I think should work, but doesn't:\n> create index idxfoo on foo (date(timezone('UTC',footime)));\n> \n> The conversion of the timestamp stored in footime should be immutable\n> and then taking the date should work. I did find that date of a timestamp\n> without time zone is treated as immutable.\n> \n> I am not sure how to check if the supplied function for converting\n> a timestamp with time zone to a timestamp without timezone using a\n> specified time zone is immutable. I think this function should be\n> immutable, but that it probably isn't.\n\nI think we found a way around it!\n\n\nCREATE OR REPLACE FUNCTION date_immutable( timestamptz ) RETURNS date AS\n'SELECT date( $1 ) ;' LANGUAGE 'sql' IMMUTABLE ;\n\nCREATE INDEX \"new_event_dt\" ON \"the_events\" USING btree (\ndate_immutable( \"event_dt_tm\" ) ) ;\n\n\n\n-----------------------------------------------------------------------------\ndavid@backpack.com BackPack Software, Inc. www.backpack.com\n+1 651.645.7550 voice \"Life is an Adventure. \n+1 651.645.9798 fax Don't forget your BackPack!\" \n-----------------------------------------------------------------------------\n\n", "msg_date": "Fri, 1 Oct 2004 13:28:47 -0500 (CDT)", "msg_from": "\"D. Duccini\" <duccini@backpack.com>", "msg_from_op": false, "msg_subject": "Re: date_trunc'd timestamp index possible?" }, { "msg_contents": "On Fri, Oct 01, 2004 at 13:28:30 -0500,\n Bruno Wolff III <bruno@wolff.to> wrote:\n> \n> I am not sure how to check if the supplied function for converting\n> a timestamp with time zone to a timestamp without timezone using a\n> specified time zone is immutable. I think this function should be\n> immutable, but that it probably isn't.\n\nI found that most of the various timezone functions are marked as stable\ninstead of immutable. 
I think at least a couple of these should be\nmarked as immutable and I will try reporting this as a bug.\n", "msg_date": "Fri, 1 Oct 2004 13:44:37 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: date_trunc'd timestamp index possible?" }, { "msg_contents": "Bruno Wolff III <bruno@wolff.to> writes:\n> I am not sure how to check if the supplied function for converting\n> a timestamp with time zone to a timestamp without timezone using a\n> specified time zone is immutable. I think this function should be\n> immutable, but that it probably isn't.\n\nYup. In 7.4:\n\nregression=# select provolatile from pg_proc where oid = 'timezone(text,timestamptz)'::regprocedure;\n provolatile\n-------------\n s\n(1 row)\n\nregression=#\n\nThis is a thinko that's already been corrected for 8.0:\n\nregression=# select provolatile from pg_proc where oid = 'timezone(text,timestamptz)'::regprocedure;\n provolatile\n-------------\n i\n(1 row)\n\nregression=#\n\nIf you wanted you could just UPDATE pg_proc to correct this mistake.\nAnother possibility is to create a function that's an IMMUTABLE\nwrapper around the standard function.\n\nLooking at this, I realize that date_trunc() is mismarked: the\ntimestamptz variant is strongly dependent on the timezone setting\nand so should be STABLE not IMMUTABLE. Ooops.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Oct 2004 14:49:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] date_trunc'd timestamp index possible? " }, { "msg_contents": "\"D. Duccini\" <duccini@backpack.com> writes:\n> I think we found a way around it!\n\n> CREATE OR REPLACE FUNCTION date_immutable( timestamptz ) RETURNS date AS\n> 'SELECT date( $1 ) ;' LANGUAGE 'sql' IMMUTABLE ;\n\nNo, you just found a way to corrupt your index. Pretending that\ndate(timestamptz) is immutable does not make it so. 
The above\n*will* break the first time someone uses the table with a different\ntimezone setting.\n\nWhat you can do safely is date(footime AT TIME ZONE 'something'),\nsince this nails down the zone in which the date is interpreted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Oct 2004 17:17:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: date_trunc'd timestamp index possible? " }, { "msg_contents": "I wrote:\n> Looking at this, I realize that date_trunc() is mismarked: the\n> timestamptz variant is strongly dependent on the timezone setting\n> and so should be STABLE not IMMUTABLE. Ooops.\n\nOn looking more closely, I think that all of these functions are\nmislabeled:\n\n oid | prorettype | prosrc | provolatile \n\nshould be stable not immutable:\n date_trunc(text,timestamptz) | timestamptz | timestamptz_trunc\t | i\n interval_pl_timestamptz(interval,timestamptz) | timestamptz | select $2 + $1\t | i\n timestamptz_pl_interval(timestamptz,interval) | timestamptz | timestamptz_pl_interval\t| i\n timestamptz_mi_interval(timestamptz,interval) | timestamptz | timestamptz_mi_interval\t| i\n \"overlaps\"(timestamptz,timestamptz,timestamptz,interval) | boolean | select ($1, $2) overlaps ($3, ($3 + $4))\t| i\n \"overlaps\"(timestamptz,interval,timestamptz,timestamptz) | boolean | select ($1, ($1 + $2)) overlaps ($3, $4)\t| i\n \"overlaps\"(timestamptz,interval,timestamptz,interval) | boolean | select ($1, ($1 + $2)) overlaps ($3, ($3 + $4))\t| i\n\nshould be immutable not stable:\n to_char(timestamp,text) | text | timestamp_to_char | s\n timestamptz(abstime) | timestamptz | abstime_timestamptz\t| s\n abstime(timestamptz) | abstime | timestamptz_abstime\t| s\n\nIt's easy to demonstrate that timestamptz+interval is dependent on the\ntimezone setting:\n\nregression=# set timezone = 'EST5EDT';\nSET\nregression=# select '2004-03-31 00:00-05'::timestamptz + '1 month'::interval;\n ?column?\n------------------------\n 
2004-04-30 00:00:00-04\n(1 row)\n\nregression=# set timezone = 'GMT';\nSET\nregression=# select '2004-03-31 00:00-05'::timestamptz + '1 month'::interval;\n ?column?\n------------------------\n 2004-04-30 05:00:00+00\n(1 row)\n\nand then the overlaps variants have to follow along.\n\nOn the other side of the coin, I don't think that to_char has any\ndependency on timezone when it is dealing with a timestamp without time\nzone. (If you ask it for TZ you always get an empty string.) Likewise\nthere's no such dependency in abstime/timestamptz conversions.\n\nDo you see any other mislabelings?\n\nWhat I'm inclined to do with these is change pg_proc.h but not force an\ninitdb. Does anyone want to argue for an initdb to force it to be fixed\nin 8.0? We've lived with the wrong labelings for some time now without\nnoticing, so it doesn't seem like a serious enough bug to force a\npost-beta initdb ... to me anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Oct 2004 18:53:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Mislabeled timestamp functions (was Re: [SQL] [NOVICE] date_trunc'd\n\ttimestamp index possible?)" }, { "msg_contents": "On Fri, Oct 01, 2004 at 18:53:03 -0400,\n Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> What I'm inclined to do with these is change pg_proc.h but not force an\n> initdb. Does anyone want to argue for an initdb to force it to be fixed\n> in 8.0? We've lived with the wrong labelings for some time now without\n> noticing, so it doesn't seem like a serious enough bug to force a\n> post-beta initdb ... 
to me anyway.\n\nAs long as it is mentioned in the release notes, it doesn't seem worth\nforcing an initdb.\n", "msg_date": "Fri, 1 Oct 2004 19:50:08 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd timestamp index possible?)" }, { "msg_contents": "Tom Lane wrote:\n> I wrote:\n> Do you see any other mislabelings?\n\nI don't, but I think that the concept of immutable should be expanded.\nI mean I can safely use an immutable date_trunc in a query ( I think this\nis a sort of \"immutable per statement\" ) but not in an index definition\n( the index maintenance is affected by the current timezone settings ).\nSo maybe another modifier should be introduced that reflects the \"immutable\nper statement\"\n\n> What I'm inclined to do with these is change pg_proc.h but not force an\n> initdb. Does anyone want to argue for an initdb to force it to be fixed\n> in 8.0? We've lived with the wrong labelings for some time now without\n> noticing, so it doesn't seem like a serious enough bug to force a\n> post-beta initdb ... to me anyway.\n\nI think that an initdb is not required, but at least a script, released\nonly with 8.0, that will update the catalogs could be useful.\n\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Sat, 02 Oct 2004 10:43:01 +0200", "msg_from": "Gaetano Mendola <mendola@bigfoot.com>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd" }, { "msg_contents": "Tom Lane wrote:\n> What I'm inclined to do with these is change pg_proc.h but not force\n> an initdb. Does anyone want to argue for an initdb to force it to be\n> fixed in 8.0? We've lived with the wrong labelings for some time now\n> without noticing, so it doesn't seem like a serious enough bug to\n> force a post-beta initdb ... to me anyway.\n\nI'd prefer if all users of 8.0 were guaranteed to have the same catalog. 
\nI don't want to ask users, \"what version, and when did you last \ninitdb\". We're still in beta; no one purchased any stability \nguarantees.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n\n", "msg_date": "Sat, 2 Oct 2004 19:41:29 +0200", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd timestamp index possible?)" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane wrote:\n>> What I'm inclined to do with these is change pg_proc.h but not force\n>> an initdb. Does anyone want to argue for an initdb to force it to be\n>> fixed in 8.0? We've lived with the wrong labelings for some time now\n>> without noticing, so it doesn't seem like a serious enough bug to\n>> force a post-beta initdb ... to me anyway.\n\n> I'd prefer if all users of 8.0 were guaranteed to have the same catalog. \n\nWell, there's something to be said for that viewpoint too. Anyone else\nfeel the same?\n\nIf we do go for an initdb, I'd like at the same time to do something\nI had intended to do but forgotten, which is to yank the functions\nand operators for basic arithmetic on type \"char\", and instead put in\n(explicit) conversions between \"char\" and integer. 
See for instance\nhttp://archives.postgresql.org/pgsql-sql/2002-11/msg00116.php\nhttp://archives.postgresql.org/pgsql-general/2004-08/msg01562.php\nhttp://archives.postgresql.org/pgsql-general/2004-09/msg01209.php\n\nSpecifically I want to remove these operators:\n\n oid | oid | oprresult \n-----+-------------------+-----------\n 635 | +(\"char\",\"char\") | \"char\"\n 636 | -(\"char\",\"char\") | \"char\"\n 637 | *(\"char\",\"char\") | \"char\"\n 638 | /(\"char\",\"char\") | \"char\"\n\nand their underlying functions:\n\n oid | oid | prorettype | prosrc \n------+--------------------------+------------+-------------\n 1248 | charpl(\"char\",\"char\") | \"char\" | charpl\n 1250 | charmi(\"char\",\"char\") | \"char\" | charmi\n 77 | charmul(\"char\",\"char\") | \"char\" | charmul\n 78 | chardiv(\"char\",\"char\") | \"char\" | chardiv\n\nThe following operators on \"char\" will remain:\n\n oid | oid | oprresult \n-----+-------------------+-----------\n 92 | =(\"char\",\"char\") | boolean\n 630 | <>(\"char\",\"char\") | boolean\n 631 | <(\"char\",\"char\") | boolean\n 632 | <=(\"char\",\"char\") | boolean\n 633 | >(\"char\",\"char\") | boolean\n 634 | >=(\"char\",\"char\") | boolean\n\nThese are not as dangerous as the arithmetic operators, because in a\nsituation where the parser is having difficulty resolving types, it\nwill prefer the \"text\" comparison operators over these. 
The reason\nthe \"char\" arithmetic operators are dangerous is that they are the only\nones of those names in the STRING type category.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Oct 2004 14:22:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd timestamp index possible?)" }, { "msg_contents": "On Sat, Oct 02, 2004 at 10:43:01 +0200,\n  Gaetano Mendola <mendola@bigfoot.com> wrote:\n> Tom Lane wrote:\n> >I wrote:\n> >Do you see any other mislabelings?\n> \n> I don't, but I think that the concept of immutable should be expanded.\n> I mean I can safely use an immutable date_trunc in a query ( I think this\n> is a sort of \"immutable per statement\" ) but not in an index definition\n> ( the index maintenance is affected by the current timezone settings ).\n> So maybe another modifier should be introduced that reflects the \"immutable\n> per statement\" idea.\n\nThere has been such a distinction for a major release or two. \"Stable\"\nis how you mark a function that will return the same value within a\nsingle transaction.\n", "msg_date": "Sat, 2 Oct 2004 15:04:51 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd" }, { "msg_contents": "On Sat, Oct 02, 2004 at 15:04:51 -0500,\n  Bruno Wolff III <bruno@wolff.to> wrote:\n> On Sat, Oct 02, 2004 at 10:43:01 +0200,\n> \n> There has been such a distinction for a major release or two. \"Stable\"\n> is how you mark a function that will return the same value within a\n> single transaction.\n\nI should have said within a single statement instead of within a single\ntransaction.\n", "msg_date": "Sat, 2 Oct 2004 20:16:41 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Sat, Oct 02, 2004 at 15:04:51 -0500,\n> Bruno Wolff III <bruno@wolff.to> wrote:\n> \n>>On Sat, Oct 02, 2004 at 10:43:01 +0200,\n>>\n>>There has been such a distinction for a major release or two. \"Stable\"\n>>is how you mark a function that will return the same value within a\n>>single transaction.\n> \n> \n> I should have said within a single statement instead of within a single\n> transaction.\n\nI know that, but a stable function is not called just once inside the same\nquery, whereas an immutable one is:\n\nsp_immutable() is a simple immutable function\nsp_stable() is a simple stable function\nsp_foo() is a simple function\n\ntest is a table with two rows in it.\n\nregression=# select sp_stable(), sp_immutable(), sp_foo() from test;\nNOTICE:  sp_immutable called\nNOTICE:  sp_stable called\nNOTICE:  sp_foo called\nNOTICE:  sp_stable called\nNOTICE:  sp_foo called\n sp_stable | sp_immutable | sp_foo\n-----------+--------------+--------\n         0 |            0 |      0\n         0 |            0 |      0\n(2 rows)\n\n\nso now do you see what I mean?\n\nThe stable function is treated as \"stable\" only if used inside a filter:\n\nregression=# select * from test where sp_stable() = 3;\nNOTICE:  sp_stable called\n a\n---\n(0 rows)\n\n\nand from this point of view immutable is not immutable enough:\n\nregression=# select sp_immutable() from test where sp_immutable() = 3;\nNOTICE:  sp_immutable called\nNOTICE:  sp_immutable called\n sp_immutable\n--------------\n(0 rows)\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Sun, 03 Oct 
2004 13:18:27 +0200", "msg_from": "Gaetano Mendola <mendola@bigfoot.com>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd" }, { "msg_contents": "Not that my 2c is worth 1c, but I second this. I'd rather initdb now\nthan get bitten by some catalog difference when I move my DB into\nproduction. :)\n\n--miker\n\nOn Sat, 02 Oct 2004 14:22:50 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n[...]\n> \n> > I'd prefer if all users of 8.0 were guaranteed to have the same catalog.\n> \n> Well, there's something to be said for that viewpoint too. Anyone else\n> feel the same?\n[...]\n", "msg_date": "Sun, 3 Oct 2004 10:49:12 -0400", "msg_from": "Mike Rylander <mrylander@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd timestamp index possible?)" }, { "msg_contents": "Oliver Jowett <oliver@opencloud.com> writes:\n> Bruno Wolff III wrote:\n>> I should have said within a single statement instead of within a single\n>> transaction.\n\n> As I understand Tom's earlier explanation of this, the definition is \n> even more narrow: stable functions only need to return the same value \n> across a single tablescan.\n\n> It might be useful to have some variant of stable (or perhaps just a \n> change in semantics) such that the function returns the same value for \n> identical parameters until the next CommandCounterIncrement.\n\nIn practice I think these are equivalent definitions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Oct 2004 17:27:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Stable function semantics (was Re: Mislabeled timestamp\n\tfunctions)" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane wrote:\n> >> What I'm inclined to do with these is change pg_proc.h but not force\n> >> an initdb. 
Does anyone want to argue for an initdb to force it to be\n> >> fixed in 8.0? We've lived with the wrong labelings for some time now\n> >> without noticing, so it doesn't seem like a serious enough bug to\n> >> force a post-beta initdb ... to me anyway.\n> \n> > I'd prefer if all users of 8.0 were guaranteed to have the same catalog. \n> \n> Well, there's something to be said for that viewpoint too. Anyone else\n> feel the same?\n\nI would wonder about any users that are happily using beta3 with test data and\nupgrade to 8.0 without any problems but at some point later have trouble\nrestoring from a pg_dump.\n\n> Specifically I want to remove these operators:\n> \n> oid | oid | oprresult \n> -----+-------------------+-----------\n> 635 | +(\"char\",\"char\") | \"char\"\n> 636 | -(\"char\",\"char\") | \"char\"\n> 637 | *(\"char\",\"char\") | \"char\"\n> 638 | /(\"char\",\"char\") | \"char\"\n> ...\n> The reason the \"char\" arithmetic operators are dangerous is that they are\n> the only ones of those names in the STRING type category.\n\nWhat would happen if \"char\" were just removed from the STRING type category?\nOr alternatively if it were broken out into two data types, \"char\" which\ndidn't have these operators, and int1 which only had these operators and not\nall the string operators?\n\nIt does seem like having a fixed size 1 byte integer data type would be\nsomething appealing. Personally I find a lot of demand in my database models\nfor status flags that have very few possible states (often only two but I\ndon't want to limit myself with a boolean and booleans don't behave nicely\nwith any other application language since they return 't' and 'f'). 
I could\neasily see some very large table where someone wants to store lots of small\nintegers that need some arithmetic capabilities.\n\n-- \ngreg\n\n", "msg_date": "03 Oct 2004 17:49:32 -0400", "msg_from": "Greg Stark <gsstark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd timestamp index possible?)" }, { "msg_contents": "Greg Stark <gsstark@mit.edu> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> The reason the \"char\" arithmetic operators are dangerous is that they are\n>> the only ones of those names in the STRING type category.\n\n> What would happen if \"char\" were just removed from the STRING type category?\n\nWhat other category would you put it in? The I/O behavior of \"char\"\nis certainly not very conducive to thinking of it as storing integral\nvalues, anyway.\n\n> Or alternatively if it were broken out into two data types, \"char\" which\n> didn't have these operators, and int1 which only had these operators and not\n> all the string operators?\n\nI don't have an objection in principle to an int1 datatype, but there\nare a couple of practical objections; primarily that that looks way too\nmuch like a new feature for this point in the 8.0 cycle. 
(I seem to\nrecall having once had concerns about unexpected side effects from\nadding another set of overloaded operators to the NUMERIC category, too;\nbut I'm not sure if that's still relevant given the resolution-rule\nchanges we've made in the last couple releases.)\n\nEven with an int1 datatype, I'm not sure it makes sense to provide\narithmetic operators specifically for the type, as opposed to providing\nimplicit coercions to \"integer\" and letting the actual arithmetic\nhappen at integer width.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Oct 2004 18:37:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mislabeled timestamp functions (was Re: [SQL] [NOVICE]\n\tdate_trunc'd timestamp index possible?)" } ]
[ { "msg_contents": "Shouldn't the effect of ALTER TABLE ALTER COLUMN SET STATISTICS be\nrecorded by pg_dump?\n\nFor example:\n\nCREATE TABLE foo (col1 int, col2 int);\n\nALTER TABLE foo ALTER COLUMN col1 SET STATISTICS 100;\n\n$ pg_dumpall\n\nAfter removing the database and restoring from the dump, the effect of\nthe ALTER TABLE ALTER COLUMN SET STATISTICS will be lost...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 19 Jul 2002 12:27:10 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "preserving statistics settings" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> Shouldn't the effect of ALTER TABLE ALTER COLUMN SET STATISTICS be\n> recorded by pg_dump?\n\nProbably. I was dithering about whether that should be true/false/\ncontrollable, and never got 'round to doing anything at all. One\nproblem is that I didn't really want to enshrine the current default\nvalue in a bunch of pg_dump scripts, in case we decide it's too small.\nSo we shouldn't dump SET STATISTICS commands always.\n\nIf there were a reliable way to tell whether the attstattarget value had\nactually been set by the user, or was merely a default, it'd be easier\nto determine what pg_dump should do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jul 2002 12:39:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preserving statistics settings " }, { "msg_contents": "On Fri, Jul 19, 2002 at 12:39:16PM -0400, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > Shouldn't the effect of ALTER TABLE ALTER COLUMN SET STATISTICS be\n> > recorded by pg_dump?\n\n> One problem is that I didn't really want to enshrine the current\n> default value in a bunch of pg_dump scripts, in case we decide it's\n> too small.\n\nGood point.\n\n> If there were a reliable way to tell whether the attstattarget value had\n> 
actually been set by the user, or was merely a default, it'd be easier\n> to determine what pg_dump should do.\n\nHmmm... we could allow SET STATISTICS to take 'DEFAULT' easily enough\n(and that might even be a good idea in any case), but you're right --\nwithout a way for pg_dump to determine the default value and/or whether\nthe admin has explicitly changed the attstattarget for that column,\nthere's not much that can be done...\n\nI suppose we could hard-code the current default value into pg_dump,\nbut that's pretty ugly.\n\nDoes anyone have a better suggestion?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 19 Jul 2002 12:51:43 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: preserving statistics settings" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> Hmmm... we could allow SET STATISTICS to take 'DEFAULT' easily enough\n> (and that might even be a good idea in any case), but you're right --\n> without a way for pg_dump to determine the default value and/or whether\n> the admin has explicitly changed the attstattarget for that column,\n> there's not much that can be done...\n\n> I suppose we could hard-code the current default value into pg_dump,\n> but that's pretty ugly.\n\nAlso, there's no guarantee that pg_dump would know what default the\nbackend had been compiled with, so it might make the wrong conclusion\nanyway.\n\n> Does anyone have a better suggestion?\n\nNot sure why I didn't think of this before, but we could make the stored\nvalue of attstattarget be \"-1\" to indicate \"use the default\". Zero or a\npositive value then indicates an explicit selection. 
We can't\nretroactively fix 7.2, but going forward we'd have a reasonable answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jul 2002 13:28:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preserving statistics settings " }, { "msg_contents": "On Fri, Jul 19, 2002 at 01:28:29PM -0400, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > Does anyone have a better suggestion?\n> \n> Not sure why I didn't think of this before, but we could make the stored\n> value of attstattarget be \"-1\" to indicate \"use the default\". Zero or a\n> positive value then indicates an explicit selection. We can't\n> retroactively fix 7.2, but going forward we'd have a reasonable answer.\n\nSounds good to me. I can implement this, unless you'd rather do it.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 19 Jul 2002 17:05:40 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: preserving statistics settings" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> On Fri, Jul 19, 2002 at 01:28:29PM -0400, Tom Lane wrote:\n>> Not sure why I didn't think of this before, but we could make the stored\n>> value of attstattarget be \"-1\" to indicate \"use the default\". Zero or a\n>> positive value then indicates an explicit selection. We can't\n>> retroactively fix 7.2, but going forward we'd have a reasonable answer.\n\n> Sounds good to me. I can implement this, unless you'd rather do it.\n\nGo for it. Note you should be able to remove genbki.sh's knowledge of\nDEFAULT_ATTSTATTARGET, as well as all the occurrences of that symbol\nin include/catalog/*.h. 
Offhand I guess only analyze.c should refer\nto that symbol once it's done right.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jul 2002 17:20:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preserving statistics settings " }, { "msg_contents": "BTW, checking my notes about this issue, I notice a related question:\nshould attstattarget be inherited for the inherited columns when a child\ntable is created? Right now it's not, and it might be a bit ugly to\nmake it so.\n\nIn a lot of scenarios you wouldn't necessarily expect the parent and\nchild tables to have similar contents, so I'm not sure inheriting\nattstattarget is appropriate anyway. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jul 2002 18:52:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preserving statistics settings " }, { "msg_contents": "On Fri, 2002-07-19 at 18:52, Tom Lane wrote:\n> BTW, checking my notes about this issue, I notice a related question:\n> should attstattarget be inherited for the inherited columns when a child\n> table is created? Right now it's not, and it might be a bit ugly to\n> make it so.\n> \n> In a lot of scenarios you wouldn't necessarily expect the parent and\n> child tables to have similar contents, so I'm not sure inheriting\n> attstattarget is appropriate anyway. Comments anyone?\n\nIn any of my uses of inheritance the contents have been drastically\ndifferent between the various tables. Not that I've played much with\nthe statistics stuff, but I don't think blindly applying the value would\nbe good.\n\n", "msg_date": "19 Jul 2002 21:02:48 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: preserving statistics settings" } ]
[ { "msg_contents": "This is a repost. I originally posted to the novice list,\nbut it seems to be a very low traffic list, and no one seems\nto have noticed my message. Then I posted to the General\nList, where I was kindly advised to try the hackers list. So\nhere I am.\n------------------------------------------------------------\n\nHello! I'm a PGSQL newbie. I installed postgres only a\nfew days ago in an attempt to use it to solve a specific\nproblem.\n\nI am using (trying to...) PGSQL to store a database of\ndigital signals. Each signal is a sequence of (signal_id,\ntimestamp, double) tuples. I've managed to write resampling\nalgorithms in pl/pgsql, and I don't think it would be hard\nto write autoregressive filters. However, now I'm confronted\nwith the need to compute the power spectra of my signals. I\nwould like to use FFTW, which is lightning fast on my\nmachine. Has anyone already written FFTW bindings for\nPostgreSQL?\n\nIf I have to write the code myself, I would need to create a\ndatabase function calling code from a C module. Such code\nwould have to operate on real and complex float arrays. I\nunderstand how I could use a pl/pgsql function to create a\nnew table where each signal is stored as a (signal_id,\ndouble array) tuple, but how am I supposed to pass such\narrays to a C function? How are postgres arrays actually\nimplemented in memory? In short, I need someone to get me\nstarted on writing an FFTW binding for pgsql, if none is\nalready available.\n\nThank you in advance for any help you can give me. And\ndouble thumbs up to the developers: running PostgreSQL for\nthe first time is an epiphanic experience. I want to study\nthe ins and outs of it rapidly so that, hopefully, in a\nwhile, I will be able to contribute to the pgsql project.\n\nAlex Baretta\n\n\n", "msg_date": "Fri, 19 Jul 2002 21:23:45 +0200", "msg_from": "Alessandro Baretta <alex@baretta.com>", "msg_from_op": true, "msg_subject": "Arrays and FFTW" }, { "msg_contents": "Alessandro Baretta <alex@baretta.com> writes:\n> If I have to write the code myself, I would need to create a\n> database function calling code from a C module. Such code\n> would have to operate on real and complex float arrays. I\n> understand how I could use a pl/pgsql function to create a\n> new table where each signal is stored as a (signal_id,\n> double array) tuple, but how am I supposed to pass such\n> arrays to a C function? How are postgres arrays actually\n> implemented in memory? In short, I need someone to get me\n> started on writing an FFTW binding for pgsql, if none is\n> already available.\n\nYou're intending to store each complete signal as a big array in one\nrow? That could get a bit ugly if the signals are very large (many\nmegabytes). But if you want to do it that way, I think the coding\nwould be pretty straightforward. See src/backend/utils/adt/float.c\nfor some examples of C functions that process arrays of floats ---\nthe \"FLOAT AGGREGATE OPERATORS\" section is relevant.\nsrc/include/utils/array.h is relevant reading as well.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jul 2002 16:52:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Arrays and FFTW " }, { "msg_contents": "Tom Lane wrote:\n >\n> You're intending to store each complete signal as a big array in one\n> row? That could get a bit ugly if the signals are very large (many\n> megabytes). But if you want to do it that way, I think the coding\n> would be pretty straightforward. See src/backend/utils/adt/float.c\n> for some examples of C functions that process arrays of floats ---\n> the \"FLOAT AGGREGATE OPERATORS\" section is relevant.\n> src/include/utils/array.h is relevant reading as well.\n\nOk. I'll take a look at them.\n\nWhat is going to be ugly, exactly? To be precise, my \napplication requires storing samples obtained at random \nintervals, and resampling them at 1Hz. My pgsql installation \nis already managing 3 months worth of data (1.8 Mrows), \nimported from MS/Excel x-( . But power spectra are only \ninteresting for a moving window about 8-12 hours long (9.1h \nyields an array of 2**15 samples, whose FFT can be computed \nin about 2ms on an Athlon 1200).\n\nMy problem is that I need to know how the arrays are \nrepresented in memory, and how they are passed to my \nfunction. Also, I'll need to define an abstract type for the \nexecution plan of the DFT algorithm--yes, fftw uses execution \nplans just like PostgreSQL :-) --and I will need to store \nthem in the database. This seems a little tricky to me. As I \nstated earlier, I have already learned SQL and PL/pgSQL \nfunctions, but I still have to learn how to create a C \nfunction. Anyway, I'm working on it, and I hope to be able \nto share my code with you guys, for possible inclusion in \nthe distribution.\n\nAlex\n\n", "msg_date": "Fri, 19 Jul 2002 23:48:59 +0200", "msg_from": "Alessandro Baretta <alex@baretta.com>", "msg_from_op": true, "msg_subject": "Re: Arrays and FFTW" }, { "msg_contents": "Alessandro Baretta wrote:\n> My problem is that I need to know how the arrays are represented in \n> memory, and how they are passed to my function. Also, I'll need to \n> define an abstract type for the execution plan of the DFT algorithm--yes, \n> fftw uses execution plans just like PostgreSQL :-) --and I will need to \n> store them in the database. This seems a little tricky to me. 
As I \n> stated earlier, I have already learned SQL and PL/pgSQL functions, but I \n> still have to learn how to create a C function. Anyway, I'm working on \n> it, and I hope to be able to share my code with you guys, for possible \n> inclusion in the distribution.\n\nAlso see contrib/array, contrib/dblink, and contrib/intarray (and \nprobably others under contrib) for user function examples which handle \narrays. And be sure to see:\n http://www.postgresql.org/idocs/index.php?xfunc-c.html\nfor general information on user C language functions.\n\nJoe\n\n", "msg_date": "Fri, 19 Jul 2002 23:18:53 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Arrays and FFTW" }, { "msg_contents": "Joe Conway wrote:\n> \n> Also see contrib/array, contrib/dblink, and contrib/intarray (and \n> probably others under contrib) for user function examples which handle \n> arrays. And be sure to see:\n> http://www.postgresql.org/idocs/index.php?xfunc-c.html\n> for general information on user C language functions.\n> \n> Joe\n\nI read that already, but it does not mention how arrays are \npassed to a C function. I need some examples. Now I have \nsome places to look for. Now, I'll download the source and \ntake a look at it. Hopefully, it will not be too difficult \nto figure out how arrays are passed to C functions. The docs \n*seem* to imply that I will get a blob of memory where the \nfirst 4 bytes are the size_t of the array, and the rest is \nthe array itself. However, this is not clearly stated, so I \ncould have misunderstood this point. PostgreSQL is a great \nthing, but the documentation is a little too tight in a few \nspots. I managed to wade through most of it--I'd hate to get \nan RTFM from you guys--but I wish it were a little more \nexplicative.\n\nThanks to everybody for the kind help. Good work. 
I'll get \nback to mine momentarily.\n\nAlex\n\n", "msg_date": "Sat, 20 Jul 2002 10:30:30 +0200", "msg_from": "Alessandro Baretta <alex@baretta.com>", "msg_from_op": true, "msg_subject": "Re: Arrays and FFTW" }, { "msg_contents": "Matthew T. O'Connor wrote:\n\n> Why are you using plpgsql for this?\n\nSince I prefer to store my data in a database rather than in a \nfile, because this allows me to handle concurrency in a very \nnatural way, I want to keep my application code, insofar as \npossible, together with my data.\n\n > You can write it in C.\n\nI could use a Turing Machine if I cared to. The fact is that \nC is not my favorite language. I do not feel compelled to \nuse C, in this case, except to allow me to interface pgsql \nwith FFTW, the Fastest Fourier Transform in the West.\n\nAlex\n\n", "msg_date": "Sat, 20 Jul 2002 16:04:14 +0200", "msg_from": "Alessandro Baretta <alex@baretta.com>", "msg_from_op": true, "msg_subject": "Re: Arrays and FFTW" } ]
[ { "msg_contents": "I looked through your CREATE CAST commit a little. Looks pretty good\nbut I had a few suggestions/concerns.\n\n* The biggie is that I'm not satisfied with the permissions checking.\nYou have \"To be able to create a cast, you must own the underlying\nfunction. To be able to create a binary compatible cast, you must own\nboth the source and the target data type.\" (Same to drop a cast.)\nIt seems to me that this is quite dangerous, as any random luser who\ncan access both datatypes can create a function and then a cast,\nthereby changing the behavior of *both* types in rather subtle ways,\nand perhaps insinuating unexpected, untrusted code into queries issued\nby people who weren't expecting any cast to be applied.\n(Not to mention causing a denial-of-service for the legitimate definer\nof that cast, if he hadn't gotten around to making it quite yet.)\nThe problem would be slightly less bad if function definition required\nUSAGE privilege on the arg/result types ... but not much.\n\nI think I would prefer this definition: to create/drop a cast, you must\nown both datatypes, plus the underlying function if any. What's the\nrationale for having such a weak permissions scheme?\n\n\n* I see that you are worried in pg_dump about which schema to associate\na cast with, if it's binary-compatible. I'm confused too; would it work\nto dump the cast iff either of the source/dest datatypes are to be\ndumped? (This might be a better rule even in the not-binary-compatible\ncase.)\n\n\n* Various more-or-less minor coding gripes:\n\n* pg_cast table not documented in catalogs.sgml.\n\n* shoddy implementation of getObjectDescription() for cast. 
(I have\nsome to-do items in that routine myself, though, and will be happy to fix\nthis while at it.)\n\n* since pg_cast depends on having a unique OID, it *must* have an OID\nindex to enforce that uniqueness; otherwise the system can fail after\nOID wraparound.\n\n* since you must define the OID index anyway, you may as well use it in\na systable scan in DropCastById, instead of using heapscan.\n\n* in CreateCast, there's no need to use the relatively expensive \nget_system_catalog_relid() lookup; you've got pg_cast open and so\nyou can just do RelationGetRelid on it.\n\n\t\t\tregards, tom lane\n\nPS: I also want to raise a flag that we still haven't resolved the\nissues we discussed a few months ago, about exactly *which* implicit\ncasts should be provided. I think that's a different thread though.\n", "msg_date": "Sat, 20 Jul 2002 20:11:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "CREATE CAST code review" }, { "msg_contents": "Tom Lane writes:\n\n> I looked through your CREATE CAST commit a little. Looks pretty good\n> but I had a few suggestions/concerns.\n>\n> * The biggie is that I'm not satisfied with the permissions checking.\n\nMe neither. I had sent a message earlier about this but it went\nunnoticed, so I had to implement something that was a little more\nrestrictive than the previous first come first serve scheme.\n\n> I think I would prefer this definition: to create/drop a cast, you must\n> own both datatypes, plus the underlying function if any. What's the\n> rationale for having such a weak permissions scheme?\n\nThat doesn't quite work, because then no ordinary user can define a cast\nfrom some built-in type to his own type. What I'm thinking about is to\nimplement the USAGE privilege on types, and then you need to have that to\nbe allowed to create casts. 
(Possibly there should be an even more\nrestrictive privilege invented here, but that would be a second step.)\n\n> * I see that you are worried in pg_dump about which schema to associate\n> a cast with, if it's binary-compatible. I'm confused too; would it work\n> to dump the cast iff either of the source/dest datatypes are to be\n> dumped? (This might be a better rule even in the not-binary-compatible\n> case.)\n\nI'm not sure about the implications of associating objects with schemas in\npg_dump. I suppose there might be an option to dump only certain schemas,\nin which case it's tricky to associate a cast to any one schema.\n\n> PS: I also want to raise a flag that we still haven't resolved the\n> issues we discussed a few months ago, about exactly *which* implicit\n> casts should be provided. I think that's a different thread though.\n\nYes, I have some thoughts on that which I plan to present soon.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 22 Jul 2002 20:35:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CREATE CAST code review" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> That doesn't quite work, because then no ordinary user can define a cast\n> from some built-in type to his own type. What I'm thinking about is to\n> implement the USAGE privilege on types, and then you need to have that to\n> be allowed to create casts.\n\nStill seems too weak. What about requiring ownership of at least one\nof the types?\n\n> I'm not sure about the implications of associating objects with schemas in\n> pg_dump. 
I suppose there might be an option to dump only certain schemas,\n\nThat is the intention (it's not implemented yet).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jul 2002 16:06:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CREATE CAST code review " }, { "msg_contents": "Tom Lane writes:\n\n> What about requiring ownership of at least one of the types?\n\nYes, that would work.\n\nThere would be a somewhat bizarre consequence, though: User U1 creates\ntype T1, user U2 creates type T2. Then user U1 creates a cast from T1 to\nT2. Now user U2 would be allowed to drop that cast (unless we store a\ncast owner). I guess that lies in the nature of things.\n\nA much more complex yet powerful alternative would be to associate casts\nwith schemas. For example, this would allow an ordinary user to create a\ncast from numeric to text in his own little world. But that might be\ngoing too far.\n\n> > I'm not sure about the implications of associating objects with schemas in\n> > pg_dump. I suppose there might be an option to dump only certain schemas,\n>\n> That is the intention (it's not implemented yet).\n\nMy concern was that if you, say, have two schemas and a cast that involves\ntypes from both schemas. If you dump all of them, you have a consistent\ndump. But if you dump both schemas separately, do you dump the cast in\nboth of them (thus making each schema's dump self-contained) or in only\none of them (thus allowing concatenation of the dumps). This issue\ngeneralizes to every kind of dependency in pg_dump.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 27 Jul 2002 20:05:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CREATE CAST code review " } ]
[ { "msg_contents": "Now that we have dependencies implemented, it would be a real good idea\nif operator classes could be included in the web of dependencies.\nHowever, the pg_depends code implicitly assumes that it can do a DROP,\nif demanded, for any entity referenced by a dependency link. This means\nwe need DROP OPERATOR CLASS.\n\nI am thinking of picking up Bill Studenmund's patch from last fall for\nCREATE OPERATOR CLASS and bringing it forward into current sources,\nand adding DROP support too.\n\nIn our last episode, Peter said:\n> I'm having a few issues with the syntax.\n\nI *think* what we had agreed to was\n\nCREATE OPERATOR CLASS name [ DEFAULT ] FOR TYPE type USING accessmethod\nAS { FUNCTION number name(parms)\n | OPERATOR number name [ ( type, type ) ] [ RECHECK ]\n | STORAGE typename\n } [, ...]\n\n... at least that's the last message in the thread I can find in my\narchives. Anyone unhappy with this? (Obviously the names will now\nneed to include optional schema qualification.)\n\nDrop will need to look approximately like\n\nDROP OPERATOR CLASS name USING accessmethod\n\nsince opclass names are only unique within a specific index AM.\n\nI don't think we'd discussed permissions issues. My inclination\nis to restrict CREATE to the owner of the underlying type, on the\ngrounds that letting other people do it interferes with the legitimate\nrights of the type owner (cf my comments about creating casts,\nyesterday). I'm willing to listen to other arguments though.\nWe do have an \"opcowner\" column in pg_opclass, so in any case we\nwill record the creator and allow only him to do DROP.\n\nThoughts, objections?\n\nI'm off to San Diego for a week for the O'Reilly convention, and\nwas casting about for some fairly self-contained project to take\nwith me for idle moments. 
This looks like it would do...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Jul 2002 11:20:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "CREATE/DROP OPERATOR CLASS" }, { "msg_contents": "> Now that we have dependencies implemented, it would be a real good idea\n> if operator classes could be included in the web of dependencies.\n> However, the pg_depends code implicitly assumes that it can do a DROP,\n> if demanded, for any entity referenced by a dependency link.\n\nHmmm...does this mean that there is any situation in which there would be a\ncascade delete of a _column_?\n\nie. If you drop the domain a column is using with the cascade option, will\nthe column get dropped automatically? That doesn't sound like very nice\nbehaviour...?\n\nChris\n\n", "msg_date": "Mon, 22 Jul 2002 09:20:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: CREATE/DROP OPERATOR CLASS" }, { "msg_contents": "On Sun, 2002-07-21 at 21:20, Christopher Kings-Lynne wrote:\n> > Now that we have dependencies implemented, it would be a real good idea\n> > if operator classes could be included in the web of dependencies.\n> > However, the pg_depends code implicitly assumes that it can do a DROP,\n> > if demanded, for any entity referenced by a dependency link.\n> \n> Hmmm...does this mean that there is any situation in which there would be a\n> cascade delete of a _column_?\n> \n> ie. If you drop the domain a column is using with the cascade option, will\n> the column get dropped automatically? That doesn't sound like very nice\n> behaviour...?\n\nIf you drop the domain a column is using, the column has become\ncompletely useless -- and the table broken.\n\nMore obvious if you drop a user defined type that a column was using. 
Or\nthe functions that the user defined type depended on to import / export\ndata then there is no method to insert or extract information from that\ncolumn -- or more to the point any tuple within that table.\n\nThis is what CASCADE does. Ensures the structure is clean much like\nforeign keys and constraints help keep data clean.\n\nRESTRICT is there for those who don't fully understand the structure. \nCautiously test the waters to see what else you may be affecting first.\n\n", "msg_date": "21 Jul 2002 22:00:39 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: CREATE/DROP OPERATOR CLASS" }, { "msg_contents": "> More obvious if you drop a user defined type that a column was using. Or\n> the functions that the user defined type depended on to import / export\n> data then there is no method to insert or extract information from that\n> column -- or more to the point any tuple within that table.\n> \n> This is what CASCADE does. Ensures the structure is clean much like\n> foreign keys and constraints help keep data clean.\n> \n> RESTRICT is there for those who don't fully understand the structure. \n> Cautiously test the waters to see what else you may be affecting first.\n\n\nYeah, I guess you're right!\n\nChris\n\n", "msg_date": "Mon, 22 Jul 2002 10:13:31 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: CREATE/DROP OPERATOR CLASS" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Hmmm...does this mean that there is any situation in which there would be a\n> cascade delete of a _column_?\n\nYou bet. That's why we need DROP COLUMN done ;-)\n\n> ie. If you drop the domain a column is using with the cascade option, will\n> the column get dropped automatically? 
That doesn't sound like very nice\n> behaviour...?\n\nWhat did you think CASCADE was for?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jul 2002 11:11:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CREATE/DROP OPERATOR CLASS " } ]
[ { "msg_contents": "We are about to submit brand new contrib/ltree module\n(first draft of documentation is available from\nhttp://www.sai.msu.su/~megera/postgres/gist/ltree/)\nand I have a question what version to submit - 7.2 or 7.3 ?\nWe have tested the module with 7.2 and have a patch for 7.3.\nProbably better to submit version for 7.3 development and\nhave archive for 7.2.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 21 Jul 2002 18:38:16 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "contrib/ltree for 7.2 or 7.3 ?" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> We are about to submit brand new contrib/ltree module\n> (first draft of documentation is available from\n> http://www.sai.msu.su/~megera/postgres/gist/ltree/)\n> and I have a question what version to submit - 7.2 or 7.3 ?\n\n7.3. There are unlikely to be any more 7.2 releases, and in any\ncase they would be bugfixes only, no new features.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Jul 2002 11:55:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ? " }, { "msg_contents": "On Sun, 21 Jul 2002, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > We are about to submit brand new contrib/ltree module\n> > (first draft of documentation is available from\n> > http://www.sai.msu.su/~megera/postgres/gist/ltree/)\n> > and I have a question what version to submit - 7.2 or 7.3 ?\n>\n> 7.3. There are unlikely to be any more 7.2 releases, and in any\n> case they would be bugfixes only, no new features.\n\nOK. 
We've got documentation written and the module could be downloaded\nfrom http://www.sai.msu.su/~megera/postgres/gist/ltree/ltree.tar.gz\nIt works with current CVS and there is patch.72 within the archive,\nso people could use it with 7.2 release.\n\nI've attached text version of documentation, it's about 16Kb, sorry for that.\nHTML version is available from http://www.sai.msu.su/~megera/postgres/gist/ltree/\n\nAlso, we prepared test data based on DMOZ catalog (about 300,000 nodes)\nand encourage people to play with queries.\n\nOne known issue: It'll not work with 64-bit OS. We'll certainly fix this\nbut will appreciate if somebody with access to 64-bit machine could help us.\nIt's a known problem with byte-alignment.\n\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Tue, 23 Jul 2002 19:33:21 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ? " }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nOleg Bartunov wrote:\n> On Sun, 21 Jul 2002, Tom Lane wrote:\n> \n> > Oleg Bartunov <oleg@sai.msu.su> writes:\n> > > We are about to submit brand new contrib/ltree module\n> > > (first draft of documentation is available from\n> > > http://www.sai.msu.su/~megera/postgres/gist/ltree/)\n> > > and I have a question what version to submit - 7.2 or 7.3 ?\n> >\n> > 7.3. 
There are unlikely to be any more 7.2 releases, and in any\n> > case they would be bugfixes only, no new features.\n> \n> OK. We've got documentation written and the module could be downloaded\n> from http://www.sai.msu.su/~megera/postgres/gist/ltree/ltree.tar.gz\n> It's works with current CVS and there is patch.72 within the archive,\n> so people could use it with 7.2 release.\n> \n> I've attached text version of documentation, it's about 16Kb, sorry for that.\n> HTML version is available from http://www.sai.msu.su/~megera/postgres/gist/ltree/\n> \n> Also, we prepared test data based on DMOZ catalog (about 300,000 nodes)\n> and encourage people to play with queries.\n> \n> One known issue: It'll not works with 64-bit OS. We'll certainly fix this\n> but will appreciate if somebody with access to 64-bit machine could help us.\n> It's known problem with byte-alignment.\n> \n> \n> >\n> > \t\t\tregards, tom lane\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jul 2002 19:13:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ?" }, { "msg_contents": "\nPatch applied. 
Thanks.\n\n---------------------------------------------------------------------------\n\n\n\n\nOleg Bartunov wrote:\n> On Sun, 21 Jul 2002, Tom Lane wrote:\n> \n> > Oleg Bartunov <oleg@sai.msu.su> writes:\n> > > We are about to submit brand bew contrib/ltree module\n> > > (first draft of documetation is available from\n> > > http://www.sai.msu.su/~megera/postgres/gist/ltree/)\n> > > and I have a question what version to submit - 7.2 or 7.3 ?\n> >\n> > 7.3. There are unlikely to be any more 7.2 releases, and in any\n> > case they would be bugfixes only, no new features.\n> \n> OK. We've got documentation written and the module could be downloaded\n> from http://www.sai.msu.su/~megera/postgres/gist/ltree/ltree.tar.gz\n> It's works with current CVS and there is patch.72 within the archive,\n> so people could use it with 7.2 release.\n> \n> I've attached text version of documentation, it's about 16Kb, sorry for that.\n> HTML version is available from http://www.sai.msu.su/~megera/postgres/gist/ltree/\n> \n> Also, we prepared test data based on DMOZ catalog (about 300,000 nodes)\n> and encourage people to play with queries.\n> \n> One known issue: It'll not works with 64-bit OS. We'll certainly fix this\n> but will appreciate if somebody with access to 64-bit machine could help us.\n> It's known problem with byte-alignment.\n> \n> \n> >\n> > \t\t\tregards, tom lane\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 12:44:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ?" }, { "msg_contents": "> Oleg Bartunov wrote:\n>> One known issue: It'll not works with 64-bit OS. We'll certainly fix this\n>> but will appreciate if somebody with access to 64-bit machine could help us.\n\nActually, it dumps core instantly on 32-bit machines too, if they are\npickier about alignment than Intel hardware is. You can't map\nstructures onto char[] arrays that start at odd byte offsets and not\nexpect trouble.\n\nI also do not trust macros like this:\n\ntypedef struct {\n\tint32\tlen;\n\tuint16\tnumlevel;\n\tchar\tdata[1];\n} ltree;\n\n#define LTREE_HDRSIZE\t( sizeof(int32) + sizeof(uint16) )\n\nbecause they take no account of the possibility of padding between fields.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 14:48:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ? " }, { "msg_contents": "I'm now working on it.\n\nTom Lane wrote:\n>>Oleg Bartunov wrote:\n>>\n>>>One known issue: It'll not works with 64-bit OS. We'll certainly fix this\n>>>but will appreciate if somebody with access to 64-bit machine could help us.\n>>\n> \n> Actually, it dumps core instantly on 32-bit machines too, if they are\n> pickier about alignment than Intel hardware is. 
You can't map\n> structures onto char[] arrays that start at odd byte offsets and not\n> expect trouble.\n> \n> I also do not trust macros like this:\n> \n> typedef struct {\n> \tint32\tlen;\n> \tuint16\tnumlevel;\n> \tchar\tdata[1];\n> } ltree;\n> \n> #define LTREE_HDRSIZE\t( sizeof(int32) + sizeof(uint16) )\n> \n> because they take no account of the possibility of padding between fields.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Tue, 30 Jul 2002 22:52:11 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ?" }, { "msg_contents": "\nFolks, has this been fixed?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> > Oleg Bartunov wrote:\n> >> One known issue: It'll not works with 64-bit OS. We'll certainly fix this\n> >> but will appreciate if somebody with access to 64-bit machine could help us.\n> \n> Actually, it dumps core instantly on 32-bit machines too, if they are\n> pickier about alignment than Intel hardware is. You can't map\n> structures onto char[] arrays that start at odd byte offsets and not\n> expect trouble.\n> \n> I also do not trust macros like this:\n> \n> typedef struct {\n> \tint32\tlen;\n> \tuint16\tnumlevel;\n> \tchar\tdata[1];\n> } ltree;\n> \n> #define LTREE_HDRSIZE\t( sizeof(int32) + sizeof(uint16) )\n> \n> because they take no account of the possibility of padding between fields.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Aug 2002 02:27:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ?" }, { "msg_contents": "On Tue, 6 Aug 2002, Bruce Momjian wrote:\n\n>\n> Folks, has this been fixed?\n>\n\nBruce, you already did apply patches which fixed this issue :-)\n\n\n> ---------------------------------------------------------------------------\n>\n> Tom Lane wrote:\n> > > Oleg Bartunov wrote:\n> > >> One known issue: It'll not works with 64-bit OS. We'll certainly fix this\n> > >> but will appreciate if somebody with access to 64-bit machine could help us.\n> >\n> > Actually, it dumps core instantly on 32-bit machines too, if they are\n> > pickier about alignment than Intel hardware is. You can't map\n> > structures onto char[] arrays that start at odd byte offsets and not\n> > expect trouble.\n> >\n> > I also do not trust macros like this:\n> >\n> > typedef struct {\n> > \tint32\tlen;\n> > \tuint16\tnumlevel;\n> > \tchar\tdata[1];\n> > } ltree;\n> >\n> > #define LTREE_HDRSIZE\t( sizeof(int32) + sizeof(uint16) )\n> >\n> > because they take no account of the possibility of padding between fields.\n> >\n> > \t\t\tregards, tom lane\n> >\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 6 Aug 2002 11:54:35 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ?" 
}, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Folks, has this been fixed?\n\nI can report that ltree passes its regression test now on HPPA.\nIt didn't before...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Aug 2002 09:40:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/ltree for 7.2 or 7.3 ? " } ]
[ { "msg_contents": "\n Hi, i want to announce you the release of a new PostgreSQL Interface... its \n name is XPg, and it was made on Java under the GPL license.\n Please, check the site... it could be a useful tool for much people (although\n the application is still under development). \n \n The website of XPg is:\n http://www.kazak.ws/xpg\n\n The PostgreSQL hackers opinion about XPg is really important for us \n developers. \n\n Thank you for your attention.\n Have nice day.\n-- \n+-----------------------------------------------------xtingray--+\n Gustavo Gonzalez\n Director Area de Redes\n\n Linux User: #239480 - ICQ: 95848099\n\n Soluciones KAZAK Limitada\n Grupo de Investigacion y Desarrollo\n en Tecnologias de Software Libre\n www.kazak.ws - A.A. 25180 Cali, Colombia\n+---------------------------------------------------------------+\n\n\n-------------------------------------------------------\n\n-- \n+-----------------------------------------------------xtingray--+\n Gustavo Gonzalez\n Director Area de Redes\n\n Linux User: #239480 - ICQ: 95848099\n\n Soluciones KAZAK Limitada\n Grupo de Investigacion y Desarrollo\n en Tecnologias de Software Libre\n www.kazak.ws - A.A. 25180 Cali, Colombia\n+---------------------------------------------------------------+\n", "msg_date": "Sun, 21 Jul 2002 13:11:54 -0500", "msg_from": "Gustavo Gonzalez Giron <xtingray@kazak.ws>", "msg_from_op": true, "msg_subject": "Fwd: XPg: a new Java PostgreSQL Interface" }, { "msg_contents": "Gustavo,\n\nI had a quick look at it, and it looks great!\n\nCouple of questions?\n\n1) Why did you choose the GPL license, I expect it cannot be added to\nthe postgres distribution with that license? Postgres is released under\nthe BSD license.\n\n2) What did you modify in the driver to make it work?\n3) You should post this on the jdbc list.\n\n4) Not so much a question, but I have started a project on sourceforge\nwhich is essentially the same project. 
http://sf.net/projects/jpgadmin.\nI may be interested in collaborating.\n\nDave\n\n\nOn Sun, 2002-07-21 at 14:11, Gustavo Gonzalez Giron wrote:\n> \n> Hi, i want to announce you the release of a new PostgreSQL Interface... its \n> name is XPg, and it was made on Java under the GPL license.\n> Please, check the site... it could be a useful tool for much people (although\n> the application is still under development). \n> \n> The website of XPg is:\n> http://www.kazak.ws/xpg\n> \n> The PostgreSQL hackers opinion about XPg is really important for us \n> developers. \n> \n> Thank you for your attention.\n> Have nice day.\n> -- \n> +-----------------------------------------------------xtingray--+\n> Gustavo Gonzalez\n> Director Area de Redes\n> \n> Linux User: #239480 - ICQ: 95848099\n> \n> Soluciones KAZAK Limitada\n> Grupo de Investigacion y Desarrollo\n> en Tecnologias de Software Libre\n> www.kazak.ws - A.A. 25180 Cali, Colombia\n> +---------------------------------------------------------------+\n> \n> \n> -------------------------------------------------------\n> \n> -- \n> +-----------------------------------------------------xtingray--+\n> Gustavo Gonzalez\n> Director Area de Redes\n> \n> Linux User: #239480 - ICQ: 95848099\n> \n> Soluciones KAZAK Limitada\n> Grupo de Investigacion y Desarrollo\n> en Tecnologias de Software Libre\n> www.kazak.ws - A.A. 25180 Cali, Colombia\n> +---------------------------------------------------------------+\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\n\n\n", "msg_date": "21 Jul 2002 21:07:51 -0400", "msg_from": "Dave Cramer <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: XPg: a new Java PostgreSQL Interface" }, { "msg_contents": "\n Hello Dave... thank you for pay attention to our application.\n\n> Couple of questions?\n Couple of answers! 
:)\n \n> 1) Why did you choose the GPL license, I expect it cannot be added to\n> the postgres distribution with that license? Postgres is released under\n> the BSD license.\n\n We chose the GPL license because we want that everyone could have\n access to the application, including the source code. But we love it if\n XPg is added or included as a part of the postgres distribution... it \n would be great for us. If it is necessary to change the license, we will do\n it. \n\n> 2) What did you modify in the driver to make it work?\n\n We tried to get some information about foreign keys with the standard \n JDBC... but it never works... :(\n\n> 3) You should post this on the jdbc list.\n\n I did it in several places... even java.sun.com... if you could help me \n with other sites and mail lists... it will be nice ;)\n \n> 4) Not so much a question, but I have started a project on sourceforge\n> which is essentially the same project. http://sf.net/projects/jpgadmin.\n> I may be interested in collaborating.\n\n Your help and experience will be welcome always :)\n\n Have a nice day.\n\n\n> \n> On Sun, 2002-07-21 at 14:11, Gustavo Gonzalez Giron wrote:\n> > \n> > Hi, i want to announce you the release of a new PostgreSQL Interface... \nits \n> > name is XPg, and it was made on Java under the GPL license.\n> > Please, check the site... it could be a useful tool for much people \n(although\n> > the application is still under development). \n> > \n> > The website of XPg is:\n> > http://www.kazak.ws/xpg\n> > \n> > The PostgreSQL hackers opinion about XPg is really important for us \n \n> > developers. \n> > \n> > Thank you for your attention.\n> > Have nice day.\n> > -- \n> > +-----------------------------------------------------xtingray--+\n> > Gustavo Gonzalez\n> > Director Area de Redes\n> > \n> > Linux User: #239480 - ICQ: 95848099\n> > \n> > Soluciones KAZAK Limitada\n> > Grupo de Investigacion y Desarrollo\n> > en Tecnologias de Software Libre\n> > www.kazak.ws - A.A. 
25180 Cali, Colombia\n> > +---------------------------------------------------------------+\n> > \n> > \n> > -------------------------------------------------------\n> > \n> > -- \n> > +-----------------------------------------------------xtingray--+\n> > Gustavo Gonzalez\n> > Director Area de Redes\n> > \n> > Linux User: #239480 - ICQ: 95848099\n> > \n> > Soluciones KAZAK Limitada\n> > Grupo de Investigacion y Desarrollo\n> > en Tecnologias de Software Libre\n> > www.kazak.ws - A.A. 25180 Cali, Colombia\n> > +---------------------------------------------------------------+\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> > \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n-- \n+-----------------------------------------------------xtingray--+\n Gustavo Gonzalez\n Director Area de Redes\n\n Linux User: #239480 - ICQ: 95848099\n\n Soluciones KAZAK Limitada\n Grupo de Investigacion y Desarrollo\n en Tecnologias de Software Libre\n www.kazak.ws - A.A. 25180 Cali, Colombia\n+---------------------------------------------------------------+\n", "msg_date": "Sun, 21 Jul 2002 23:37:02 -0500", "msg_from": "Gustavo Gonzalez Giron <xtingray@kazak.ws>", "msg_from_op": true, "msg_subject": "Re: Fwd: XPg: a new Java PostgreSQL Interface" }, { "msg_contents": "On Sun, 21 Jul 2002, Gustavo Gonzalez Giron wrote:\n\n>\n> Hello Dave... thank you for pay attention to our application.\n>\n> > Couple of questions?\n> Couple of answers! :)\n>\n> > 1) Why did you choose the GPL license, I expect it cannot be added to\n> > the postgres distribution with that license? 
Postgres is released under\n> > the BSD license.\n>\n> We choose the GPL license because we want that everyone could have\n> access to the application, including the source code. But we love it if\n> XPg is added or included as a part of the postgres distribution... it\n> would be great for us. If is necessary to change de license, we will do\n> it.\n\nIt would require a change of license ... but that wouldn't necessarily\nget it included as part of the distribution.\n\n\n", "msg_date": "Mon, 22 Jul 2002 02:04:29 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Fwd: XPg: a new Java PostgreSQL Interface" }, { "msg_contents": "With pg 7.2 I had no problems, but getting the development version from CVS I\nam unable to compile. Specifically it does a:\nmake: *** No rule to make target `isinf.o', needed by `SUBSYS.o'. Stop.\nwithin src/backend/port. I noticed that the Makefile setup has been changed\nsince the stable version.\nHas it been tested on Solaris 9? Is something amiss in the new Makefile setup?\n\nThanks;\nEric\n\n", "msg_date": "Thu, 25 Jul 2002 20:40:04 -0500", "msg_from": "redmonde@purdue.edu", "msg_from_op": false, "msg_subject": "solaris 9?" }, { "msg_contents": "redmonde@purdue.edu writes:\n> With pg 7.2 I had no problems, but getting the development version from CVS I\n> am unable to compile. Specifically it does a:\n> make: *** No rule to make target `isinf.o', needed by `SUBSYS.o'. Stop.\n> within src/backend/port. I noticed that the Makefile setup has been changed\n> since the stable version.\n> Has it been tested on Solaris 9?\n\nYou just did ;-)\n\n> Is something amiss in the new Makefile setup?\n\nEvidently. Send us a patch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jul 2002 11:16:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: solaris 9? 
" }, { "msg_contents": "\nWe are in the process of refactoring how the port-specific files are\nused in PostgreSQL. Please grab a newer CVS and give it a try. Thanks.\n\n---------------------------------------------------------------------------\n\nredmonde@purdue.edu wrote:\n> With pg 7.2 I had no problems, but getting the development version from CVS I\n> am unable to compile. Specifically it does a:\n> make: *** No rule to make target `isinf.o', needed by `SUBSYS.o'. Stop.\n> within src/backend/port. I noticed that the Makefile setup has been changed\n> since the stable version.\n> Has it been tested on Solaris 9? Is something amiss in the new Makefile setup?\n> \n> Thanks;\n> Eric\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 22:40:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: solaris 9?" } ]
[ { "msg_contents": "I am working on a rather extensive Oracle compatibility layer that I use \nwith PostgreSQL. I am considering contributing the code back to the \nproject but the problem is that the layer is currently implemented in \nC++. Simply looking throughout the sources and the dev FAQ, there's \nnever any mention of C++ (libpq++ excepted). So, at a risk of stating \nthe obvious (and I'm 99.99% sure I am), does backend code need to be \nsubmitted as C even if it's for an entirely NEW module?\n\nThanks,\n\nMarc L.\n\nPS: (Background) The code originally started out as client-side C++ code \nfor a project but I'm seeing the value of it for the server side for \nachieving SQL*Net compatibility.\n\n\n", "msg_date": "Sun, 21 Jul 2002 15:20:34 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "C vs. C++ contributions" }, { "msg_contents": "Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> never any mention of C++ (libpq++ excepted). So, at a risk of stating \n> the obvious (and I'm 99.99% sure I am), does backend code need to be \n> submitted as C even if it's for an entirely NEW module?\n\nBackend code must be C; we do not want to deal with C++ portability\nissues.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jul 2002 11:29:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions " }, { "msg_contents": "Tom Lane wrote:\n> Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> > never any mention of C++ (libpq++ excepted). 
So, at a risk of stating \n> > the obvious (and I'm 99.99% sure I am), does backend code need to be \n> > submitted as C even if it's for an entirely NEW module?\n> \n> Backend code must be C; we do not want to deal with C++ portability\n> issues.\n\nIs it something that could be in /contrib?\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jul 2002 19:16:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "The code comes from a project in which I needed to import from Oracle \nexport files into PostgreSQL. I determined the export file format and \ncreated a parser class. Since the INSERT syntax in Oracle export files \nis based on a parse/bind/execute model, I created a second module which \nimplements a client-side parse/bind/execute interface to PostgreSQL. \nAlso, some syntax is Oracle-specific and needed to be converted (eg. \nDATE->TIMESTAMP), so I created a 3rd module to perform appropriate \nconversions for PostgreSQL.\n\nIt became apparent to me that these could form the foundation for the \nTODO of a SQL*Net interface into PostgreSQL since the SQL*Net protocol \nuses a lot of the same constructs found in an export file. Essentially, \nthe only pieces missing would be a SQL*Net protocol implementation and a \nmore comprehensive set of Oracle-like data dictionary views.\n\nI don't mind converting this to C but since the code was not written \nusing PostgreSQL coding standards it would take a fair amount of time \nand I want to be certain it's worth the effort. The only built-in C++ \nclass in the code is the string class in the conversion module and it \ncan easily be replaced by a basic C routine. 
A quick note that the code \nis currently being used on Solaris and Linux so it's at least somewhat \nportable.\n\nThe question really is do people want an Oracle compatibility layer and \nif yes where should it be in the source tree. Otherwise, I'd be happy to \ncontribute it as is, a utility to import Oracle export dumps into \nPostgreSQL.\n\nI'm sure there are people on this list who are indifferent to Oracle \ncompatibility. However, a compatibility layer would open the door to \nmany existing Oracle applications and more importantly would really ease \nmigration to PostgreSQL from Oracle!\n\n\nBruce Momjian wrote:\n> Tom Lane wrote:\n> \n>>Marc Lavergne <mlavergne-pub@richlava.com> writes:\n>>\n>>>never any mention of C++ (libpq++ excepted). So, at a risk of stating \n>>>the obvious (and I'm 99.99% sure I am), does backend code need to be \n>>>submitted as C even if it's for an entirely NEW module?\n>>\n>>Backend code must be C; we do not want to deal with C++ portability\n>>issues.\n> \n> \n> Is it something that could be in /conrib?\n> \n\n\n", "msg_date": "Wed, 24 Jul 2002 01:00:38 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "\nMarc, where did we leave this?  Also, 7.3 will have prepared statements\nin the backend code, so that should make the porting job easier.\n\n---------------------------------------------------------------------------\n\nMarc Lavergne wrote:\n> The code comes from a project in which I needed to import from Oracle \n> export files into PostgreSQL. I determined the export file format and \n> created a parser class. Since the INSERT syntax in Oracle export files \n> is based on a parse/bind/execute model, I created second module which \n> implements a client-side parse/bind/execute interface to PostgreSQL. \n> Also, some syntax is Oracle-specific and needed to be converted (eg. 
\n> DATE->TIMESTAMP), so I created a 3rd module to perform appropriate \n> conversions for PostgreSQL.\n> \n> It become apparent to me that these could form the foundation for the \n> TODO of a SQL*Net interface into PostgreSQL since the SQL*Net protocol \n> uses a lot of the same constructs found in an export file. Essentially, \n> the only pieces missing would be a SQL*Net protocol implementation and \n> more comprehensive set of Oracle-like data dictionary views.\n> \n> I don't mind converting this to C but since the code was not written \n> using PostgreSQL coding standards it would take a fair amount of time \n> and I want to be certain it's worth the effort. The only built-in C++ \n> class in the code is the string class in the conversion module and it \n> can easily be replaced by a basic C routine. A quick note that the code \n> is currently being used on Solaris and Linux so it's at least somewhat \n> portable\n> \n> The question really is do people want an Oracle compatibility layer and \n> if yes where should it be in the source tree. Otherwise, I'd be happy to \n> contribute it as is, a utility to import Oracle export dumps into \n> PostgreSQL.\n> \n> I'm sure there are people on this list who are indifferent to Oracle \n> compatibility. However, a compatibility layer would open the door to \n> many existing Oracle applications and more importantly would really ease \n> migration to PostgreSQL from Oracle!\n> \n> \n> Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > \n> >>Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> >>\n> >>>never any mention of C++ (libpq++ excepted). 
So, at a risk of stating \n> >>>the obvious (and I'm 99.99% sure I am), does backend code need to be \n> >>>submitted as C even if it's for an entirely NEW module?\n> >>\n> >>Backend code must be C; we do not want to deal with C++ portability\n> >>issues.\n> > \n> > \n> > Is it something that could be in /conrib?\n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Aug 2002 12:33:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "This covers a few different sub-projects so I'm breaking it down in \norder of how I'm going to attack it. The TTC estimates are going to vary \n based on my bandwidth which, unfortunately, is pretty tight right now. \nHowever, there is relief on the horizon and I do have a vested interest \nin getting this completed, so here goes:\n\n\nSYNONYM functionality:\n----------------------\nI need to create a patch for SYNONYM like functionality against 7.2.2 \n(and 7.3). This is a very high priority for me right now. This can't be \na simple mimic using views, it has to be a true synonym mechanism. This \nmay require a significant amount of work.\n\nTTC: mid/end-September\n\n\nPostgreSQL parse/bind/execute Layer:\n------------------------------------\nThis would be mimicked since PostgreSQL doesn't support it natively. \nIt's needed because most Oracle based code (and certainly export files) \nexpect the data to be processed in this manner. I essentially already \nhave this done but the code for it is in C++ so I just need to convert. 
\nThis is currently a client side extension.\n\nTTC: end-September\n\n\nOracle Export File Reader:\n--------------------------\nI am going to convert my code to C and offer it up. It's already posted \non Gborg in C++ form but I see the benefit of moving it to C. This is \nalready written so it's only a matter of repackaging it. Note that this \nrequires several emulated Oracle features.\n\nTTC: mid-October\n\n\nOracle Compatibility Layer:\n---------------------------\nHas to be attacked in stages due to the large number of Oracle \nextensions to SQL. I would guess that Oracle JOIN syntax would be first \non my list (I'm thinking of just doing syntax expansion), followed by \nOracle data dictionary compatibility (this is already mostly completed). \nThere are also some commonly used Oracle functions that need to be \nduplicated by name (ie. decode -> case).\n\nTTC: end-October\n\n\nSQL*Net Compatibility:\n----------------------\nThis is a much nastier (but more interesting) piece of work. This will \nrequire implementing a virtual Oracle SQL*Net listener. I've done some \nwork like this on the client side but never on the server side. It's \ncertainly doable and I already know how I want to approach this. I don't \nexpect much trouble getting this to work under extremely controlled \nconditions. However, getting this to a workable state for general use is \nreally an unknown right now.\n\nTTC: end-December???\n\n\nSo in brief, I haven't abandoned the plan, I just haven't had the \nbandwidth to get back on it. I'm hoping to be able to commit some time \nto this very shortly.\n\nCheers,\n\nMarc L.\n\nBruce Momjian wrote:\n> Marc, wher did we leave this? 
Also, 7.3 will have prepared statements\n> in the backend code, so that should make the porting job easier.\n> \n> ---------------------------------------------------------------------------\n> \n> Marc Lavergne wrote:\n> \n>>The code comes from a project in which I needed to import from Oracle \n>>export files into PostgreSQL. I determined the export file format and \n>>created a parser class. Since the INSERT syntax in Oracle export files \n>>is based on a parse/bind/execute model, I created second module which \n>>implements a client-side parse/bind/execute interface to PostgreSQL. \n>>Also, some syntax is Oracle-specific and needed to be converted (eg. \n>>DATE->TIMESTAMP), so I created a 3rd module to perform appropriate \n>>conversions for PostgreSQL.\n>>\n>>It become apparent to me that these could form the foundation for the \n>>TODO of a SQL*Net interface into PostgreSQL since the SQL*Net protocol \n>>uses a lot of the same constructs found in an export file. Essentially, \n>>the only pieces missing would be a SQL*Net protocol implementation and \n>>more comprehensive set of Oracle-like data dictionary views.\n>>\n>>I don't mind converting this to C but since the code was not written \n>>using PostgreSQL coding standards it would take a fair amount of time \n>>and I want to be certain it's worth the effort. The only built-in C++ \n>>class in the code is the string class in the conversion module and it \n>>can easily be replaced by a basic C routine. A quick note that the code \n>>is currently being used on Solaris and Linux so it's at least somewhat \n>>portable\n>>\n>>The question really is do people want an Oracle compatibility layer and \n>>if yes where should it be in the source tree. Otherwise, I'd be happy to \n>>contribute it as is, a utility to import Oracle export dumps into \n>>PostgreSQL.\n>>\n>>I'm sure there are people on this list who are indifferent to Oracle \n>>compatibility. 
However, a compatibility layer would open the door to \n>>many existing Oracle applications and more importantly would really ease \n>>migration to PostgreSQL from Oracle!\n>>\n>>\n>>Bruce Momjian wrote:\n>>\n>>>Tom Lane wrote:\n>>>\n>>>\n>>>>Marc Lavergne <mlavergne-pub@richlava.com> writes:\n>>>>\n>>>>\n>>>>>never any mention of C++ (libpq++ excepted). So, at a risk of stating \n>>>>>the obvious (and I'm 99.99% sure I am), does backend code need to be \n>>>>>submitted as C even if it's for an entirely NEW module?\n>>>>\n>>>>Backend code must be C; we do not want to deal with C++ portability\n>>>>issues.\n>>>\n>>>\n>>>Is it something that could be in /conrib?\n>>>\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>>\n> \n> \n\n\n", "msg_date": "Tue, 27 Aug 2002 18:24:07 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> PostgreSQL parse/bind/execute Layer:\n> ------------------------------------\n> This would be mimicked since PostgreSQL doesn't support it\n> natively.\n\nWhat's stopping you from implementing native support for this? There\nwill hopefully be an FE/BE protocol change during the 7.4 development\ncycle, which should give you the opportunity to make any\nprotocol-level changes required to implement this properly.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "27 Aug 2002 18:52:57 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "That's quite a bite to chew given my level of experience with \nPostgreSQL internals! 
However, I will keep it in mind and whatever I do \nwill be fully abstracted (already is actually) so that it should just be a \nmatter of snapping it into place when 7.4 forks.  Realistically, I can't \ncomment from an informed position on this yet. When I get a chance to \nlook into what is happening in 7.3 and the 7.4 roadmap, I will post back \nif I feel I can provide something of substance.\n\nCheers,\n\nMarc\n\nNeil Conway wrote:\n> Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> \n>>PostgreSQL parse/bind/execute Layer:\n>>------------------------------------\n>>This would be mimicked since PostgreSQL doesn't support it\n>>natively.\n> \n> \n> What's stopping you from implementing native support for this? There\n> will hopefully be an FE/BE protocol change during the 7.4 development\n> cycle, which should give you the opportunity to make any\n> protocol-level changes required to implement this properly.\n> \n> Cheers,\n> \n> Neil\n> \n\n\n", "msg_date": "Wed, 28 Aug 2002 00:39:03 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "Marc Lavergne writes:\n\n> This covers a few different sub-projects so I'm breaking it down in\n> order of how I'm going to attack it.\n\nJust to give you a fair warning: I'm not going to be in favor of adding\nany \"Oracle compatibility\" functionality that overlaps with existing\nand/or standardized functionality.  That kind of thing would lock us into\nan endless catch-up game and would induce users to code their applications\nto proprietary interfaces.\n\nI suppose some of the things you propose would be external applications,\nsuch as the export file reader or the SQL*Net proxy.  The synonym\nfunctionality would be interesting to add, since there is no existing\nfeature of that type. 
But random misfeatures such as the join syntax or\nthe decode() function are going to have a hard time getting accepted.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 28 Aug 2002 20:27:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "Peter Eisentraut wrote:\n> Marc Lavergne writes:\n> \n> > This covers a few different sub-projects so I'm breaking it down in\n> > order of how I'm going to attack it.\n> \n> Just to give you a fair warning: I'm not going to be in favor of adding\n> any \"Oracle compatibility\" functionality that overlaps with existing\n> and/or standardized functionality. That kind of thing would lock us into\n> an endless catch-up game and would induce users to code their applications\n> to proprietary interfaces.\n\nI think some of the Oracle stuff will have to be turned on to be\nenabled, others of it can be on by default.\n\n> I suppose some of the things you propose would be external applications,\n> such as the export file reader or the SQL*Net proxy. The synonym\n> functionality would be interesting to add, since there is no existing\n> feature of that type. But random misfeatures such as the join syntax or\n> the decode() function are going to have a hard time getting accepted.\n\nBut we have decode():\n\n\ttest=> \\df decode\n\t List of functions\n\t Result data type | Schema | Name | Argument data types \n\t------------------+------------+--------+---------------------\n\t bytea | pg_catalog | decode | text, text\n\t(1 row)\n\nIs this a different decode?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 28 Aug 2002 15:03:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C vs. 
C++ contributions" }, { "msg_contents": " >> But we have decode():\n\nIt is a different decode function, more akin to \"case when\".\n\n >> Just to give you a fair warning: I'm not going to be in favor of \nadding any \"Oracle compatibility\" functionality that overlaps with \nexisting and/or standardized functionality.\n\nI wouldn't even dream of it. The last thing I want to do is turn \nPostgreSQL into an Oracle clone. My aim is to provide a simplified \nmigration path from Oracle and to implement basic compatibility to \nenable interoperability between the two DBs.\n\nBruce Momjian wrote:\n> Peter Eisentraut wrote:\n> \n>>Marc Lavergne writes:\n>>\n>>\n>>>This covers a few different sub-projects so I'm breaking it down in\n>>>order of how I'm going to attack it.\n>>\n>>Just to give you a fair warning: I'm not going to be in favor of adding\n>>any \"Oracle compatibility\" functionality that overlaps with existing\n>>and/or standardized functionality. That kind of thing would lock us into\n>>an endless catch-up game and would induce users to code their applications\n>>to proprietary interfaces.\n> \n> \n> I think some of the Oracle stuff will have to be turned on to be\n> enabled, others of it can be on by default.\n> \n> \n>>I suppose some of the things you propose would be external applications,\n>>such as the export file reader or the SQL*Net proxy. The synonym\n>>functionality would be interesting to add, since there is no existing\n>>feature of that type. 
But random misfeatures such as the join syntax or\n>>the decode() function are going to have a hard time getting accepted.\n> \n> \n> But we have decode():\n> \n> \ttest=> \\df decode\n> \t List of functions\n> \t Result data type |   Schema   |  Name  | Argument data types \n> \t------------------+------------+--------+---------------------\n> \t bytea            | pg_catalog | decode | text, text\n> \t(1 row)\n> \n> Is this a different decode?\n> \n\n\n", "msg_date": "Wed, 28 Aug 2002 17:05:22 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "> > feature of that type.  But random misfeatures such as the join syntax or\n> > the decode() function are going to have a hard time getting accepted.\n>\n> But we have decode():\n>\n> \ttest=> \\df decode\n> \t List of functions\n> \t Result data type |   Schema   |  Name  | Argument data types\n> \t------------------+------------+--------+---------------------\n> \t bytea            | pg_catalog | decode | text, text\n> \t(1 row)\n>\n> Is this a different decode?\n\nYes, this is completely different.  Oracle decode works like this:\n\ndecode(value, choice1, result1, choice2, result2, ......, default_result)\n\nit works this way:\ncase \n when value=choice1 then result1\n when value=choice2 then result2\n ....\n else\n default_result\nend\n\nThe nice part is it has a varying number of arguments, and can be used within \"sort\". Very useful sometimes.\n\nAnother very useful Oracle function is \"nvl\", the same as \"coalesce\", but a lot easier to write :-)\n\n\n", "msg_date": "Thu, 29 Aug 2002 08:57:06 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "Mario Weilguni wrote:\n> Yes this is completly different. 
Oracle decode works like this:\n> \n> decode(value, choice1, result1, choice 2, result2, ......,\n> default_result)\n> \n> it works this way: case\n> when value=choice1 then result1 when value=choice2 then result2\n> ....\n> else default_result end\n\nYes, it would be nice to be able to turn these on just for portability with\nOracle queries.\n\n> The nice part is it has an varying number of arguments, and can\n> used within \"sort\". Very useful sometimes.\n\nWith our CASE, you create a column using the CASE, then ORDER BY on that\ncolumn.\n\n> Another one very useful oracle function is \"nvl\", the same like\n> \"coalesce\", but a lot easier to write :-)\n\nYes, another nice portability thing if it could be enabled/disabled.\n\n--\n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 359-1001\n  +  If your life is a hard drive,     |  13 Roberts Road\n  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 14:32:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "> > The nice part is it has an varying number of arguments, and can\n> > used within \"sort\". Very useful sometimes.\n>\n> With our CASE, you create a column using the CASE, then ODER BY on that\n> column.\n\nNot exactly, you have to create a column for this, and it will be in the select list. Oracle does not need this.\n", "msg_date": "Fri, 30 Aug 2002 07:59:40 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions" }, { "msg_contents": "Marc Lavergne wrote:\n> That's an quite a bite to chew given my level of experience with \n> PostgreSQL internals!  However, I will keep it in mind and whatever I do \n> will be fully abstracted (already is actually) so that it should just a \n> matter of snapping it into place when 7.4 forks. 
Realistically, I can't \n> comment from an informed position on this yet. When I get a chance to \n> look into what is happening in 7.3 and the 7.4 roadmap, I will post back \n> if I feel I can provide something of substance.\n\nFYI, we just split off 7.4 so we are ready to accept 7.4 patches.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 28 Sep 2002 22:09:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C vs. C++ contributions" } ]
[ { "msg_contents": "Hackers,\n\nif the weather stays fine, I'll be offline for up to two weeks. Hope\nthere are no show stoppers in my recent heap tuple header changes. If\nproblems arise, report them here; I'll address them when I'm back.\n\nServus\n Manfred\n", "msg_date": "Sun, 21 Jul 2002 23:08:58 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Heap tuple header, vacation" } ]
[ { "msg_contents": "\nI'm pleased to announce the release of pgbash-2.4a.2.\nhttp://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n\nChangeLog:\n\n1.Add \"REINDEX\" as a SQL reserved word.\n\n2.Fix a bug of the single/double quotation surrounded by the \n double/single quotation.\n ex.) \n insert into \"test\" values(123,'name\"name');\n select code as \"He's code\" from test where name='sanme\"name';\n \n3. Add the functionality of processing the single quotation \n data surrounded by \\'.\n ex.) \n DATA=\"I can't\"\n select * from test where mesg=\\'$DATA\\';\n\n# A bug of \"REINDEX\" was reported by ISHIDA akio.\n# A bug of single/double quotation was reported by Tomoo Nomura.\n\n\n(examples)\nDATA1=\"sakaida's\"\nDATA2='kobe\"desu'\ninsert into \"test\" values(111,'sakaida''s','kobe\"desu');\ninsert into \"test\" values(111,\\'$DATA1\\',\\'$DATA2\\');\n\nDATA1=\"sakaida''s\"\nDATA2='kobe\"d\"esu'\ninsert into \"test\" values(111,'sakaida''s','kobe\"d\"esu');\ninsert into \"test\" values(111,\\'$DATA1\\', '$DATA2' );\n\nselect * from test;\nselect * from test where name=\\'$DATA1\\';\nselect code as \"code's\" from test where name=\\'$DATA1\\';\n\n--\nSAKAIDA Masaaki \n\n", "msg_date": "Mon, 22 Jul 2002 12:30:22 +0900", "msg_from": "SAKAIDA Masaaki <sakaida@psn.co.jp>", "msg_from_op": true, "msg_subject": "pgbash-2.4a.2 released" }, { "msg_contents": "Would it be worth creating a list of postgres projects like this somewhere on\nthe postgres site? 
I had no idea this existed...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of SAKAIDA Masaaki\n> Sent: Monday, 22 July 2002 11:30 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] pgbash-2.4a.2 released\n>\n>\n>\n> I'm pleased to announce the release of pgbash-2.4a.2.\n> http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n>\n> ChangeLog:\n>\n> 1.Add \"REINDEX\" as a SQL reserved word.\n>\n> 2.Fix a bug of the single/double quotaion surrounded by the\n> double/single quotation.\n> ex.)\n> insert into \"test\" values(123,'name\"name');\n> select code as \"He's code\" from test where name='sanme\"name';\n>\n> 3. Add the functionality of processing the single quotation\n> data surrounded by \\'.\n> ex.)\n> DATA=\"I can't\"\n> select * from test where mesg=\\'$DATA\\';\n>\n> # A bug of \"REINDEX\" was reported by ISHIDA akio.\n> # A bug of single/double quotation was reported by Tomoo Nomura.\n>\n>\n> (examples)\n> DATA1=\"sakaida's\"\n> DATA2='kobe\"desu'\n> insert into \"test\" values(111,'sakaida''s','kobe\"desu');\n> insert into \"test\" values(111,\\'$DATA1\\',\\'$DATA2\\');\n>\n> DATA1=\"sakaida''s\"\n> DATA2='kobe\"d\"esu'\n> insert into \"test\" values(111,'sakaida''s','kobe\"d\"esu');\n> insert into \"test\" values(111,\\'$DATA1\\', '$DATA2' );\n>\n> select * from test;\n> select * from test where name=\\'$DATA1\\';\n> select code as \"code's\" from test where name=\\'$DATA1\\';\n>\n> --\n> SAKAIDA Masaaki\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Mon, 22 Jul 2002 11:33:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pgbash-2.4a.2 released" }, { "msg_contents": "\nBest would be to get it added as a project on GBorg ... 
Chris is working\non changes to GBorg to allow a project to be added to the site *without*\nit necessarily being 'hosted' there, so that projects like this, or PgAdmin,\ncan be tracked through one central location, without limiting the\ndevelopers themselves ...\n\n\nOn Mon, 22 Jul 2002, Christopher Kings-Lynne wrote:\n\n> Would it be worth creating a list of postgres project like this somewhere on\n> the postgres site? I had no idea this existed...\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of SAKAIDA Masaaki\n> > Sent: Monday, 22 July 2002 11:30 AM\n> > To: pgsql-hackers@postgresql.org\n> > Subject: [HACKERS] pgbash-2.4a.2 released\n> >\n> >\n> >\n> > I'm pleased to announce the release of pgbash-2.4a.2.\n> > http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n> >\n> > ChangeLog:\n> >\n> > 1.Add \"REINDEX\" as a SQL reserved word.\n> >\n> > 2.Fix a bug of the single/double quotaion surrounded by the\n> > double/single quotation.\n> > ex.)\n> > insert into \"test\" values(123,'name\"name');\n> > select code as \"He's code\" from test where name='sanme\"name';\n> >\n> > 3. 
Add the functionality of processing the single quotation\n> > data surrounded by \\'.\n> > ex.)\n> > DATA=\"I can't\"\n> > select * from test where mesg=\\'$DATA\\';\n> >\n> > # A bug of \"REINDEX\" was reported by ISHIDA akio.\n> > # A bug of single/double quotation was reported by Tomoo Nomura.\n> >\n> >\n> > (examples)\n> > DATA1=\"sakaida's\"\n> > DATA2='kobe\"desu'\n> > insert into \"test\" values(111,'sakaida''s','kobe\"desu');\n> > insert into \"test\" values(111,\\'$DATA1\\',\\'$DATA2\\');\n> >\n> > DATA1=\"sakaida''s\"\n> > DATA2='kobe\"d\"esu'\n> > insert into \"test\" values(111,'sakaida''s','kobe\"d\"esu');\n> > insert into \"test\" values(111,\\'$DATA1\\', '$DATA2' );\n> >\n> > select * from test;\n> > select * from test where name=\\'$DATA1\\';\n> > select code as \"code's\" from test where name=\\'$DATA1\\';\n> >\n> > --\n> > SAKAIDA Masaaki\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Mon, 22 Jul 2002 02:06:10 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: pgbash-2.4a.2 released" }, { "msg_contents": "On Mon, 22 Jul 2002, Christopher Kings-Lynne wrote:\n\n> Would it be worth creating a list of postgres project like this somewhere on\n> the postgres site? I had no idea this existed...\n\nIt's been listed forever. 
http://www.us.postgresql.org/interfaces.html\nIt's about the 17th one from the top.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 22 Jul 2002 05:48:03 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: pgbash-2.4a.2 released" } ]
[ { "msg_contents": "Hello!\n\ni've got a question on my problem..\ni'm using contrib/fulltextindex to index my tables to do a fulltext search\nbut i'm using a non-default locale settings on my database.\ni found out that if the locale is default 'C' then fulltextindex tables are\nscanned very fast using btree indices. for sure, i'm using prefix match\n(string ~ '^blabla') which is supposed to scan the table in this way.. so\nit's ok!\nbut if i change to different locale (i.e. sk_SK or cs_CZ) the fti table is\nscanned sequentially and it's very slow..  so, the more words are there in my\nfti table and the more words i'm trying to search at once using table\njoin the slower is the search..\n\nso my question is.. is this a bug or the prefix match is not supported to\nperform a table scan using indices under non-default locale?\nplease, give me a hint..\n\nregards\n\nTomas Lehuta\n\n\n\n", "msg_date": "Mon, 22 Jul 2002 09:37:46 +0200", "msg_from": "\"Tomas Lehuta\" <lharp@aurius.sk>", "msg_from_op": true, "msg_subject": "contrib/fulltextindex" }, { "msg_contents": "It's a known problem with LIKE and non-C locale.\nYou may try contrib/tsearch for full text search. It works with\nlocale and does index search.\n\n\tOleg\nOn Mon, 22 Jul 2002, Tomas Lehuta wrote:\n\n> Hello!\n>\n> i've got a question on my problem..\n> i'm using contrib/fulltextindex to index my tables to do a fulltext search\n> but i'm using a non-default locale settings on my database.\n> i found out that if the locale is default 'C' then fulltextindex tables are\n> scanned very fast using btree indices. for sure, i'm using prefix match\n> (string ~ '^blabla') which is supposed to scan the table in this way.. so\n> it's ok!\n> but if i change to different locale (i.e. sk_SK or cs_CZ) the fti table is\n> scanned sequentially and it's very slow.. 
so, the more words are there in my\n> fti table and the more more words i'm trying to search at once using table\n> join the slower is the search..\n>\n> so my question is.. is this a bug or the prefix match is not supported to\n> perform a table scan using indices under non-default locale?\n> please, give me a hint..\n>\n> regards\n>\n> Tomas Lehuta\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 22 Jul 2002 11:30:09 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: contrib/fulltextindex" } ]
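Oleg's "known problem with LIKE and non-C locale" later gained a workaround in core PostgreSQL (7.4 onward): an index built with the text_pattern_ops operator class serves prefix matches regardless of locale. Table and column names below are illustrative, not the exact contrib/fulltextindex schema:

```sql
-- With a non-C locale a plain btree index cannot serve a prefix match,
-- but an index using text_pattern_ops (PostgreSQL 7.4 and later) can.
-- 'fti', 'string' and 'id' are illustrative names.
CREATE INDEX fti_string_prefix_idx ON fti (string text_pattern_ops);

-- A prefix query like this can then use the index even under sk_SK or cs_CZ:
SELECT id FROM fti WHERE string LIKE 'blabla%';
```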
[ { "msg_contents": "Hi,\n\nIs it possible using pl/pgSQL functions to grab data from another database\nor even another database on a different host.\n\nThanks.\n", "msg_date": "Tue, 23 Jul 2002 11:41:00 +1000 (Tasmania Standard Time)", "msg_from": "\"Dean Grubb\" <dean@atrium-online.com.au>", "msg_from_op": true, "msg_subject": "Access Two Databases" }, { "msg_contents": "Not that i am aware of\n\n\nOn Tue, 23 Jul 2002, Dean Grubb wrote:\n\n> Hi,\n> \n> Is it possible using pl/pgSQL functions to grab data from another database\n> or even another database on a different host.\n> \n> Thanks.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \nDarren Ferguson\n\n", "msg_date": "Mon, 22 Jul 2002 22:04:54 -0400 (EDT)", "msg_from": "Darren Ferguson <darren@crystalballinc.com>", "msg_from_op": false, "msg_subject": "Re: Access Two Databases" }, { "msg_contents": "On Tue, 23 Jul 2002, Dean Grubb wrote:\n\n> Is it possible using pl/pgSQL functions to grab data from another database\n> or even another database on a different host.\n\nThis is where your external-to-PostgreSQL language comes in handy. Open a\nsecond connection to the database, do whatever you are going to do and\nmash the results together outside of PostgreSQL. If for some reason you\nthink the data is related then perhaps you've erred by having more than\none database.\n\nJoshua b. 
Jore ; http://www.greentechnologist.org\n\n", "msg_date": "Mon, 22 Jul 2002 22:31:14 -0500 (CDT)", "msg_from": "Josh Jore <josh@greentechnologist.org>", "msg_from_op": false, "msg_subject": "Re: Access Two Databases" }, { "msg_contents": "Darren Ferguson wrote:\n> Not that i am aware of\n> \n> On Tue, 23 Jul 2002, Dean Grubb wrote:\n>>Hi,\n>>\n>>Is it possible using pl/pgSQL functions to grab data from another database\n>>or even another database on a different host.\n\nYou can with contrib/dblink.\n\nIn 7.2, it is a bit (well, maybe more than a bit) awkward, but it works. \nI hope to have some substantial improvements in usability for the \nupcoming 7.3 release making use of the new table functions feature.\n\nJoe\n\n", "msg_date": "Mon, 22 Jul 2002 20:59:24 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Access Two Databases" }, { "msg_contents": "On Tuesday 23 July 2002 02:59, Joe Conway wrote:\n> Darren Ferguson wrote:\n> > Not that i am aware of\n> >\n> > On Tue, 23 Jul 2002, Dean Grubb wrote:\n> >>Hi,\n> >>\n> >>Is it possible using pl/pgSQL functions to grab data from another\n> >> database or even another database on a different host.\n>\n> You can with contrib/dblink.\n>\n\nHi Joe !\nRemember me ?\nBefore about 3 months I send to You pl/pgSql wrapper functions for libpq. \nWe agreed then, that merging it with dblink would be a good idea.\nMeanwhile i used dblink and those functions and wrote some kind of\nreplication for my app. \nIs there interest for such interface, or should I forget the whole thing ?\n\nregards \n", "msg_date": "Tue, 23 Jul 2002 15:44:42 -0100", "msg_from": "Darko Prenosil <darko.prenosil@finteh.hr>", "msg_from_op": false, "msg_subject": "Re: Access Two Databases" }, { "msg_contents": "Darko Prenosil wrote:\n> Before about 3 months I send to You pl/pgSql wrapper functions for libpq. 
\n> We agreed then, that merging it with dblink would be a good idea.\n> Meanwhile i used dblink and those functions and wrote some kind of\n> replication for my app. \n> Is there interest for such interface, or should I forget the whole thing ?\n\nI still think it might be a good idea -- I just haven't had time to work \non dblink recently. I have a few things on my personal TODO list ahead \nof it yet.\n\nJoe\n\n", "msg_date": "Tue, 23 Jul 2002 16:28:45 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Access Two Databases" }, { "msg_contents": "On Tuesday 23 July 2002 22:28, Joe Conway wrote:\n> Darko Prenosil wrote:\n> > Before about 3 months I send to You pl/pgSql wrapper functions for libpq.\n> > We agreed then, that merging it with dblink would be a good idea.\n> > Meanwhile i used dblink and those functions and wrote some kind of\n> > replication for my app.\n> > Is there interest for such interface, or should I forget the whole thing\n> > ?\n>\n> I still think it might be a good idea -- I just haven't had time to work\n> on dblink recently. I have a few things on my personal TODO list ahead\n> of it yet.\n>\n> Joe\n>\nOK! Let me know when you are ready !\n\nRegards .\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Wed, 24 Jul 2002 09:43:27 -0100", "msg_from": "Darko Prenosil <darko.prenosil@finteh.hr>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Access Two Databases" } ]
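Joe's mention of the new table-functions feature points at the shape contrib/dblink's interface eventually took in 7.3; the connection parameters and remote table below are invented for illustration:

```sql
-- Query a second database as if it were a local table function.
-- 'otherhost', 'otherdb' and 'remote_tab' are made-up example names;
-- the column definition list after AS tells the planner the row shape.
SELECT t.id, t.name
  FROM dblink('host=otherhost dbname=otherdb',
              'SELECT id, name FROM remote_tab')
       AS t(id integer, name text);
```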
[ { "msg_contents": "It seems bootstrap parser(bootparse.y) does not accept partial index\ndefinitions. Is there any reason for this?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 23 Jul 2002 10:58:38 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "partial index on system indexes?" }, { "msg_contents": "Tatsuo Ishii wrote:\n> It seems bootstrap parser(bootparse.y) does not accept partial index\n> definitions. Is there any reason for this?\n\nProbably just because we never needed them. We could add it, or just\ncreate the index later in the initdb script. The latter seems easier.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jul 2002 21:57:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: partial index on system indexes?" }, { "msg_contents": "Tatsuo Ishii wrote:\n> It seems bootstrap parser(bootparse.y) does not accept partial index\n> definitions. Is there any reason for this?\n\nIn private email with Tatsuo, I learned it is for the new loadable\nencoding patch, and he wants to use the index from the syscache. The\nreason for the partial index is because the index itself would not be\nunique, but a partial index would be unique.\n\nBecause the index is part of the syscache, we have to create it as part\nof initdb bootstrap, rather than in the initdb script.\n\nTatsuo mentioned there is a boolean, and he only wants cases where the\nboolean is true, and such values are unique.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Jul 2002 22:07:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: partial index on system indexes?" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> It seems bootstrap parser(bootparse.y) does not accept partial index\n> definitions. Is there any reason for this?\n\nWhy should it? The boot parser need handle only a minimal set of\noperations. Why is there any need to handle partial indexes there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jul 2002 03:18:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: partial index on system indexes? " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In private email with Tatsuo, I learned it is for the new loadable\n> encoding patch, and he wants to use the index from the syscache. The\n> reason for the partial index is because the index itself would not be\n> unique, but a partial index would be unique.\n> Because the index is part of the syscache, we have to create it as part\n> of initdb bootstrap, rather than in the initdb script.\n\nThis sounds like a really bad idea to me. A syscache based on a partial\nindex is almost certainly not going to work.\n\nBefore we invest in a lot of effort making bootstrap, syscache, and who\nknows what else support partial indexes, I want to see a very clear\nexplanation why we must do it. Note I am looking for \"*must* do it\",\nnot \"it makes this other part of the system a little simpler\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jul 2002 03:52:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: partial index on system indexes? " }, { "msg_contents": "> This sounds like a really bad idea to me. 
A syscache based on a partial\n> index is almost certainly not going to work.\n> \n> Before we invest in a lot of effort making bootstrap, syscache, and who\n> knows what else support partial indexes, I want to see a very clear\n> explanation why we must do it. Note I am looking for \"*must* do it\",\n> not \"it makes this other part of the system a little simpler\".\n\nOk, I'm going to look into bootstrap and syscache etc. codes more to\nstudy why \"a syscache based on a partial index is almost certainly not\ngoing to work\" and how hard it would be to fix that when I have spare\ntime.\n\nIn the mean time I'm going to add an unique index to pg_conversion\n(actually it is not so unique one. I will add the oid column to it so\nthat it seems \"unique\") and use SearchSysCacheList(). As far as I know\nthis is the only way to avoid heap scan every time an encoding\nconversion is performed if partial index cannot be used. \n\nOnce I thought of a conversion lookup cache, but it seems impossible\nto implement it since the cache needs to be invalidated when the schema\nsearch path is changed.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 24 Jul 2002 23:49:20 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: partial index on system indexes? " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Once I thought of a conversion lookup cache, but it seems impossible\n> to implement it since the cache needs to be invalidated when the schema\n> search path is changed.\n\nOn the contrary, that seems very easy to do. There is a hook to let you\nget control whenever a syscache inval event is received. Look at the\nway that namespace.c arranges to invalidate its cache of the search\npath OIDs whenever pg_namespace is modified.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jul 2002 11:05:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: partial index on system indexes? " } ]
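The partial-index idea from this thread -- a column set that is unique only among rows where a boolean is true -- can be sketched with an ordinary user table (names hypothetical, not the real pg_conversion layout):

```sql
-- Rows may share (for_encoding, to_encoding) freely, but at most one
-- of them may be flagged as the default conversion.  Table and column
-- names here are hypothetical.
CREATE UNIQUE INDEX conv_default_uniq
    ON conversions (for_encoding, to_encoding)
    WHERE is_default;
```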
[ { "msg_contents": "Is there any way of resetting the stats collector without restarting\nPostgres? I want all the pg_stat_* tables to have zero for everything\nagain...\n\nChris\n\n", "msg_date": "Tue, 23 Jul 2002 10:43:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Stats Collector" }, { "msg_contents": "I don't think there is any way to do it without restarting the server\nand using STATS_RESET_ON_SERVER_START from the postgresql.conf\n\nyou might be able to do it by killing the stats collector process on\nyour machine, but i cant recall if postmaster will automagically fire up\na new collector or if you have to do something more involved. It might\nbe worth trying on a development machine.\n\nRobert Treat\n\n\nOn Mon, 2002-07-22 at 22:43, Christopher Kings-Lynne wrote:\n> Is there any way of resetting the stats collector without restarting\n> Postgres? I want all the pg_stat_* tables to have zero for everything\n> again...\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n\n", "msg_date": "23 Jul 2002 10:03:25 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Stats Collector" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> Is there any way of resetting the stats collector without restarting\n> Postgres? I want all the pg_stat_* tables to have zero for everything\n> again...\n\nHmmm,\n\nthere is a special control message for the collector to do so, and a\nfunction that sends it exists too. But there is absolutely no way to\ncall it =:-O\n\nLooks to me, someone forgot something. 
That would be me and now I\nremember that I originally wanted to add some utility command for that.\n\nWhat you need in the meantime is a little C function that calls \n\nvoid pgstat_reset_counters(void);\n\nI might find the time tomorrow to write one for you if you don't know\nhow.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Tue, 23 Jul 2002 10:05:28 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Stats Collector" }, { "msg_contents": "Robert Treat wrote:\n> \n> I don't think there is any way to do it without restarting the server\n> and using STATS_RESET_ON_SERVER_START from the postgresql.conf\n> \n> you might be able to do it by killing the stats collector process on\n> your machine, but i cant recall if postmaster will automagically fire up\n> a new collector or if you have to do something more involved. It might\n> be worth trying on a development machine.\n\nYou can kill it and postmaster will fire up a new one, which ironically\nreads in the data/global/pgstat.stat file if it exists, which on a busy\nserver gets recreated/overwritten every 500 milliseconds. You must\nremove that file and kill the collector before it gets recreated ... so\nsemicolon is your friend ;-)\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Tue, 23 Jul 2002 10:35:53 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Stats Collector" }, { "msg_contents": "> Looks to me, someone forgot something. 
That would be me and now I\n> remember that I originally wanted to add some utility command for that.\n>\n> What you need in the meantime is a little C function that calls\n>\n> void pgstat_reset_counters(void);\n>\n> I might find the time tomorrow to write one for you if you don't know\n> how.\n\nIs this the kind of thing you mean?\n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n\nextern Datum pg_reset_stats(PG_FUNCTION_ARGS);\n\nPG_FUNCTION_INFO_V1(pg_reset_stats);\n\nDatum\npg_reset_stats(PG_FUNCTION_ARGS)\n{\n void pgstat_reset_counters(void);\n\n PG_RETURN_VOID();\n}\n\nWith this code I get this:\n\ntest=# select pg_reset_stats();\nERROR: Unable to look up type id 0\n\nI'm creating it like this:\n\ncreate or replace function pg_reset_stats() returns opaque as\n '/home/chriskl/local/lib/postgresql/pg_reset_stats.so'\n language 'C';\n\nIs it something to do with the return type being declared wrongly?\nHmm...the manual indicates that opaque functions cannot be called directly -\nso what the heck do I do?\n\nAlso, where would I put this function in the main postgres source and how\nwould I modify initdb?\n\nChris\n\n", "msg_date": "Mon, 29 Jul 2002 11:51:25 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Stats Collector" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Is it something to do with the return type being declared wrongly?\n\nYup. Make it return a useless '1' or 'true' or some such.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Jul 2002 02:18:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Stats Collector " }, { "msg_contents": "OK, now I run it and it does absolutely nothing to the pg_stat_all_tables\nrelation for instance. 
In fact, it seems to do nothing at all - does the\nreset function even work?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Monday, 29 July 2002 2:19 PM\n> To: Christopher Kings-Lynne\n> Cc: Jan Wieck; Hackers\n> Subject: Re: [HACKERS] [GENERAL] Stats Collector\n>\n>\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Is it something to do with the return type being declared wrongly?\n>\n> Yup. Make it return a useless '1' or 'true' or some such.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Mon, 29 Jul 2002 14:34:45 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Stats Collector " }, { "msg_contents": "> OK, now I run it and it does absolutely nothing to the pg_stat_all_tables\n> relation for instance. In fact, it seems to do nothing at all - does the\n> reset function even work?\n\nOK, I'm an idiot, I was calling the funciton like this: void blah(void)\nwhich actually does nothing.\n\nIt all works now and I have just submitted it to -patches as a new contrib,\nbut it probably should make its way into the backend one day.\n\nChris\n\n", "msg_date": "Mon, 29 Jul 2002 15:06:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Stats Collector " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > OK, now I run it and it does absolutely nothing to the pg_stat_all_tables\n> > relation for instance. 
In fact, it seems to do nothing at all - does the\n> reset function even work?\n\nOK, I'm an idiot, I was calling the function like this: void blah(void)\nwhich actually does nothing.\n\nIt all works now and I have just submitted it to -patches as a new contrib,\nbut it probably should make its way into the backend one day.\n\nChris\n\n", "msg_date": "Mon, 29 Jul 2002 15:06:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Stats Collector " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > OK, now I run it and it does absolutely nothing to the pg_stat_all_tables\n> > relation for instance. 
I'm concerned about security.\n\nYea, as long as it is in contrib, only the super-user can install it,\nbut once installed, anyone can run it, I think. \n\nA function seems like the wrong way to go on this. SET has super-user\nprotections we could use to control this but I am not sure what SET\nsyntax to use.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 16:04:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Stats Collector" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> A function seems like the wrong way to go on this. SET has super-user\n> protections we could use to control this but I am not sure what SET\n> syntax to use.\n\nI don't like SET for it --- SET is for setting state that will persist\nover some period of time, not for taking one-shot actions. We could\nperhaps use a function that checks that it's been called by the\nsuperuser.\n\nHowever, the real question is what is the use-case for this feature\nanyway. Why should people want to reset the stats while the system\nis running? If we had a clear example then it might be more apparent\nwhat restrictions to place on it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 16:21:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Stats Collector " }, { "msg_contents": "On Tue, Jul 30, 2002 at 04:21:24PM -0400, Tom Lane wrote:\n> However, the real question is what is the use-case for this feature\n> anyway. Why should people want to reset the stats while the system\n> is running? 
If we had a clear example then it might be more apparent\n> what restrictions to place on it.\n\nWell, you might want to use the statistics as part of a monthly\nreporting cycle, for instance. You could archive the old results,\nand then reset, so that you have information by month.\n\nOr you might have made a number of changes to a database which has\nbeen running for a while, and want to see whether the changes have\nhad the desired effect. (Say, whether some new index has helped\nthings.)\n\nOr you might want to track whether some training you've done for your\nusers has been effective in teaching them how to do certain things,\nwhich has resulted in a reduction of rolled back transactions.\n\nOr you might just want to reduce your overhead. If you want\nstatistics, but you are not allowed to shut down your database, you\nhave to keep the statistics until the next planned service outage. \nMaybe you're generating a lot of data; it'd be nice to keep overhead\nlight on your production machines, so you could reset every week.\n\nThose are some things I can think of off the top of my head. I can\nappreciate the security concern, however. And you could probably\nwork around things in such a way that you could get all of this\nanyway. Still, if it's possible, it'd be nice to have.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 30 Jul 2002 16:43:08 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Stats Collector" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > A function seems like the wrong way to go on this. 
SET has super-user\n> > protections we could use to control this but I am not sure what SET\n> > syntax to use.\n> \n> I don't like SET for it --- SET is for setting state that will persist\n> over some period of time, not for taking one-shot actions. We could\n> perhaps use a function that checks that it's been called by the\n> superuser.\n> \n> However, the real question is what is the use-case for this feature\n> anyway. Why should people want to reset the stats while the system\n> is running? If we had a clear example then it might be more apparent\n> what restrictions to place on it.\n\nYep, I think Andrew explained possible uses. You may want to reset the\ncounters and run a benchmark to look at the results.\n\nShould we have RESET clear the counter, perhaps RESET STATCOLLECTOR?\nI don't think we have other RESET variables that can't be SET, but I\ndon't see a problem with it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 18:16:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Stats Collector" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> I don't like SET for it --- SET is for setting state that will persist\n>> over some period of time, not for taking one-shot actions. We could\n>> perhaps use a function that checks that it's been called by the\n>> superuser.\n\n> Should we have RESET clear the counter, perhaps RESET STATCOLLECTOR?\n> I don't think we have other RESET variables that can't be SET, but I\n> don't see a problem with it.\n\nRESET is just a variant form of SET. 
It's not for one-shot actions\neither (and especially not for one-shot actions against state that's\nnot accessible to SHOW or SET...)\n\nI still like the function-call approach better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 18:24:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Stats Collector " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> I don't like SET for it --- SET is for setting state that will persist\n> >> over some period of time, not for taking one-shot actions. We could\n> >> perhaps use a function that checks that it's been called by the\n> >> superuser.\n> \n> > Should we have RESET clear the counter, perhaps RESET STATCOLLECTOR?\n> > I don't think we have other RESET variables that can't be SET, but I\n> > don't see a problem with it.\n> \n> RESET is just a variant form of SET. It's not for one-shot actions\n> either (and especially not for one-shot actions against state that's\n> not accessible to SHOW or SET...)\n> \n> I still like the function-call approach better.\n\nOK, so you are suggesting a function call, and a check in there to make\nsure it is the superuser. Comments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 18:33:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Stats Collector" }, { "msg_contents": "> Or you might have made a number of changes to a database which has\n> been running for a while, and want to see whether the changes have\n> had the desired effect. 
(Say, whether some new index has helped\n> things.)\n\nWell our stats had gotten up into the millions and hence were useless when\nI made some query changes designed to remove a lot of seqscans. I also made\nsome changes that might have caused indices to no longer be used, and hence\nI want to know if they ever switch from 0 uses to some uses...\n\nIf only the super user can install it - then surely the superuser can GRANT\nusage permissions on it? Otherwise, how do I put in a superuser check? Do\nI just do it ALTER TABLE-style?\n\nChris\n\n", "msg_date": "Wed, 31 Jul 2002 10:32:31 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Stats Collector" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> It all works now and I have just submitted it to -patches as a\n> new contrib,\n> >> but it probably should make its way into the backend one day.\n>\n> > OK, the big question is how do we want to make stats reset visible to\n> > users? The current patch uses a function call. Is that how we want to\n> > do it?\n>\n> Should we make it visible at all? I'm concerned about security.\n\nThe function it's calling in the backend:\n\nvoid\npgstat_reset_counters(void)\n{\n PgStat_MsgResetcounter msg;\n\n if (pgStatSock < 0)\n return;\n\n if (!superuser())\n elog(ERROR, \"Only database superusers can reset statistic\ncounters\");\n\n pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETCOUNTER);\n pgstat_send(&msg, sizeof(msg));\n}\n\nNote it does actually check that you're a superuser...\n\nChris\n\n", "msg_date": "Wed, 31 Jul 2002 15:24:39 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Stats Collector " } ]
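Putting Tom's return-type advice and the missing call together, the wrapper discussed in this thread would come out roughly like this (a hedged sketch, not the contrib code Chris actually submitted; it must be built against the PostgreSQL source tree):

```c
#include "postgres.h"
#include "fmgr.h"

/* backend routine that sends the reset message to the collector */
extern void pgstat_reset_counters(void);

PG_FUNCTION_INFO_V1(pg_reset_stats);

Datum
pg_reset_stats(PG_FUNCTION_ARGS)
{
	/* actually call the routine -- the earlier draft only
	 * re-declared it inside the function body, which does nothing */
	pgstat_reset_counters();

	/* return a concrete value instead of opaque/void,
	 * per Tom's "return a useless '1' or 'true'" suggestion */
	PG_RETURN_BOOL(true);
}
```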
[ { "msg_contents": "Tom wrote: \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > That doesn't quite work, because then no ordinary user can define a cast\n> > from some built-in type to his own type. What I'm thinking about is to\n> > implement the USAGE privilege on types, and then you need to have that to\n> > be allowed to create casts.\n> \n> Still seems too weak.\n\nYes.\n\n> What about requiring ownership of at least one\n> of the types?\n\nI was thinking that too, but, would it be possible to circumvent such \na restriction with a \"type in the middle\" attack ? \nCreate your own type and then\n1. (auto)cast type1 to own type\n2. (auto)cast own type to type2 ?\n\nAndreas\n", "msg_date": "Tue, 23 Jul 2002 09:39:21 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: CREATE CAST code review " }, { "msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> > What about requiring ownership of at least one\n> > of the types?\n>\n> I was thinking that too, but, would it be possible to circumvent such\n> a restriction with a \"type in the middle\" attack ?\n> Create your own type and then\n> 1. (auto)cast type1 to own type\n> 2. (auto)cast own type to type2 ?\n\nBut that doesn't affect casts between type1 and type2. PostgreSQL doesn't\ndo indirect casts.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 27 Jul 2002 20:08:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CREATE CAST code review " } ]
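Peter's closing point -- PostgreSQL resolves casts in a single step and does not chain them -- is what defuses the "type in the middle" worry; a sketch with hypothetical cast functions:

```sql
-- Even if a user owns mytype and creates both of these casts,
-- PostgreSQL does not compose them, so no integer -> text pathway
-- appears.  int_to_mytype and mytype_to_text are hypothetical
-- functions the owner of mytype would have defined.
CREATE CAST (integer AS mytype) WITH FUNCTION int_to_mytype(integer);
CREATE CAST (mytype AS text)    WITH FUNCTION mytype_to_text(mytype);
```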
[ { "msg_contents": "I've finally got around to trying RAMDISK with PostgreSQL. The attached doc\ncontains the test results that I'd like to share with PostgreSQL's users\nand developers groups. \n\nRegards,\nSamuel Sutjiono\n_________________________________________________\n Expand your wireless world with Arkdom PLUS\n http://www.arkdom.com/", "msg_date": "Tue, 23 Jul 2002 10:36:13 -0400", "msg_from": "\"Samuel J. Sutjiono\" <ssutjiono@wc-group.com>", "msg_from_op": true, "msg_subject": "RAMDISK" }, { "msg_contents": "Interesting results. You didn't really offer much in how your system\nwas configured to use the ramdisk. Did you use it to simply store a\ndatabase on it? Was the entire database able to fit into available\nmemory even without the RAMDISK? Did you try only storing indices on\nthe RAMDISK? There are lots of other questions that are unanswered on the\ntopic.\n\nWorth mentioning that it is very possible and in fact, fairly easy to\ndo, for the use of a RAMDISK to significantly hinder the performance of\na system running a database.\n\nGreg\n\n\nOn Tue, 2002-07-23 at 09:36, Samuel J. Sutjiono wrote:\n> I've finally got around to trying RAMDISK with PostgreSQL. The attached doc\n> contains the test results that I'd like to share with PostgreSQL's users\n> and developers groups. \n> \n> Regards,\n> Samuel Sutjiono\n> _________________________________________________\n> Expand your wireless world with Arkdom PLUS\n> http://www.arkdom.com/\n> \n> ----\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster", "msg_date": "23 Jul 2002 09:56:44 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: RAMDISK" } ]
[ { "msg_contents": "Hi,\n\nWe are moving to postgres from Oracle. When we were with Oracle, we were\nusing a total of 160 connections (4 app servers each maintaining a pool of 40\nconnections). After moving to postgres we want to make it higher, i.e. make it\n60 connections for each app server, i.e. a total of 240 connections. How will\npostgres behave with this many connections?\n\nThanks\nYuva\nSr. Java Developer\nwww.ebates.com\n", "msg_date": "Tue, 23 Jul 2002 13:01:42 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "Howmany connections postgres can handle upto?" }, { "msg_contents": "On Tue, Jul 23, 2002 at 01:01:42PM -0700, Yuva Chandolu wrote:\n> Hi,\n> \n> We are moving to postgres from Oracle. When we were with Oracle, we were\n> using a total of 160 connections (4 app servers each maintaining a pool of 40\n> connections). After moving to postgres we want to make it higher, i.e. make it\n> 60 connections for each app server, i.e. a total of 240 connections. How will\n> postgres behave with this many connections?\n\nIt rather depends. For instance, are you running on a 386 with 4\nmegs of memory? I expect that it won't work. If you are running on\nan 8-way Sun box with 16 gig of memory, I can report that you can\nhave 1024 connections without undue pain.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 23 Jul 2002 16:08:51 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Howmany connections postgres can handle upto?" } ]
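For readers attempting the same move, the relevant setting is `max_connections` in postgresql.conf; each connection is a separate backend process, so raising it normally means revisiting `shared_buffers` and the kernel's shared-memory and semaphore limits as well. A quick way to compare the configured ceiling with actual usage from psql (the second query assumes the statistics collector is enabled):

```sql
SHOW max_connections;                    -- the configured ceiling
SELECT count(*) FROM pg_stat_activity;   -- backends currently connected
```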
[ { "msg_contents": "Hello,\n\npgaccess 0.98.8 weekly release 1 is available for download from the\npgaccess web site -\nwww.pgaccess.org\n\nThis version is the net effect of the effort started about April this year\nfor merging three large groups of patches accumulated by Bartus, Boyan and\nChris.\nSince then the pgaccess development group enlarged with the efforts of\neven more people -\nhttp://www.pgaccess.org/?page=12\n\nThe pgaccess 0.98.8 is planned to be released together with PostgreSQL 7.3\nBug reports and ideas are welcome.\nAll best,\n\nIavor\n\n--\nwww.pgaccess.org\n\n\n", "msg_date": "Tue, 23 Jul 2002 22:49:13 +0200 (CEST)", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "pgaccess 0.98.8 - weekly release 1" } ]
[ { "msg_contents": "Hello,\n\nI would like to implement a function similar to the Decode function in\nOracle. I was wondering if it is possible to accept a variable number\nof parameters (array??).\n\nThanks,\nEdwin S. Ramirez\n", "msg_date": "23 Jul 2002 14:42:12 -0700", "msg_from": "ramirez@idconcepts.org (Edwin S. Ramirez)", "msg_from_op": true, "msg_subject": "Oracle Decode Function" }, { "msg_contents": " > I would like to implement a function similar to the Decode function in\n > Oracle.\n\nTake a look at the CASE WHEN ... THEN functionality. For example:\n\nOracle:\nselect decode(col1,'abc',1,'xyz',2,0) from test;\n\nPostgresql:\nselect case when col1 = 'abc' then 1 when col1 = 'xyz' then 2 else 0 end \nfrom test;\n\n > I was wondering if it is possible to accept a variable number\n > of parameters (array??).\n\nIf you're asking about whether a custom function can have vararg \nparameters, the answer appears to depend on the CREATE FUNCTION syntax. \nI've never used them personally, but the PG_FUNCTION_ARGS and \nPG_GETARG_xxx(#) macros (/src/includes/fmgr.h) available for compiled \nfunctions would appear to support variable length argument lists. The \nproblem is that I couldn't pin down a CREATE FUNCTION that provided the \nsame vararg functionality. Hopefully somebody can answer this conclusively.\n\nIf it can't be done using custom functions, it should be implementable \n\"internally\" using the same concepts used to support the IN() function \nso maybe take a look in /src/backend/parser/parse_func.c for a start.\n\n\nEdwin S. Ramirez wrote:\n> Hello,\n> \n> I would like to implement a function similar to the Decode function in\n> Oracle. I was wondering if it is possible to accept a variable number\n> of parameters (array??).\n> \n> Thanks,\n> Edwin S. 
Ramirez\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n", "msg_date": "Thu, 25 Jul 2002 10:01:31 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": false, "msg_subject": "Re: Oracle Decode Function" }, { "msg_contents": "Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> If you're asking about whether a custom function can have vararg \n> parameters, the answer appears to depend on the CREATE FUNCTION\n> syntax. \n\nCan't do it, though you could imagine creating a family of functions\nof the same name and different numbers of parameters. Trying to\nemulate DECODE this way would have a much worse problem: what's the\ndatatype of the parameters? (Or the result?)\n\nUse CASE; it does more than DECODE *and* is ANSI-standard.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Jul 2002 11:40:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oracle Decode Function " }, { "msg_contents": "That would get ugly in a real hurry! Oracle does get around the issue of \nparameter datatypes by having automatic datatype conversions, more or \nless, everything becomes a varchar2. The only real attractants to \nimplementing a DECODE() function is that it's one less thing to convert \nwhen migrating apps from Oracle and, unfortunately, this is also a piece \nof the SQL*Net compatibility that I'm looking into doing!\n\n\nTom Lane wrote:\n> Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> \n>>If you're asking about whether a custom function can have vararg \n>>parameters, the answer appears to depend on the CREATE FUNCTION\n>>syntax. \n> \n> \n> Can't do it, though you could imagine creating a family of functions\n> of the same name and different numbers of parameters. 
Trying to\n> emulate DECODE this way would have a much worse problem: what's the\n> datatype of the parameters? (Or the result?)\n> \n> Use CASE; it does more than DECODE *and* is ANSI-standard.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n", "msg_date": "Thu, 25 Jul 2002 14:56:45 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": false, "msg_subject": "Re: Oracle Decode Function" }, { "msg_contents": " > would be interested to hear a valid reason why you feel the need\n > to use decode().\n\nJust let me start by saying that this is NOT for me (see the original \nemail in thread)! Personally, I have no trouble using CASE. However, if \nI want to create an Oracle-compatibility layer, I have to implement all \nOracle functions both good and bad ... my opinion either way is totally \nirrelevant!\n\n > if you find yourself using the decode statement, you are probably\n > doing something wrong. why have it, do you _need_ it?\n\nThe only place I have found DECODE/CASE to be attractive is in ORDER BY \nclauses (hunts through code for example) ... imagine an LOV based on the \nfollowing:\n\ncreate table some_list (id integer, label varchar(20), position integer);\n\n-- defaults\ninsert into some_list values (-1,'Any Value',0);\ninsert into some_list values (0,'No Value',0);\n-- values\ninsert into some_list values (1,'Apple',2);\ninsert into some_list values (2,'Orange',1);\n\nselect id, label from some_list\norder by decode(id,-1,-999,0,-998,position) asc\n\nOf course this is a highly diluted example so don't over-analyze it but \nthe intent is for the \"default\" entries (IDs -1 and 0) to always appear \nfirst while giving the user the ability to change the label and position \nvalues (but not the id) of any row, including the defaults. 
In this \ncase, shy of restricting the input for position based on individual row \nIDs (imagine this is a JSP app), it makes much more sense to override \nthe position column using the id column and then just use a function \nbased index.\n\n > seems that oracle gives you alot of functions and\n > abilities that allow dba's and programmers to be lazy, instead of\n > having a good db [relational] design (and that is more standards\n > compliant).\n\nOh heck yeah ... of course they do that! It locks you in to their \nplatform and makes migrating apps off of Oracle a more expensive \nproposition. It's a really smart move when you have a huge chunk of the \nmarket and you want to keep it that way. That's why converting from \nPostgreSQL to Oracle is relatively easy while the reverse is ... well \ntough. Oracle used to encourage standards but now the only thing I see \nbeing encouraged is lock in ... that's why I'm here! ;-)\n\nChris Humphries wrote:\n> if you find yourself using the decode statement, you are probably\n> doing something wrong. why have it, do you _need_ it?\n> \n> if you are using it for display strings based on conditions, \n> you shouldnt be using a function to do this. it should be a table,\n> or something in the middle layer. try to keep the frame of mind of\n> letting the db do it's job of just managing data; middle layer for\n> doing something useful with the data and sending to the top layer\n> for presentation or formatted data that is meaningful there. It \n> is the right(tm) way to do things, and will make life alot easier :)\n> \n> would be interested to hear a valid reason why you feel the need\n> to use decode(). seems that oracle gives you alot of functions and\n> abilities that allow dba's and programmers to be lazy, instead of \n> having a good db [relational] design (and that is more standards\n> compliant). \n> \n> though like Tom Lane said, there is case, if you need it. 
\n> good luck!\n> \n> -chris\n> \n> Marc Lavergne writes:\n> > That would get ugly in a real hurry! Oracle does get around the issue of \n> > parameter datatypes by having automatic datatype conversions, more or \n> > less, everything becomes a varchar2. The only real attractants to \n> > implementing a DECODE() function is that it's one less thing to convert \n> > when migrating apps from Oracle and, unfortunately, this is also a piece \n> > of the SQL*Net compatibility that I'm looking into doing!\n> > \n> > \n> > Tom Lane wrote:\n> > > Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> > > \n> > >>If you're asking about whether a custom function can have vararg \n> > >>parameters, the answer appears to depend on the CREATE FUNCTION\n> > >>syntax. \n> > > \n> > > \n> > > Can't do it, though you could imagine creating a family of functions\n> > > of the same name and different numbers of parameters. Trying to\n> > > emulate DECODE this way would have a much worse problem: what's the\n> > > datatype of the parameters? (Or the result?)\n> > > \n> > > Use CASE; it does more than DECODE *and* is ANSI-standard.\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > > \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n", "msg_date": "Thu, 25 Jul 2002 16:09:04 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": false, "msg_subject": "Re: Oracle Decode Function" }, { "msg_contents": "if you find yourself using the decode statement, you are probably\ndoing something wrong. why have it, do you _need_ it?\n\nif you are using it for display strings based on conditions, \nyou shouldnt be using a function to do this. 
it should be a table,\nor something in the middle layer. try to keep the frame of mind of\nletting the db do it's job of just managing data; middle layer for\ndoing something useful with the data and sending to the top layer\nfor presentation or formatted data that is meaningful there. It \nis the right(tm) way to do things, and will make life alot easier :)\n\nwould be interested to hear a valid reason why you feel the need\nto use decode(). seems that oracle gives you alot of functions and\nabilities that allow dba's and programmers to be lazy, instead of \nhaving a good db [relational] design (and that is more standards\ncompliant). \n\nthough like Tom Lane said, there is case, if you need it. \ngood luck!\n\n-chris\n\nMarc Lavergne writes:\n > That would get ugly in a real hurry! Oracle does get around the issue of \n > parameter datatypes by having automatic datatype conversions, more or \n > less, everything becomes a varchar2. The only real attractants to \n > implementing a DECODE() function is that it's one less thing to convert \n > when migrating apps from Oracle and, unfortunately, this is also a piece \n > of the SQL*Net compatibility that I'm looking into doing!\n > \n > \n > Tom Lane wrote:\n > > Marc Lavergne <mlavergne-pub@richlava.com> writes:\n > > \n > >>If you're asking about whether a custom function can have vararg \n > >>parameters, the answer appears to depend on the CREATE FUNCTION\n > >>syntax. \n > > \n > > \n > > Can't do it, though you could imagine creating a family of functions\n > > of the same name and different numbers of parameters. Trying to\n > > emulate DECODE this way would have a much worse problem: what's the\n > > datatype of the parameters? 
(Or the result?)\n > > \n > > Use CASE; it does more than DECODE *and* is ANSI-standard.\n > > \n > > \t\t\tregards, tom lane\n > > \n > > ---------------------------(end of broadcast)---------------------------\n > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n > > \n > \n > \n > \n > ---------------------------(end of broadcast)---------------------------\n > TIP 5: Have you checked our extensive FAQ?\n > \n > http://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Thu, 25 Jul 2002 15:16:53 -0500", "msg_from": "Chris Humphries <chumphries@devis.com>", "msg_from_op": false, "msg_subject": "Re: Oracle Decode Function" }, { "msg_contents": "> If you're asking about whether a custom function can have vararg \n> parameters, the answer appears to depend on the CREATE FUNCTION syntax. \n> I've never used them personally, but the PG_FUNCTION_ARGS and \n> PG_GETARG_xxx(#) macros (/src/includes/fmgr.h) available for compiled \n> functions would appear to support variable length argument lists. The \n> problem is that I couldn't pin down a CREATE FUNCTION that provided the \n> same vararg functionality. Hopefully somebody can answer this \n> conclusively.\n\ncontrib/fulltextindex/fti.c uses variable numbers of arguments...\n\nChris\n\n", "msg_date": "Fri, 26 Jul 2002 10:30:43 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Oracle Decode Function" }, { "msg_contents": " > contrib/fulltextindex/fti.c uses variable numbers of arguments...\n\nI see the code, but maybe I don't SEE the code. I'm only on my second \ncup of coffee so I may be missing something but I am not betting any \nmoney in it :) Fulltextindex appears to work because it's called within \na trigger but I don't think you can get the parser not to complain about \narguments when your function is not called internally by the trigger \nmanager. 
Here's my fat-free proof of concept:\n\n-- -----------------------------------------------\n-- /tmp/varargs.c\n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n\nPG_FUNCTION_INFO_V1(varargs);\n\nDatum varargs(PG_FUNCTION_ARGS)\n{\n int32 v_0 = PG_GETARG_INT32(0);\n int32 v_1 = PG_GETARG_INT32(1);\n\n PG_RETURN_INT32(v_0 + v_1);\n}\n\n-- -----------------------------------------------\n\ngcc -Wall -L. -D_REENTRANT -fPIC -shared \n-I/home/postgre/postgresql-7.2/src/include -o /tmp/varargs.so /tmp/varargs.c\n\n-- -----------------------------------------------\n-- verify it works with arg defs\n\ncreate function varargs(int4, int4) returns int4 as\n '/tmp/varargs.so'\n language 'C';\n\n-- -----------------------------------------------\n\nselect varargs(1,2);\n\n varargs\n---------\n 3\n(1 row)\n\n-- -----------------------------------------------\n-- verify the failure without arg defs\n\ndrop function varargs(int4, int4);\ncreate function varargs() returns int4 as\n '/tmp/varargs.so'\n language 'C';\n\n-- -----------------------------------------------\n\nselect varargs(1,2);\n\nERROR: Function 'varargs(int4, int4)' does not exist\n Unable to identify a function that satisfies the given argument \ntypes\n You may need to add explicit typecasts\n\n-- -----------------------------------------------\n\n\nChristopher Kings-Lynne wrote:\n>>If you're asking about whether a custom function can have vararg \n>>parameters, the answer appears to depend on the CREATE FUNCTION syntax. \n>>I've never used them personally, but the PG_FUNCTION_ARGS and \n>>PG_GETARG_xxx(#) macros (/src/includes/fmgr.h) available for compiled \n>>functions would appear to support variable length argument lists. The \n>>problem is that I couldn't pin down a CREATE FUNCTION that provided the \n>>same vararg functionality. 
Hopefully somebody can answer this \n>>conclusively.\n> \n> \n> contrib/fulltextindex/fti.c uses variable numbers of arguments...\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n", "msg_date": "Fri, 26 Jul 2002 11:11:44 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": false, "msg_subject": "Re: Oracle Decode Function" }, { "msg_contents": "Marc Lavergne <mlavergne-pub@richlava.com> writes:\n>>> contrib/fulltextindex/fti.c uses variable numbers of arguments...\n> I see the code, but maybe I don't SEE the code. I'm only on my second \n> cup of coffee so I may be missing something but I am not betting any \n> money in it :) Fulltextindex appears to work because it's called within \n> a trigger but I don't think you can get the parser not to complain about \n> arguments when your function is not called internally by the trigger \n> manager.\n\nRight, fti.c is using a variable number of *trigger* arguments, which\nis a whole different can of worms.\n\nWhat you can do, if you are so inclined, is to rely on function\noverloading to make several pg_proc entries of the same name and\ndifferent numbers of arguments that all point at the same underlying\nC function. Then the C function would have to check how many\narguments it was actually passed. Slightly ugly, but doable.\n\nThere is some stuff in fmgr.h that anticipates a future feature of\nreal varargs function declarations ... but we don't support it yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jul 2002 11:43:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oracle Decode Function " } ]
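Pulling the thread's suggestions together: Tom's overloaded-family idea does not require any C code for simple cases, since plain SQL functions wrapping CASE will do. This is a sketch only: the name `my_decode` is invented here (partly to avoid colliding with the built-in `decode(bytea)` encoding function), everything is typed as text where Oracle's DECODE is far looser about types, and note one semantic gap: Oracle's DECODE treats two NULLs as equal, while `CASE WHEN $1 = $2` does not.

```sql
-- DECODE(expr, search, result, default)
CREATE FUNCTION my_decode(text, text, text, text) RETURNS text AS
  'SELECT CASE WHEN $1 = $2 THEN $3 ELSE $4 END'
  LANGUAGE SQL;

-- DECODE(expr, search1, result1, search2, result2, default)
CREATE FUNCTION my_decode(text, text, text, text, text, text) RETURNS text AS
  'SELECT CASE WHEN $1 = $2 THEN $3 WHEN $1 = $4 THEN $5 ELSE $6 END'
  LANGUAGE SQL;

-- Mirrors the earlier example decode(col1,'abc',1,'xyz',2,0):
SELECT my_decode(col1, 'abc', '1', 'xyz', '2', '0') FROM test;
```

Each additional argument count needs one more entry, which is exactly the "family of functions of the same name and different numbers of parameters" Tom describes above.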
[ { "msg_contents": "I have a need for relation synonyms in PostgreSQL. I don't see it in \n7.2.1 but the catalog seems to be able to support it more or less.\n\nHere's what I intend to do:\n\n1) Create a duplicate record in pg_class for the base table information \nbut with the relname set to the synonym name.\n\n2) Duplicate the attribute information in pg_attribute for the base \ntable but with the attrelid set to the synonym oid.\n\n(see test SQL below)\n\nIs there anything fundamentally wrong with this approach? In particular, \ncould this conceivably break anything? I do understand that it's not a \nperfect approach since the attributes are not dynamic with respect to any \nchanges made to the base table. However, it does appear to provide a \nsuperior solution to using a view with a full set of rules. That said, \nis there a safe way of creating a \"true\" duplicate record in pg_class \n(including the oid) so that a \"true\" synonym could be created?\n\n\nHere's the testing I did:\n\ninsert into pg_class\nselect 'syn_test', reltype, relowner, relam, relfilenode, relpages, \nreltuples, reltoastrelid, reltoastidxid,\nrelhasindex, relisshared, relkind, relnatts, relchecks, reltriggers, \nrelukeys, relfkeys,\nrelrefs, relhasoids, relhaspkey, relhasrules, relhassubclass, relacl\nfrom pg_class where lower(relname) = lower('tbl_test')\n;\n\ninsert into pg_attribute\nselect c2.oid, attname, atttypid, attstattarget, attlen, attnum, \nattndims, attcacheoff, atttypmod, attbyval,\nattstorage, attisset, attalign, attnotnull, atthasdef\nfrom pg_class c1, pg_class c2, pg_attribute a1\nwhere attrelid = c1.oid\nand lower(c1.relname) = lower('tbl_test')\nand lower(c2.relname) = lower('syn_test')\n;\n\nselect * from tbl_test; (no problems)\nselect * from syn_test; (no problems)\n\ndelete from pg_attribute\nwhere attrelid = (select oid from pg_class where lower(relname) = \nlower('syn_test'))\n;\n\ndelete from pg_class\nwhere lower(relname) = 
lower('syn_test')\n;\n\nThanks!\n\nMarc L.\n\n", "msg_date": "Wed, 24 Jul 2002 02:22:39 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "CREATE SYNONYM suggestions" }, { "msg_contents": "On Wed, 2002-07-24 at 02:22, Marc Lavergne wrote:\n> I have a need for relation synonyms in PostgreSQL. I don't see it in \n> 7.2.1 but the catalog seems to be able to support it more or less.\n> \n> Here's what I intend to do:\n> \n> 1) Create a duplicate record in pg_class for the base table information \n> but with the relname set to the synonym name.\n\nThis will eventually cause a problem when the file oid changes and the\nold one gets removed. Cluster is one of those commands that will do\nthat.\n\nOther than that, any table changes won't be propagated -- but you\nalready mentioned that.\n\n", "msg_date": "24 Jul 2002 07:23:36 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: CREATE SYNONYM suggestions" }, { "msg_contents": "Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> I have a need for relation synonyms in PostgreSQL. I don't see it in \n> 7.2.1 but the catalog seems to be able to support it more or less.\n\n> Here's what I intend to do:\n\n> 1) Create a duplicate record in pg_class for the base table information \n> but with the relname set to the synonym name.\n\n> 2) Duplicate the attribute information in pg_attribute for the base \n> table but with the attrelid set to the synonym oid.\n\n> Is there anything fundamentally wrong with this approach?\n\nYES. You just broke relation locking (a lock by OID will only lock\none access path to the table). 
Any sort of ALTER seems quite\nproblematical as well; how will it know to update both sets of catalog\nentries?\n\nA view seems like a better idea, especially since you can do it without\nany backend changes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jul 2002 11:43:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE SYNONYM suggestions " }, { "msg_contents": "I thought that it might involve more than met the eye. I'm resisting the \n\"view\" approach since, like my bad kludge, it locks down the table \ndefinition and as a result doesn't provide a very effective synonym \nmechanism.\n\nI'm looking into the commands/view.c as a basis for introducing the \nconcept of synonyms. Based on what I see, it looks like implementing it \nshould be too terrible. Sadly, it looks a lot like this would require \nintroducing a new relation type.\n\nI'll have to investigate and possibly submit the patch(es) later. The \nquestion is, since CREATE SYNONYM appears to be a SQL extension, is this \nsomething the group would want to incorporate?\n\nTom Lane wrote:\n> Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> \n>>I have a need for relation synonyms in PostgreSQL. I don't see it in \n>>7.2.1 but the catalog seems to be able to support it more or less.\n> \n> \n>>Here's what I intend to do:\n> \n> \n>>1) Create a duplicate record in pg_class for the base table information \n>>but with the relname set to the synonym name.\n> \n> \n>>2) Duplicate the attribute information in pg_attribute for the base \n>>table but with the attrelid set to the synonym oid.\n> \n> \n>>Is there anything fundamentally wrong with this approach?\n> \n> \n> YES. You just broke relation locking (a lock by OID will only lock\n> one access path to the table). 
Any sort of ALTER seems quite\n> problematical as well; how will it know to update both sets of catalog\n> entries?\n> \n> A view seems like a better idea, especially since you can do it without\n> any backend changes.\n> \n> \t\t\tregards, tom lane\n> \n\n\n", "msg_date": "Wed, 24 Jul 2002 17:27:25 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "Re: CREATE SYNONYM suggestions" }, { "msg_contents": "Marc Lavergne writes:\n\n> The question is, since CREATE SYNONYM appears to be a SQL extension, is\n> this something the group would want to incorporate?\n\nWell, you could start by explaining what exactly this concept is and what\nit would be useful for. I imagine that it is mostly equivalent to\nsymlinks. If so, I can see a use for it, but it would probably have to be\na separate type of object altogether, so you could also create\nsynonyms/symlinks to functions, types, etc.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 25 Jul 2002 22:54:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CREATE SYNONYM suggestions" }, { "msg_contents": "On Fri, 2002-07-26 at 02:54, Marc Lavergne wrote:\n> Like you said, it's really just a symlink for db objects. If memory \n> serves me right, synonyms can only refer to tables and views in Oracle. \n> The most common use is for simplifying access to objects outside your \n> schema (eg. create synonym TABLEX for JOHN.TABLEX) or for simplifying \n> access to objects across database links (eg. 
create synonym TABLEX for \nTABLEX@DBY).\n\nFor quick answers on SYNONYM use in DB2 see\n\nhttp://members.aol.com/cronid/db2.htm \n\n(search for synonym)\n\n\nI found no SYNONYM in SQL99 but for a similar construct ALIAS, there is\na reserved word in SQL99, but I could find no definition using it.\n\n----------------\nHannu\n\n", "msg_date": "26 Jul 2002 02:16:10 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: CREATE SYNONYM suggestions" }, { "msg_contents": "Like you said, it's really just a symlink for db objects. If memory \nserves me right, synonyms can only refer to tables and views in Oracle. \nThe most common use is for simplifying access to objects outside your \nschema (eg. create synonym TABLEX for JOHN.TABLEX) or for simplifying \naccess to objects across database links (eg. create synonym TABLEX for \nTABLEX@DBY).\n\nQuick note: I know that PostgreSQL doesn't have the same concept of \nschemas, this is just an example.\n\nThere are two types of synonyms, private and public. Private synonyms \nonly exist within the current user's schema while public synonyms apply \nsystem wide to all users.\n\n From a PostgreSQL perspective, there is little *current* value that \nsynonyms bring to the table, save for reducing an object name like \nTHIS_IS_WAY_TOO_LONG to TOO_LONG. That said, it's still a very commonly \nused construct in other commercial DBMSs.\n\nI guess the place to implement this would be somewhere shortly after the \nparser has done its thing, probably as part of the OID resolution. I \nshould make this my standard disclaimer ;-) but again ... 
my goal with \nall this Oracle compatibility stuff is the SQL*Net compatibility listed \nin the TODO.\n\n\nPeter Eisentraut wrote:\n> Marc Lavergne writes:\n> \n> \n>>The question is, since CREATE SYNONYM appears to be a SQL extension, is\n>>this something the group would want to incorporate?\n> \n> \n> Well, you could start by explaining what exactly this concept is and what\n> it would be useful for. I imagine that it is mostly equivalent to\n> symlinks. If so, I can see a use for it, but it would probably have to be\n> a separate type of object altogether, so you could also create\n> synonyms/symlinks to functions, types, etc.\n> \n\n\n", "msg_date": "Thu, 25 Jul 2002 17:54:22 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "Re: CREATE SYNONYM suggestions" }, { "msg_contents": "On Fri, 26 Jul 2002 07:55:47 +1000, Marc Lavergne wrote:\n\n> Like you said, it's really just a symlink for db objects. If memory serves me\n> right, synonyms can only refer to tables and views in Oracle. \n\nIn oracle (8i) you can create synonyms for tables, views, sequences, procedures,\nfunctions, packages, materialised views, java class objects and other synonyms.\n\n\n", "msg_date": "Fri, 26 Jul 2002 13:21:50 +1000", "msg_from": "Cameron Hutchison <camh+un@xdna.net>", "msg_from_op": false, "msg_subject": "Re: CREATE SYNONYM suggestions" } ]
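Pending a real CREATE SYNONYM, the workable stand-in in PostgreSQL remains a view, with the caveat Marc raised above that the column list is frozen at definition time. A minimal sketch (`john.tablex` is a made-up name, and the schema-qualified reference assumes the schema support planned for 7.3):

```sql
CREATE VIEW tablex AS SELECT * FROM john.tablex;
-- SELECTs now behave like a private synonym for john.tablex.
-- INSERT/UPDATE/DELETE additionally need rewrite rules on the view,
-- and a later ALTER of john.tablex is not reflected until the view
-- is dropped and recreated.
```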
[ { "msg_contents": "As per earlier discussion, I'm working on the hot backup issues as part\nof the PITR support. While I was looking at the buffer manager and the\nrelcache/MyDb issues to figure out the best way to work this, it\noccurred to me that PITR will introduce a big problem with the way we\nhandle local relations.\n\nThe basic problem is that local relations (rd_myxactonly == true) are\nnot part of a checkpoint, so there is no way to get a lower bound on the\nstarting LSN needed to recover a local relation. In the past this did\nnot matter, because either the local file would be (effectively)\ndiscarded during recovery because it had not yet become visible, or the\nfile would be flushed before the transaction creating it made it\nvisible. Now this is a problem.\n\nSo I need a decision from the core team on what to do about the local\nbuffer manager. My preference would be to forget about the local buffer\nmanager entirely, or if not that then to allow it only for _true_\ntemporary data. The only alternative I can devise is to create some way\nfor all other backends to participate in a checkpoint, perhaps using a\nsignal. I'm not sure this can be done safely. \n\nAnyway, I'm glad the tuplesort stuff doesn't try to use relation files\n:-)\n\nCan the core team let me know if this is acceptable, and whether I\nshould move ahead with changes to the buffer manager (and some other\nstuff) needed to avoid special treatment of rd_myxactonly relations?\n\nAlso to Richard: have you guys at multera dealt with this issue already?\nIs there some way around this that I'm missing?\n\n\nRegards,\n\n John Nield\n\n\n\n\nJust as an example of this problem, imagine the following sequence:\n\n1) Transaction TX1 creates a local relation LR1 which will eventually\nbecome a globally visible table. Tuples are inserted into the local\nrelation, and logged to the WAL file. Some tuples remain in the local\nbuffer cache and are not yet written out, although they are logged. 
TX1\nis still in progress.\n\n2) Backup starts, and checkpoint is called to get a minimum starting LSN\n(MINLSN) for the backed-up files. Only the global buffers are flushed.\n\n3) Backup process copies LR1 into the backup directory. (postulate some\nway of coordinating with the local buffer manager, a problem I have not\nsolved).\n\n4) TX1 commits and flushes its local buffers. A dirty buffer exists\nwhose LSN is before MINLSN. LR1 becomes globally visible.\n\n5) Backup finishes copying all the files, including the local relations,\nand then flushes the log. The log files between MINLSN and the current\nLSN are copied to the backup directory, and backup is done.\n\n6) Sometime later, a system administrator restores the backup and plays\nthe logs forward starting at MINLSN. LR1 will be corrupt, because some\nof the log entries required for its restoration will be before MINLSN.\nThis corruption will not be detected until something goes wrong.\n\nBTW: The problem doesn't only happen with backup! It occurs at every\ncheckpoint as well, I just missed it until I started working on the hot\nbackup issue.\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "24 Jul 2002 02:43:15 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "PITR, checkpoint, and local relations" }, { "msg_contents": "\nJ.R needs comments on this. PITR has problems because local relations\naren't logged to WAL. Suggestions?\n\n---------------------------------------------------------------------------\n\nJ. R. Nield wrote:\n> As per earlier discussion, I'm working on the hot backup issues as part\n> of the PITR support. 
While I was looking at the buffer manager and the\n> relcache/MyDb issues to figure out the best way to work this, it\n> occurred to me that PITR will introduce a big problem with the way we\n> handle local relations.\n> \n> The basic problem is that local relations (rd_myxactonly == true) are\n> not part of a checkpoint, so there is no way to get a lower bound on the\n> starting LSN needed to recover a local relation. In the past this did\n> not matter, because either the local file would be (effectively)\n> discarded during recovery because it had not yet become visible, or the\n> file would be flushed before the transaction creating it made it\n> visible. Now this is a problem.\n> \n> So I need a decision from the core team on what to do about the local\n> buffer manager. My preference would be to forget about the local buffer\n> manager entirely, or if not that then to allow it only for _true_\n> temporary data. The only alternative I can devise is to create some way\n> for all other backends to participate in a checkpoint, perhaps using a\n> signal. I'm not sure this can be done safely. \n> \n> Anyway, I'm glad the tuplesort stuff doesn't try to use relation files\n> :-)\n> \n> Can the core team let me know if this is acceptable, and whether I\n> should move ahead with changes to the buffer manager (and some other\n> stuff) needed to avoid special treatment of rd_myxactonly relations?\n> \n> Also to Richard: have you guys at multera dealt with this issue already?\n> Is there some way around this that I'm missing?\n> \n> \n> Regards,\n> \n> John Nield\n> \n> \n> \n> \n> Just as an example of this problem, imagine the following sequence:\n> \n> 1) Transaction TX1 creates a local relation LR1 which will eventually\n> become a globally visible table. Tuples are inserted into the local\n> relation, and logged to the WAL file. Some tuples remain in the local\n> buffer cache and are not yet written out, although they are logged. 
TX1\n> is still in progress.\n> \n> 2) Backup starts, and checkpoint is called to get a minimum starting LSN\n> (MINLSN) for the backed-up files. Only the global buffers are flushed.\n> \n> 3) Backup process copies LR1 into the backup directory. (postulate some\n> way of coordinating with the local buffer manager, a problem I have not\n> solved).\n> \n> 4) TX1 commits and flushes its local buffers. A dirty buffer exists\n> whose LSN is before MINLSN. LR1 becomes globally visible.\n> \n> 5) Backup finishes copying all the files, including the local relations,\n> and then flushes the log. The log files between MINLSN and the current\n> LSN are copied to the backup directory, and backup is done.\n> \n> 6) Sometime later, a system administrator restores the backup and plays\n> the logs forward starting at MINLSN. LR1 will be corrupt, because some\n> of the log entries required for its restoration will be before MINLSN.\n> This corruption will not be detected until something goes wrong.\n> \n> BTW: The problem doesn't only happen with backup! It occurs at every\n> checkpoint as well, I just missed it until I started working on the hot\n> backup issue.\n> \n> -- \n> J. R. Nield\n> jrnield@usol.com\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Aug 2002 17:14:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "On Thu, 2002-08-01 at 17:14, Bruce Momjian wrote:\n> \n> J.R needs comments on this. 
PITR has problems because local relations\n> aren't logged to WAL. Suggestions?\n> \nI'm sorry if it wasn't clear. The issue is not that local relations\naren't logged to WAL, they are. The issue is that you can't checkpoint\nthem. That means if you need a lower bound on the LSN to recover from,\nthen you either need to wait for all transactions using them to commit\nand flush their local buffers, or there needs to be an async way to tell\nthem all to flush.\n\nI am working on a way to do this with a signal, using holdoffs around\ncalls into the storage-manager and VFS layers to prevent re-entrant\ncalls. The local buffer manager is simple enough that it should be\npossible to flush them from within a signal handler at most times, but\nthe VFS and storage manager are not safe to re-enter from a handler.\n\nDoes this sound like a good idea?\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "01 Aug 2002 17:37:38 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n> I am working on a way to do this with a signal, using holdoffs around\n> calls into the storage-manager and VFS layers to prevent re-entrant\n> calls. The local buffer manager is simple enough that it should be\n> possible to flush them from within a signal handler at most times, but\n> the VFS and storage manager are not safe to re-enter from a handler.\n\n> Does this sound like a good idea?\n\nNo. What happened to \"simple\"?\n\nBefore I'd accept anything like that, I'd rip out the local buffer\nmanager and just do everything in the shared manager. I've never\nseen any proof that the local manager buys any noticeable performance\ngain anyway ... 
how many people really do anything much with a table\nduring its first transaction of existence?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Aug 2002 00:49:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "Ok. This is what I wanted to hear, but I had assumed someone decided to\nput it in for a reason, and I wasn't going to submit a patch to pull-out\nthe local buffer manager without clearing it first.\n\nThe main area where it seems to get heavy use is during index builds,\nand for 'CREATE TABLE AS SELECT...'.\n\nSo I will remove the local buffer manager as part of the PITR patch,\nunless there is further objection.\n\nOn Fri, 2002-08-02 at 00:49, Tom Lane wrote:\n> \"J. R. Nield\" <jrnield@usol.com> writes:\n> > I am working on a way to do this with a signal, using holdoffs around\n> > calls into the storage-manager and VFS layers to prevent re-entrant\n> > calls. The local buffer manager is simple enough that it should be\n> > possible to flush them from within a signal handler at most times, but\n> > the VFS and storage manager are not safe to re-enter from a handler.\n> \n> > Does this sound like a good idea?\n> \n> No. What happened to \"simple\"?\n> \n> Before I'd accept anything like that, I'd rip out the local buffer\n> manager and just do everything in the shared manager. I've never\n> seen any proof that the local manager buys any noticeable performance\n> gain anyway ... how many people really do anything much with a table\n> during its first transaction of existence?\n> \n> \t\t\tregards, tom lane\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "02 Aug 2002 09:48:40 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n> Ok. 
This is what I wanted to hear, but I had assumed someone decided to\n> put it in for a reason, and I wasn't going to submit a patch to pull-out\n> the local buffer manager without clearing it first.\n\n> The main area where it seems to get heavy use is during index builds,\n\nYeah. I do not think it really saves any I/O: unless you abort your\nindex build, the data is eventually going to end up on disk anyway.\nWhat it saves is contention for shared buffers (the overhead of\nacquiring BufMgrLock, for example).\n\nJust out of curiosity, though, what does it matter? On re-reading your\nmessage I think you are dealing with a non problem, or at least the\nwrong problem. Local relations do not need to be checkpointed, because\nby definition they were created by a transaction that hasn't committed\nyet. They must be, and are, checkpointed to disk before the transaction\ncommits; but up till that time, if you have a crash then the entire\nrelation should just go away.\n\nThat mechanism is there already --- perhaps it needs a few tweaks for\nPITR but I do not see any need for cross-backend flush commands for\nlocal relations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Aug 2002 10:01:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "On Fri, 2002-08-02 at 10:01, Tom Lane wrote:\n> \n> Just out of curiosity, though, what does it matter? On re-reading your\n> message I think you are dealing with a non problem, or at least the\n> wrong problem. Local relations do not need to be checkpointed, because\n> by definition they were created by a transaction that hasn't committed\n> yet. 
They must be, and are, checkpointed to disk before the transaction\n> commits; but up till that time, if you have a crash then the entire\n> relation should just go away.\n\nWhat happens when we have a local file that is created before the\nbackup, and it becomes global during the backup?\n\nIn order to copy this file, I either need:\n\n1) A copy of all its blocks at the time backup started (or later), plus\nall log records between then and the end of the backup.\n\nOR\n\n2) All the log records from the time the local file was created until\nthe end of the backup.\n\nIn the case of an idle uncommitted transaction that suddenly commits\nduring backup, case 2 might be very far back in the log file. In fact,\nthe log file might be archived to tape by then.\n\nSo I must do case 1, and checkpoint the local relations.\n\n\nThis brings up the question: why do I need to bother backing up files\nthat were local before the backup started, but became global during the\nbackup.\n\nWe already know that for the backup to be consistent after we restore\nit, we must play the logs forward to the completion of the backup to\nrepair our \"fuzzy copies\" of the database files. Since the transaction\nthat makes the local-file into a global one has committed during our\nbackup, its log entries will be played forward as well.\n\nWhat would happen if a transaction with a local relation commits during\nbackup, and there are log entries inserting the catalog tuples into\npg_class. Should I not apply those on restore? How do I know?\n\n> \n> That mechanism is there already --- perhaps it needs a few tweaks for\n> PITR but I do not see any need for cross-backend flush commands for\n> local relations.\n> \n\nThis problem is subtle, and I'm maybe having difficulty explaining it\nproperly. Do you understand the issue I'm raising? Have I made some kind\nof blunder, so that this is really not a problem? \n\n-- \nJ. R. 
Nield\njrnield@usol.com\n\n\n\n", "msg_date": "02 Aug 2002 12:26:52 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "pg_copy does not handle \"local relations\" as you would suspect. To find the\ntables and indexes to back up, the backend processing the \"ALTER SYSTEM\nBACKUP\" statement reads the pg_class table. Any tables in the process of\ncoming into existence are, of course, not visible. If somehow they were, then\nthe backup would back up their contents. Any changes still in private memory\nwould be captured during crash recovery on the copy of the database. So the\nquestion is: is it possible to read the names of the \"local relations\" from\nthe pg_class table even though their creation has not yet been committed?\n-regards\nricht\n\n> -----Original Message-----\n> From: J. R. Nield [mailto:jrnield@usol.com]\n> Sent: Friday, August 02, 2002 12:27 PM\n> To: Tom Lane\n> Cc: Bruce Momjian; Richard Tucker; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> On Fri, 2002-08-02 at 10:01, Tom Lane wrote:\n> >\n> > Just out of curiosity, though, what does it matter? On re-reading your\n> > message I think you are dealing with a non problem, or at least the\n> > wrong problem. Local relations do not need to be checkpointed, because\n> > by definition they were created by a transaction that hasn't committed\n> > yet. 
They must be, and are, checkpointed to disk before the transaction\n> > commits; but up till that time, if you have a crash then the entire\n> > relation should just go away.\n>\n> What happens when we have a local file that is created before the\n> backup, and it becomes global during the backup?\n>\n> In order to copy this file, I either need:\n>\n> 1) A copy of all its blocks at the time backup started (or later), plus\n> all log records between then and the end of the backup.\n>\n> OR\n>\n> 2) All the log records from the time the local file was created until\n> the end of the backup.\n>\n> In the case of an idle uncommitted transaction that suddenly commits\n> during backup, case 2 might be very far back in the log file. In fact,\n> the log file might be archived to tape by then.\n>\n> So I must do case 1, and checkpoint the local relations.\n>\n>\n> This brings up the question: why do I need to bother backing up files\n> that were local before the backup started, but became global during the\n> backup.\n>\n> We already know that for the backup to be consistent after we restore\n> it, we must play the logs forward to the completion of the backup to\n> repair our \"fuzzy copies\" of the database files. Since the transaction\n> that makes the local-file into a global one has committed during our\n> backup, its log entries will be played forward as well.\n>\n> What would happen if a transaction with a local relation commits during\n> backup, and there are log entries inserting the catalog tuples into\n> pg_class. Should I not apply those on restore? How do I know?\n>\n> >\n> > That mechanism is there already --- perhaps it needs a few tweaks for\n> > PITR but I do not see any need for cross-backend flush commands for\n> > local relations.\n> >\n>\n> This problem is subtle, and I'm maybe having difficulty explaining it\n> properly. Do you understand the issue I'm raising? Have I made some kind\n> of blunder, so that this is really not a problem?\n>\n> --\n> J. R. 
Nield\n> jrnield@usol.com\n>\n>\n>\n>\n\n", "msg_date": "Fri, 02 Aug 2002 13:50:19 -0400", "msg_from": "Richard Tucker <richt@multera.com>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n> What would happen if a transaction with a local relation commits during\n> backup, and there are log entries inserting the catalog tuples into\n> pg_class. Should I not apply those on restore? How do I know?\n\nThis is certainly a non-problem. You see a WAL log entry, you apply it.\nWhether the transaction actually commits later is not your concern (at\nleast not at that point).\n\n> This problem is subtle, and I'm maybe having difficulty explaining it\n> properly. Do you understand the issue I'm raising? Have I made some kind\n> of blunder, so that this is really not a problem? \n\nAfter thinking more, I think you are right, but you didn't explain it\nwell. The problem is not really relevant to PITR at all, but is a hole\nin the initial design of WAL. Consider\n\n\ttransaction starts\n\ttransaction creates local rel\n\ttransaction writes in local rel...\n\t\t\t\t\t\tCHECKPOINT\n\ttransaction writes in local rel...\n\t\t\t\t\t\tCHECKPOINT\n\ttransaction writes in local rel...\n\ttransaction flushes local rel pages to disk\n\ttransaction commits\n\t\t\t\t\t\tsystem crash\n\nWe'll try to replay the log from the latest checkpoint. This works only\nif all the local-rel page flushes actually made it to disk, otherwise\nthe updates of the local rel that happened before the last checkpoint\nmay be lost. (I think there is still an fsync in local-rel commit to\nensure the flushes happen, but it's sure messy to do it that way.)\n\nWe could possibly fix this by logging the local-rel-flush page writes\nthemselves in the WAL log, but that'd probably more than ruin the\nefficiency advantage of the local bufmgr. So I'm back to the idea\nthat removing it is the way to go. 
Certainly that would provide\nnontrivial simplifications in a number of places (no tests on local vs\nglobal buffer anymore, no special cases for local rel commit, etc).\n\nMight be useful to temporarily dike it out and see what the penalty\nfor building a large index is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Aug 2002 15:05:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "On Fri, 2002-08-02 at 13:50, Richard Tucker wrote:\n> pg_copy does not handle \"local relations\" as you would suspect. To find the\n> tables and indexes to back up, the backend processing the \"ALTER SYSTEM\n> BACKUP\" statement reads the pg_class table. Any tables in the process of\n> coming into existence are, of course, not visible. If somehow they were, then\n> the backup would back up their contents. Any changes still in private memory\n> would be captured during crash recovery on the copy of the database. So the\n> question is: is it possible to read the names of the \"local relations\" from\n> the pg_class table even though their creation has not yet been committed?\n> -regards\n> richt\n> \nNo, not really. At least not a consistent view.\n\nThe way to do this is using the filesystem to discover the relfilenodes,\nand there are a couple of ways to deal with the problem of files being\npulled out from under you, but you have to be careful about what the\nbuffer manager does when a file gets dropped.\n\nThe predicate for files we MUST (fuzzy) copy is: \n File exists at start of backup && File exists at end of backup\n\nAny other file, while it may be copied, doesn't need to be in the backup\nbecause either it will be created and rebuilt during play-forward\nrecovery, or it will be deleted during play-forward recovery, or both,\nassuming those operations are logged. 
They really must be logged to do\nwhat we want to do.\n\nAlso, you can't use the normal relation_open stuff, because local\nrelations will not have a catalog entry, and it looks like there are\ncatcache/sinval issues that I haven't completely covered. So you've got\nto do 'blind reads' through the buffer manager, which involves a minor\nextension to the buffer manager to support this if local relations go\nthrough the shared buffers, or coordinating with the local buffer\nmanager if they continue to work as they do now, which involves major\nchanges.\n\nWe also have to checkpoint at the start, and flush the log at the end.\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "02 Aug 2002 15:27:04 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n> The predicate for files we MUST (fuzzy) copy is: \n> File exists at start of backup && File exists at end of backup\n\nRight, which seems to me to negate all these claims about needing a\n(horribly messy) way to read uncommitted system catalog entries, do\nblind reads, etc. What's wrong with just exec'ing tar after having\ndone a checkpoint?\n\n(In particular, I *strongly* object to using the buffer manager at all\nfor reading files for backup. That's pretty much guaranteed to blow out\nbuffer cache. Use plain OS-level file reads. An OS directory search\nwill do fine for finding what you need to read, too.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Aug 2002 16:01:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "On Fri, 2002-08-02 at 16:01, Tom Lane wrote:\n> \"J. R. 
Nield\" <jrnield@usol.com> writes:\n> > The predicate for files we MUST (fuzzy) copy is: \n> > File exists at start of backup && File exists at end of backup\n> \n> Right, which seems to me to negate all these claims about needing a\n> (horribly messy) way to read uncommitted system catalog entries, do\n> blind reads, etc. What's wrong with just exec'ing tar after having\n> done a checkpoint?\n> \nThere is no need to read uncommitted system catalog entries. Just take a\nsnapshot of the directory to get the OID's. You don't care whether the\nget deleted before you get to them, because the log will take care of\nthat. \n\n> (In particular, I *strongly* object to using the buffer manager at all\n> for reading files for backup. That's pretty much guaranteed to blow out\n> buffer cache. Use plain OS-level file reads. An OS directory search\n> will do fine for finding what you need to read, too.)\n\nHow do you get atomic block copies otherwise?\n\n> \n> \t\t\tregards, tom lane\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "02 Aug 2002 16:45:10 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, August 02, 2002 4:02 PM\n> To: J. R. Nield\n> Cc: Richard Tucker; Bruce Momjian; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> \"J. R. Nield\" <jrnield@usol.com> writes:\n> > The predicate for files we MUST (fuzzy) copy is:\n> > File exists at start of backup && File exists at end of backup\n>\n> Right, which seems to me to negate all these claims about needing a\n> (horribly messy) way to read uncommitted system catalog entries, do\n> blind reads, etc. 
What's wrong with just exec'ing tar after having\n> done a checkpoint?\nYou do need to make sure to back up the pg_xlog directory last and you need\nto make sure no WAL file gets reused while backing up everything else.\n>\n> (In particular, I *strongly* object to using the buffer manager at all\n> for reading files for backup. That's pretty much guaranteed to blow out\n> buffer cache. Use plain OS-level file reads. An OS directory search\n> will do fine for finding what you need to read, too.)\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Fri, 02 Aug 2002 17:20:25 -0400", "msg_from": "Richard Tucker <richt@multera.com>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n>> (In particular, I *strongly* object to using the buffer manager at all\n>> for reading files for backup. That's pretty much guaranteed to blow out\n>> buffer cache. Use plain OS-level file reads. An OS directory search\n>> will do fine for finding what you need to read, too.)\n\n> How do you get atomic block copies otherwise?\n\nEh? The kernel does that for you, as long as you're reading the\nsame-size blocks that the backends are writing, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Aug 2002 17:25:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Friday, August 02, 2002 5:25 PM\n> To: J. R. Nield\n> Cc: Richard Tucker; Bruce Momjian; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> \"J. R. Nield\" <jrnield@usol.com> writes:\n> >> (In particular, I *strongly* object to using the buffer manager at all\n> >> for reading files for backup. 
That's pretty much guaranteed\nto blow out\n> >> buffer cache. Use plain OS-level file reads. An OS directory search\n> >> will do fine for finding what you need to read, too.)\n>\n> > How do you get atomic block copies otherwise?\n>\n> Eh? The kernel does that for you, as long as you're reading the\n> same-size blocks that the backends are writing, no?\nIf the OS block size is 4k and the PostgreSQL block size is 8k, do we know\nfor sure that the write call does not break this into two 4k writes to the\nOS buffer cache?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Fri, 02 Aug 2002 17:30:26 -0400", "msg_from": "Richard Tucker <richt@multera.com>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "> The main area where it seems to get heavy use is during index builds,\n> and for 'CREATE TABLE AS SELECT...'.\n>\n> So I will remove the local buffer manager as part of the PITR patch,\n> unless there is further objection.\n\nWould someone mind filling me in as to what the local buffer manager is and\nhow it is different (and not useful) compared to the shared buffer manager?\n\nChris\n\n\n", "msg_date": "Sat, 3 Aug 2002 18:13:38 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > The main area where it seems to get heavy use is during index builds,\n> > and for 'CREATE TABLE AS SELECT...'.\n> >\n> > So I will remove the local buffer manager as part of the PITR patch,\n> > unless there is further objection.\n> \n> Would someone mind filling me in as to what the local buffer manager is and\n> how it is different (and not 
useful) compared to the shared buffer manager?\n\nSure. I think I can handle that.\n\nWhen you create a table in a transaction, there isn't any committed\nstate to the table yet, so any table modifications are kept in a local\nbuffer, which is local memory to the backend(?). No one needs to see it\nbecause it isn't visible to anyone yet. Same for indexes.\n\nAnyway, the WAL activity doesn't handle local buffers the same as shared\nbuffers because there is no crisis if the system crashes.\n\nThere is debate on whether the local buffers are even valuable\nconsidering the headache they cause in other parts of the system.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Aug 2002 20:36:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> There is debate on whether the local buffers are even valuable\n> considering the headache they cause in other parts of the system.\n\nMore specifically, the issue is that when (if) you commit, the contents\nof the new table now have to be pushed out to shared storage. This is\nmoderately annoying in itself (among other things, it implies fsync'ing\nthose tables before commit). But the real reason it comes up now is\nthat the proposed PITR scheme can't cope gracefully with tables that\nare suddenly there but weren't participating in checkpoints before.\n\nIt looks to me like we should stop using local buffers for ordinary\ntables that happen to be in their first transaction of existence.\nBut, per Vadim's suggestion, we shouldn't abandon the local buffer\nmanager altogether. 
What we could and should use it for is TEMP tables,\nwhich have no need to be checkpointed or WAL-logged or fsync'd or\naccessible to other backends *ever*. Also, a temp table can leave\nblocks in local buffers across transactions, which makes local buffers\nconsiderably more useful than they are now.\n\nIf temp tables didn't use the shared bufmgr nor did updates to them get\nWAL-logged, they'd be noticeably more efficient than plain tables, which\nIMHO would be a Good Thing. Such tables would be essentially invisible\nto WAL and PITR (at least their contents would be --- I assume we'd\nstill log file creation and deletion). But I can't see anything wrong\nwith that.\n\nIn short, the proposal runs something like this:\n\n* Regular tables that happen to be in their first transaction of\nexistence are not treated differently from any other regular table so\nfar as buffer management or WAL or PITR go. (rd_myxactonly either goes\naway or is used for much less than it is now.)\n\n* TEMP tables use the local buffer manager for their entire existence.\n(This probably means adding an \"rd_istemp\" flag to relcache entries, but\nI can't see anything wrong with that.)\n\n* Local bufmgr semantics are twiddled to reflect this reality --- in\nparticular, data in local buffers can be held across transactions, there\nis no end-of-transaction write (much less fsync). A TEMP table that\nisn't too large might never touch disk at all.\n\n* Data operations in TEMP tables do not get WAL-logged, nor do we\nWAL-log page images of local-buffer pages.\n\n\nThese changes seem very attractive to me even without regard for making\nthe world safer for PITR. 
I'm willing to volunteer to make them happen,\nif there are no objections.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Aug 2002 22:01:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "\nSounds like a win all around; make PITR easier and temp tables faster.\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > There is debate on whether the local buffers are even valuable\n> > considering the headache they cause in other parts of the system.\n> \n> More specifically, the issue is that when (if) you commit, the contents\n> of the new table now have to be pushed out to shared storage. This is\n> moderately annoying in itself (among other things, it implies fsync'ing\n> those tables before commit). But the real reason it comes up now is\n> that the proposed PITR scheme can't cope gracefully with tables that\n> are suddenly there but weren't participating in checkpoints before.\n> \n> It looks to me like we should stop using local buffers for ordinary\n> tables that happen to be in their first transaction of existence.\n> But, per Vadim's suggestion, we shouldn't abandon the local buffer\n> manager altogether. What we could and should use it for is TEMP tables,\n> which have no need to be checkpointed or WAL-logged or fsync'd or\n> accessible to other backends *ever*. Also, a temp table can leave\n> blocks in local buffers across transactions, which makes local buffers\n> considerably more useful than they are now.\n> \n> If temp tables didn't use the shared bufmgr nor did updates to them get\n> WAL-logged, they'd be noticeably more efficient than plain tables, which\n> IMHO would be a Good Thing. Such tables would be essentially invisible\n> to WAL and PITR (at least their contents would be --- I assume we'd\n> still log file creation and deletion). 
But I can't see anything wrong\n> with that.\n> \n> In short, the proposal runs something like this:\n> \n> * Regular tables that happen to be in their first transaction of\n> existence are not treated differently from any other regular table so\n> far as buffer management or WAL or PITR go. (rd_myxactonly either goes\n> away or is used for much less than it is now.)\n> \n> * TEMP tables use the local buffer manager for their entire existence.\n> (This probably means adding an \"rd_istemp\" flag to relcache entries, but\n> I can't see anything wrong with that.)\n> \n> * Local bufmgr semantics are twiddled to reflect this reality --- in\n> particular, data in local buffers can be held across transactions, there\n> is no end-of-transaction write (much less fsync). A TEMP table that\n> isn't too large might never touch disk at all.\n> \n> * Data operations in TEMP tables do not get WAL-logged, nor do we\n> WAL-log page images of local-buffer pages.\n> \n> \n> These changes seem very attractive to me even without regard for making\n> the world safer for PITR. I'm willing to volunteer to make them happen,\n> if there are no objections.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Aug 2002 22:52:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "On Sat, 2002-08-03 at 21:01, Tom Lane wrote:\n> * Local bufmgr semantics are twiddled to reflect this reality --- in\n> particular, data in local buffers can be held across transactions, there\n> is no end-of-transaction write (much less fsync). A TEMP table that\n> isn't too large might never touch disk at all.\n\nCurious. Is there currently such a criteria? 
What exactly constitutes\n\"too large\"?\n\nGreg", "msg_date": "04 Aug 2002 23:21:38 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "Greg Copeland <greg@copelandconsulting.net> writes:\n> On Sat, 2002-08-03 at 21:01, Tom Lane wrote:\n>> * Local bufmgr semantics are twiddled to reflect this reality --- in\n>> particular, data in local buffers can be held across transactions, there\n>> is no end-of-transaction write (much less fsync). A TEMP table that\n>> isn't too large might never touch disk at all.\n\n> Curious. Is there currently such a criteria? What exactly constitutes\n> \"too large\"?\n\n\"too large\" means \"doesn't fit in the local buffer set\". At the moment\nthe maximum number of local buffers seems to be frozen at 64. I was\nthinking of exposing that as a configuration parameter while we're at\nit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Aug 2002 09:41:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "This is great Tom. I will try to get what I have to you, Vadim, and\nother interested parties tonight (Mon), assuming none of my tests fail\nand reveal major bugs. It will do most of the important stuff except\nyour changes to the local buffer manager. I just have a few more minor\ntweaks, and I would like to test it a little first.\n\nOn your advice I have made it use direct OS calls to copy the files,\nusing BLCKSZ aligned read() requests, instead of going through the\nbuffer manager for reads. I can think more about the correctness of this\nlater, since the rest of the code doesn't depend on which method is\nused.\n\nTo Richard Tucker: I think duplicating the WAL files the way you plan is\nnot the way I want to do it. I'd rather have a log archiving system be\nused for this. 
One thing that does need to be done is an interactive\nrecovery mode, and as soon as I finish getting my current work out for\nreview I'd be glad to have you write it if you want. You'll need to see\nthis in order to interface properly.\n\nRegards,\n\n John Nield\n\nOn Sat, 2002-08-03 at 22:52, Bruce Momjian wrote: \n> \n> Sounds like a win all around; make PITR easier and temp tables faster.\n> \n> \n> ---------------------------------------------------------------------------\n> \n> Tom Lane wrote:\n> > These changes seem very attractive to me even without regard for making\n> > the world safer for PITR. I'm willing to volunteer to make them happen,\n> > if there are no objections.\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n", "msg_date": "05 Aug 2002 12:57:58 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "I said:\n> In short, the proposal runs something like this:\n\n> * Regular tables that happen to be in their first transaction of\n> existence are not treated differently from any other regular table so\n> far as buffer management or WAL or PITR go. (rd_myxactonly either goes\n> away or is used for much less than it is now.)\n\n> * TEMP tables use the local buffer manager for their entire existence.\n> (This probably means adding an \"rd_istemp\" flag to relcache entries, but\n> I can't see anything wrong with that.)\n\n> * Local bufmgr semantics are twiddled to reflect this reality --- in\n> particular, data in local buffers can be held across transactions, there\n> is no end-of-transaction write (much less fsync). 
A TEMP table that\n> isn't too large might never touch disk at all.\n\n> * Data operations in TEMP tables do not get WAL-logged, nor do we\n> WAL-log page images of local-buffer pages.\n\nI've committed changes to implement these ideas. One thing that proved\ninteresting was that transactions that only made changes in existing\nTEMP tables failed to commit --- RecordTransactionCommit thought it\ndidn't need to do anything, because no WAL entries had been made! This\nwas fixed by introducing another flag that gets set when we skip making\na WAL record because we're working in a TEMP relation.\n\nI have not done anything about exporting NLocBuffer as a GUC parameter.\nThe algorithms in localbuf.c are, um, pretty sucky, and would run very\nslowly if NLocBuffer were large. It'd make sense to install a hash\nindex table similar to the one used for shared buffers, and then we\ncould allow people to set NLocBuffer as large as their system can stand.\nI figured that was a task for another day, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Aug 2002 22:46:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "\n\n> -----Original Message-----\n> From: J. R. Nield [mailto:jrnield@usol.com]\n> Sent: Monday, August 05, 2002 12:58 PM\n> To: Bruce Momjian\n> Cc: Tom Lane; Christopher Kings-Lynne; Richard Tucker; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> This is great Tom. I will try to get what I have to you, Vadim, and\n> other interested parties tonight (Mon), assuming none of my tests fail\n> and reveal major bugs. It will do most of the important stuff except\n> your changes to the local buffer manager. 
I just have a few more minor\n> tweaks, and I would like to test it a little first.\n>\n> On your advice I have made it use direct OS calls to copy the files,\n> using BLCKSZ aligned read() requests, instead of going through the\n> buffer manager for reads. I can think more about the correctness of this\n> later, since the rest of the code doesn't depend on which method is\n> used.\n>\n> To Richard Tucker: I think duplicating the WAL files the way you plan is\n> not the way I want to do it. I'd rather have a log archiving system be\n> used for this. One thing that does need to be done is an interactive\n> recovery mode, and as soon as I finish getting my current work out for\n> review I'd be glad to have you write it if you want. You'll need to see\n> this in order to interface properly.\n\nIf you don't duplicate(mirror) the log then in the event you need to restore\na database with roll forward recovery won't the restored database be missing\non average 1/2 a log segments worth of changes?\n\n>\n> Regards,\n>\n> John Nield\n>\n> On Sat, 2002-08-03 at 22:52, Bruce Momjian wrote:\n> >\n> > Sounds like a win all around; make PITR easier and temp tables faster.\n> >\n> >\n> >\n> ------------------------------------------------------------------\n> ---------\n> >\n> > Tom Lane wrote:\n> > > These changes seem very attractive to me even without regard\n> for making\n> > > the world safer for PITR. I'm willing to volunteer to make\n> them happen,\n> > > if there are no objections.\n> > >\n> > > \t\t\tregards, tom lane\n> > >\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill,\n> Pennsylvania 19026\n> >\n> --\n> J. R. 
Nield\n> jrnield@usol.com\n>\n>\n>\n\n", "msg_date": "Wed, 07 Aug 2002 11:52:01 -0400", "msg_from": "Richard Tucker <richt@multera.com>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "On Wed, 2002-08-07 at 11:52, Richard Tucker wrote:\n> \n> \n> If you don't duplicate(mirror) the log then in the event you need to restore\n> a database with roll forward recovery won't the restored database be missing\n> on average 1/2 a log segments worth of changes?\n>\nThe xlog code must allow us to force an advance to the next log file,\nand truncate the archived file when it's copied so as not to waste\nspace. This also prevents the sysadmin from confusing two logfiles with\nthe same name and different data.\n\nThis complicates both the recovery logic and XLogInsert, and I'm trying\nto kill the \"last\" latent bug in that feature now. Hopefully I can even\nconvince myself that the code is correct and covers all the cases.\n\nAs a side effect, the refactoring of XLogInsert makes it easy to add a\nspecial record as the first XLogRecord of each file. This can contain\ninformation useful to the system administrator, like what database\ninstallation the file came from. Since it's at a fixed offset after the\npage header, external tools can read it in a simple way.\n\n \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "07 Aug 2002 23:31:35 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n> The xlog code must allow us to force an advance to the next log file,\n> and truncate the archived file when it's copied so as not to waste\n> space.\n\nUh, why? 
Why not just force a checkpoint and remember the exact\nlocation of the checkpoint within the current log file?\n\nWhen and if you roll back to a prior checkpoint, you'd want to start the\nsystem running forward with a new xlog file, I think (compare what\npg_resetxlog does). But it doesn't follow that you MUST force an xlog\nfile boundary simply because you're taking a backup.\n\n> This complicates both the recovery logic and XLogInsert, and I'm trying\n> to kill the \"last\" latent bug in that feature now.\n\nIndeed. How about keeping it simple, instead?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Aug 2002 23:41:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "> > The xlog code must allow us to force an advance to the next log file,\n> > and truncate the archived file when it's copied so as not to waste\n> > space.\n> \n> Uh, why? Why not just force a checkpoint and remember the exact\n> location of the checkpoint within the current log file?\n\nYes, why not just save pg_control' content with new checkpoint\nposition in it? Didn't we agree (or at least I don't remember objections\nto Tom' suggestion) that backup will not save log files at all and that\nthis will be task of log archiving procedure? Even if we are going to\nreconsider this approach, I would just save required portion of\nlog at *this moment* and do that space optimization *later*.\n\nVadim\n\n\n", "msg_date": "Wed, 7 Aug 2002 21:58:19 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "On Wed, 2002-08-07 at 23:41, Tom Lane wrote:\n> \"J. R. Nield\" <jrnield@usol.com> writes:\n> > The xlog code must allow us to force an advance to the next log file,\n> > and truncate the archived file when it's copied so as not to waste\n> > space.\n> \n> Uh, why? 
Why not just force a checkpoint and remember the exact\n> location of the checkpoint within the current log file?\n\nIf I do a backup with PITR and save it to tape, I need to be able to\nrestore it even if my machine is destroyed in a fire, and all the logs\nsince the end of a backup are destroyed. If we don't allow the user to\nforce a log advance, how will he do this? I don't want to copy the log\nfile, and then have the original be written to later, because it will\nbecome confusing as to which log file to use.\n\nIs the complexity really that big of a problem with this?\n\n> \n> When and if you roll back to a prior checkpoint, you'd want to start the\n> system running forward with a new xlog file, I think (compare what\n> pg_resetxlog does). But it doesn't follow that you MUST force an xlog\n> file boundary simply because you're taking a backup.\n> \n> > This complicates both the recovery logic and XLogInsert, and I'm trying\n> > to kill the \"last\" latent bug in that feature now.\n> \n> Indeed. How about keeping it simple, instead?\n> \n> \t\t\tregards, tom lane\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "08 Aug 2002 21:55:25 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: PITR, checkpoint, and local relations" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n>> Uh, why? Why not just force a checkpoint and remember the exact\n>> location of the checkpoint within the current log file?\n\n> If I do a backup with PITR and save it to tape, I need to be able to\n> restore it even if my machine is destroyed in a fire, and all the logs\n> since the end of a backup are destroyed.\n\nAnd for your next trick, restore it even if the backup tape itself is\ndestroyed. C'mon, be a little reasonable here. The backups and the\nlog archive tapes are *both* critical data in any realistic view of\nthe world.\n\n> Is the complexity really that big of a problem with this?\n\nYes, it is. 
Didn't you just admit to struggling with bugs introduced\nby exactly this complexity?? I don't care *how* spiffy the backup\nscheme is, if when push comes to shove my backup doesn't restore because\nthere was a software bug in the backup scheme. In this context there\nsimply is not any virtue greater than \"simple and reliable\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Aug 2002 23:59:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations " }, { "msg_contents": "Tom Lane wrote:\n> \"J. R. Nield\" <jrnield@usol.com> writes:\n> >> Uh, why? Why not just force a checkpoint and remember the exact\n> >> location of the checkpoint within the current log file?\n> \n> > If I do a backup with PITR and save it to tape, I need to be able to\n> > restore it even if my machine is destroyed in a fire, and all the logs\n> > since the end of a backup are destroyed.\n> \n> And for your next trick, restore it even if the backup tape itself is\n> destroyed. C'mon, be a little reasonable here. The backups and the\n> log archive tapes are *both* critical data in any realistic view of\n> the world.\n\nTom, just because he doesn't agree with you doesn't mean he is\nunreasonable.\n\nI think it is an admirable goal to allow the PITR backup to restore a\nconsistent copy of the database _without_ needing the logs. 
In fact, I\nconsider something that _needs_ the logs to restore to a consistent\nstate to be broken.\n\nIf you are doing offsite backup, which people should be doing, requiring\nthe log tape for restore means you have to recycle the log tape _after_\nthe PITR backup, and to restore to a point in the future, you need two\nlog tapes, one that was done during the backup, and another current.\n\nIf you can restore the PITR backup without a log tape, you can take just\nthe PITR backup tape off site _and_ you can recycle the log tape _before_\nthe PITR backup, meaning you only need one tape for a restore to a point\nin the future. I think there are good reasons to have the PITR backup be\nrestorable on its own, if possible.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 9 Aug 2002 21:27:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PITR, checkpoint, and local relations" } ]
[ { "msg_contents": "Hello together,\n\ni've seen that every process created in Postgres will \nleave some memory unreleased and the garbage \ncollector of the OS has to free this memory.\n\nIs there any special reason for this or is it just \nsomething that has to be solved?\n\nUlrich Neumann\n----------------------------------\n This e-mail is virus scanned\n Diese e-mail ist virusgeprueft\n\n", "msg_date": "Wed, 24 Jul 2002 12:37:40 +0200", "msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>", "msg_from_op": true, "msg_subject": "not released memory / garbage collector" } ]
[ { "msg_contents": "Hello all,\n\nThere is a limitation currently with Table Functions in that the return \ntuple type must be known in advance, i.e. you need a pre-defined scalar \nor composite type to use as the function's declared return type.\n\nThis doesn't work well for the type of function that needs to return \ndifferent tuple structures on each call that depend on the input \nparameters. Two examples of this are dblink and the crosstab function \nthat I recently submitted. In the case of:\n\n dblink(connection_str, sql_stmt)\n\nwhat is really needed is for dblink to return a tuple of a type as \ndetermined dynamically by the input sql statement. Similarly, with:\n\n crosstab(sql)\n\nyou'd like to have the number/type of values columns dependent on the \nnumber of categories and type of the sql statement value column.\n\nSpeaking with Tom Lane the other day (off-list), he suggested a possible \nsolution. I have spent some time thinking about his suggestion (and even \nstarted working on the implementation, though I know that is getting the \ncart before the horse) and would like to propose the following solution \nbased on it:\n\n1. Create a new pg_type typtype: 'a' for anonymous (currently either 'b' \nfor base or 'c' for catalog, i.e. a class). We should also consider \nwhether typtype should be renamed typkind.\n\n2. Create new builtin type of typtype='a' named RECORD\n\n3. Modify FROM clause grammer to accept something like:\n SELECT * FROM my_func() AS mtf(colname1 type1, colname2 type1, ...)\nwhere mtf is the table alias, colname1, etc are the column names, and \ntype1, etc are the column types.\n\n4. Currently in the parsing and processing of RangeFunctions there are a \nnumber of places that must check whether the return type is base or \ncomposite. These would be changed to also handle (typtype == 'a'). When \ntyptype == 'a', a List of column defs would be required, when (typtype \n!= 'a'), it would be disallowed. 
The column defs would be used in place \nof the information derived from the funcrelid for cases with (typtype == \n'c').\n\n5. A check would be added (probably in nodeFunctionscan.c somewhere) to \nensure that the coldefs provide via the parser and the actual return \ntuple description match.\n\nNow when creating a function you can do:\n CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n\nAnd when using it you can do, e.g.:\n SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n\nThis potentially also solves (or at least improves) the issue of builtin \nTable Functions. They can be declared as returning RECORD, and we can \nwrap system views around them with properly specified column defs. For \nexample:\n\nCREATE VIEW pg_settings AS\n SELECT s.name, s.setting\n FROM show_all_settings()AS s(name text, setting text);\n\nLikewise Neil's pg_locks could do the same.\n\nThen we can also add the UPDATE RULE that I previously posted to \npg_settings, and have pg_settings act like a virtual table, allowing \nsettings to be queried and set.\n\nComments, omissions, or objections?\n\nThanks,\n\nJoe\n\n", "msg_date": "Wed, 24 Jul 2002 09:51:10 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Proposal: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "> 3. Modify FROM clause grammer to accept something like:\n> SELECT * FROM my_func() AS mtf(colname1 type1, colname2 type1, ...)\n> where mtf is the table alias, colname1, etc are the column names, and\n> type1, etc are the column types.\n\n...\n\n> Now when creating a function you can do:\n> CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n>\n> And when using it you can do, e.g.:\n> SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n\nWhy is there the requirement to declare the type at SELECT time at all? 
Why\nnot just take what you get when you run the function?\n\nChris\n\n", "msg_date": "Thu, 25 Jul 2002 10:14:28 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Proposal: anonymous composite types for Table Functions (aka\n SRFs)" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Why is there the requirement to declare the type at SELECT time at all? Why\n> not just take what you get when you run the function?\n\nThe column names and types are determined in the parser, and used in the \nplanner, optimizer, and executor. I'm not sure how the backend could \nplan a join or a where criteria otherwise.\n\nRemember that the function has to look just like a table or a subselect \n(i.e a RangeVar). With a table, the column names and types are \npredefined. With a subselect, parsing it yields the same information. \nWith a table function, we need some way of providing it -- i.e. either \nwith a predefined type, or now with a definition right in the FROM clause.\n\nJoe\n\n\n\n", "msg_date": "Thu, 25 Jul 2002 00:00:48 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: anonymous composite types for Table Functions" }, { "msg_contents": "> The column names and types are determined in the parser, and used in the\n> planner, optimizer, and executor. I'm not sure how the backend could\n> plan a join or a where criteria otherwise.\n>\n> Remember that the function has to look just like a table or a subselect\n> (i.e a RangeVar). With a table, the column names and types are\n> predefined. With a subselect, parsing it yields the same information.\n> With a table function, we need some way of providing it -- i.e. 
either\n> with a predefined type, or now with a definition right in the FROM clause.\n\nOr you could \"parse\" the function by retrieving the first row from the it\nand assuming that that's the function definition?\n\nChris\n\n", "msg_date": "Thu, 25 Jul 2002 15:16:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Proposal: anonymous composite types for Table Functions (aka\n SRFs)" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Or you could \"parse\" the function by retrieving the first row from the it\n> and assuming that that's the function definition?\n> \n\nThere are a number of reasons why I don't think this is workable, but \nforemost, what happens if the function has side-effects, i.e. actually \nalters data somehow?\n\nJoe\n\n", "msg_date": "Thu, 25 Jul 2002 23:12:05 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: anonymous composite types for Table Functions" }, { "msg_contents": "Attached are two patches to implement and document anonymous composite \ntypes for Table Functions, as previously proposed on HACKERS. Here is a \nbrief explanation:\n\n1. Creates a new pg_type typtype: 'p' for pseudo type (currently either\n 'b' for base or 'c' for catalog, i.e. a class).\n\n2. Creates new builtin type of typtype='p' named RECORD. This is the\n first of potentially several pseudo types.\n\n3. Modify FROM clause grammer to accept:\n SELECT * FROM my_func() AS m(colname1 type1, colname2 type1, ...)\n where m is the table alias, colname1, etc are the column names, and\n type1, etc are the column types.\n\n4. When typtype == 'p' and the function return type is RECORD, a list\n of column defs is required, and when typtype != 'p', it is disallowed.\n\n5. 
A check was added to ensure that the tupdesc provide via the parser\n and the actual return tupdesc match in number and type of attributes.\n\nWhen creating a function you can do:\n CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n\nWhen using it you can do:\n SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n or\n SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)\n or\n SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)\n\nIncluded in the patches are adjustments to the regression test sql and \nexpected files, and documentation.\n\nIf there are no objections, please apply.\n\nThanks,\n\nJoe\n\np.s.\n This potentially solves (or at least improves) the issue of builtin\n Table Functions. They can be bootstrapped as returning RECORD, and\n we can wrap system views around them with properly specified column\n defs. For example:\n\n CREATE VIEW pg_settings AS\n SELECT s.name, s.setting\n FROM show_all_settings()AS s(name text, setting text);\n\n Then we can also add the UPDATE RULE that I previously posted to\n pg_settings, and have pg_settings act like a virtual table, allowing\n settings to be queried and set.", "msg_date": "Sun, 28 Jul 2002 22:24:34 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "On Sun, Jul 28, 2002 at 10:24:34PM -0700, Joe Conway wrote:\n> Attached are two patches to implement and document anonymous composite \n> types for Table Functions, as previously proposed on HACKERS.\n\nNice work!\n\n> 1. Creates a new pg_type typtype: 'p' for pseudo type (currently either\n> 'b' for base or 'c' for catalog, i.e. 
a class).\n\nI think you mentioned that typtype could be renamed to typkind -- that\nsounds good to me...\n\n> When creating a function you can do:\n> CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n> \n> When using it you can do:\n> SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n\nIs there a reason why you can't specify the return type in the function\ndeclaration? ISTM that for most functions, the 'AS' clause will be the\nsame for every usage of the function.\n\nOn a related note, it is possible for the table function to examine the\nattributes it has been asked to return, right? Since the current patch\nrequires that every call specify the return type, it might be possible\nto take advantage of that to provide semi-\"polymorphic\" behavior\n(i.e. the function behaves differently depending on the type of data\nthe user asked for)\n\n> SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)\n\nWhat does the 'f' indicate?\n\n> SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)\n\nThis form of the syntax seems a bit unclear, IMHO. It seems a bit\nlike two function calls. Can the 'AS' be made mandatory?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 29 Jul 2002 10:57:40 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n>> 1. Creates a new pg_type typtype: 'p' for pseudo type (currently either\n>> 'b' for base or 'c' for catalog, i.e. a class).\n\n> I think you mentioned that typtype could be renamed to typkind -- that\n> sounds good to me...\n\nIt sounds like a way to break client-side code for little gain to me...\n\n> Is there a reason why you can't specify the return type in the function\n> declaration? 
ISTM that for most functions, the 'AS' clause will be the\n> same for every usage of the function.\n\nThe particular functions Joe is worried about (dblink and such) do not\nhave a fixed return type. In any case that would be a separate\nmechanism with its own issues, because we'd have to store the anonymous\ntype in the system catalogs.\n\n>> SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)\n\n> What does the 'f' indicate?\n\nIt's required by the SQL alias syntax.\n\n>> SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)\n\n> This form of the syntax seems a bit unclear, IMHO. It seems a bit\n> like two function calls. Can the 'AS' be made mandatory?\n\nWhy? That just deviates even further from the spec syntax.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Jul 2002 11:03:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs) " }, { "msg_contents": "On Mon, Jul 29, 2002 at 11:03:40AM -0400, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > Is there a reason why you can't specify the return type in the function\n> > declaration? ISTM that for most functions, the 'AS' clause will be the\n> > same for every usage of the function.\n> \n> The particular functions Joe is worried about (dblink and such) do not\n> have a fixed return type.\n\nRight -- so when you declare the SRF, you could be allowed to define\na composite type that will be used if the caller doesn't specify one\n(i.e. the default return type). 
This wouldn't get us a whole lot over\nthe existing 'CREATE VIEW' hack, except it would be cleaner.\n\n> In any case that would be a separate\n> mechanism with its own issues, because we'd have to store the anonymous\n> type in the system catalogs.\n\nOk -- it still seems worthwhile to me.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 29 Jul 2002 11:24:06 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "Neil Conway wrote:\n> On Sun, Jul 28, 2002 at 10:24:34PM -0700, Joe Conway wrote:\n>>Attached are two patches to implement and document anonymous composite \n>>types for Table Functions, as previously proposed on HACKERS.\n> \n> Nice work!\n\nThanks!\n\n\n>>1. Creates a new pg_type typtype: 'p' for pseudo type (currently either\n>> 'b' for base or 'c' for catalog, i.e. a class).\n> I think you mentioned that typtype could be renamed to typkind -- that\n> sounds good to me...\n\nI didn't get any feedback on that idea, so I decided to leave it alone \nfor now.\n\n\n>>When creating a function you can do:\n>> CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n>>When using it you can do:\n>> SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n> \n> Is there a reason why you can't specify the return type in the function\n> declaration? ISTM that for most functions, the 'AS' clause will be the\n> same for every usage of the function.\n\nAhh, that's the next proposal ;-)\n\nFor functions such as dblink and crosstab, mentioned in the original \npost, the 'AS' clause needed may be *different* every time the function \nis called.\n\nBut specifying the composite type on function declaration would also be \na good thing. In order to do that we first need to be able to create \nnamed, but stand-alone, composite types. 
Once we can do that, then \ncreating an implicit named composite type (with a system generated name) \non function creation should be easy. I plan to follow up with a specific \nproposal this week.\n\n\n> On a related note, it is possible for the table function to examine the\n> attributes it has been asked to return, right? Since the current patch\n> requires that every call specify the return type, it might be possible\n> to take advantage of that to provide semi-\"polymorphic\" behavior\n> (i.e. the function behaves differently depending on the type of data\n> the user asked for)\n\nHmmm, I'll have to think about that. I don't think the current \nimplementation has a way to pass that information to the function. We \nwould need to modify fcinfo to carry along the run-time tupdesc if we \nwanted to do this. Not sure how hard it would be to do -- I'll have to look.\n\nNote that this syntax is only required when the function has been \ndeclared as returning RECORD -- if the function is declared to return \nsome named composite type, it doesn't need or use the runtime specified \nstructure. In that case the function *can* derive it's own tupdesc using \nthe function return type to get the return type relid. That is how the \nsubmitted crosstab() function works.\n\n\n>> SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)\n> \n> \n> What does the 'f' indicate?\n\n'f' here is the table alias. Probably would have been more clear like this:\n\n SELECT f.* from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)\n\n\n\n>> SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)\n> This form of the syntax seems a bit unclear, IMHO. It seems a bit\n> like two function calls. Can the 'AS' be made mandatory?\n\nWell this is currently one acceptable syntax for an alias clause, e.g.\n SELECT * from foo f(f1, f2, f3);\n\nis allowed. I was trying to be consistent. 
Anyone else have thoughts on \nthis?\n\nThank you for your review and comments!\n\nJoe\n\n", "msg_date": "Mon, 29 Jul 2002 08:30:59 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Attached are two patches to implement and document anonymous composite \n> types for Table Functions, as previously proposed on HACKERS. Here is a \n> brief explanation:\n> \n> 1. Creates a new pg_type typtype: 'p' for pseudo type (currently either\n> 'b' for base or 'c' for catalog, i.e. a class).\n> \n> 2. Creates new builtin type of typtype='p' named RECORD. This is the\n> first of potentially several pseudo types.\n> \n> 3. Modify FROM clause grammer to accept:\n> SELECT * FROM my_func() AS m(colname1 type1, colname2 type1, ...)\n> where m is the table alias, colname1, etc are the column names, and\n> type1, etc are the column types.\n> \n> 4. When typtype == 'p' and the function return type is RECORD, a list\n> of column defs is required, and when typtype != 'p', it is disallowed.\n> \n> 5. 
A check was added to ensure that the tupdesc provide via the parser\n> and the actual return tupdesc match in number and type of attributes.\n> \n> When creating a function you can do:\n> CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n> \n> When using it you can do:\n> SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n> or\n> SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)\n> or\n> SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)\n> \n> Included in the patches are adjustments to the regression test sql and \n> expected files, and documentation.\n> \n> If there are no objections, please apply.\n> \n> Thanks,\n> \n> Joe\n> \n> p.s.\n> This potentially solves (or at least improves) the issue of builtin\n> Table Functions. They can be bootstrapped as returning RECORD, and\n> we can wrap system views around them with properly specified column\n> defs. For example:\n> \n> CREATE VIEW pg_settings AS\n> SELECT s.name, s.setting\n> FROM show_all_settings()AS s(name text, setting text);\n> \n> Then we can also add the UPDATE RULE that I previously posted to\n> pg_settings, and have pg_settings act like a virtual table, allowing\n> settings to be queried and set.\n> \n\n> Index: src/backend/access/common/tupdesc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/access/common/tupdesc.c,v\n> retrieving revision 1.81\n> diff -c -r1.81 tupdesc.c\n> *** src/backend/access/common/tupdesc.c\t20 Jul 2002 05:16:56 -0000\t1.81\n> --- src/backend/access/common/tupdesc.c\t28 Jul 2002 01:33:30 -0000\n> ***************\n> *** 24,29 ****\n> --- 24,30 ----\n> #include \"catalog/namespace.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"nodes/parsenodes.h\"\n> + #include \"parser/parse_relation.h\"\n> #include \"parser/parse_type.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/syscache.h\"\n> ***************\n> *** 597,642 ****\n> TupleDesc\n> TypeGetTupleDesc(Oid 
typeoid, List *colaliases)\n> {\n> ! \tOid\t\t\trelid = typeidTypeRelid(typeoid);\n> ! \tTupleDesc\ttupdesc;\n> \n> \t/*\n> \t * Build a suitable tupledesc representing the output rows\n> \t */\n> ! \tif (OidIsValid(relid))\n> \t{\n> \t\t/* Composite data type, i.e. a table's row type */\n> ! \t\tRelation\trel;\n> ! \t\tint\t\t\tnatts;\n> ! \n> ! \t\trel = relation_open(relid, AccessShareLock);\n> ! \t\ttupdesc = CreateTupleDescCopy(RelationGetDescr(rel));\n> ! \t\tnatts = tupdesc->natts;\n> ! \t\trelation_close(rel, AccessShareLock);\n> \n> ! \t\t/* check to see if we've given column aliases */\n> ! \t\tif(colaliases != NIL)\n> \t\t{\n> ! \t\t\tchar\t *label;\n> ! \t\t\tint\t\t\tvarattno;\n> \n> ! \t\t\t/* does the List length match the number of attributes */\n> ! \t\t\tif (length(colaliases) != natts)\n> ! \t\t\t\telog(ERROR, \"TypeGetTupleDesc: number of aliases does not match number of attributes\");\n> \n> ! \t\t\t/* OK, use the aliases instead */\n> ! \t\t\tfor (varattno = 0; varattno < natts; varattno++)\n> \t\t\t{\n> ! \t\t\t\tlabel = strVal(nth(varattno, colaliases));\n> \n> ! \t\t\t\tif (label != NULL)\n> ! \t\t\t\t\tnamestrcpy(&(tupdesc->attrs[varattno]->attname), label);\n> ! \t\t\t\telse\n> ! \t\t\t\t\tMemSet(NameStr(tupdesc->attrs[varattno]->attname), 0, NAMEDATALEN);\n> \t\t\t}\n> \t\t}\n> \t}\n> ! \telse\n> \t{\n> \t\t/* Must be a base data type, i.e. scalar */\n> \t\tchar\t *attname;\n> --- 598,650 ----\n> TupleDesc\n> TypeGetTupleDesc(Oid typeoid, List *colaliases)\n> {\n> ! \tchar\t\tfunctyptype = typeid_get_typtype(typeoid);\n> ! \tTupleDesc\ttupdesc = NULL;\n> \n> \t/*\n> \t * Build a suitable tupledesc representing the output rows\n> \t */\n> ! \tif (functyptype == 'c')\n> \t{\n> \t\t/* Composite data type, i.e. a table's row type */\n> ! \t\tOid\t\t\trelid = typeidTypeRelid(typeoid);\n> \n> ! \t\tif (OidIsValid(relid))\n> \t\t{\n> ! \t\t\tRelation\trel;\n> ! \t\t\tint\t\t\tnatts;\n> \n> ! 
\t\t\trel = relation_open(relid, AccessShareLock);\n> ! \t\t\ttupdesc = CreateTupleDescCopy(RelationGetDescr(rel));\n> ! \t\t\tnatts = tupdesc->natts;\n> ! \t\t\trelation_close(rel, AccessShareLock);\n> \n> ! \t\t\t/* check to see if we've given column aliases */\n> ! \t\t\tif(colaliases != NIL)\n> \t\t\t{\n> ! \t\t\t\tchar\t *label;\n> ! \t\t\t\tint\t\t\tvarattno;\n> \n> ! \t\t\t\t/* does the List length match the number of attributes */\n> ! \t\t\t\tif (length(colaliases) != natts)\n> ! \t\t\t\t\telog(ERROR, \"TypeGetTupleDesc: number of aliases does not match number of attributes\");\n> ! \n> ! \t\t\t\t/* OK, use the aliases instead */\n> ! \t\t\t\tfor (varattno = 0; varattno < natts; varattno++)\n> ! \t\t\t\t{\n> ! \t\t\t\t\tlabel = strVal(nth(varattno, colaliases));\n> ! \n> ! \t\t\t\t\tif (label != NULL)\n> ! \t\t\t\t\t\tnamestrcpy(&(tupdesc->attrs[varattno]->attname), label);\n> ! \t\t\t\t\telse\n> ! \t\t\t\t\t\tMemSet(NameStr(tupdesc->attrs[varattno]->attname), 0, NAMEDATALEN);\n> ! \t\t\t\t}\n> \t\t\t}\n> \t\t}\n> + \t\telse\n> + \t\t\telog(ERROR, \"Invalid return relation specified for function\");\n> \t}\n> ! \telse if (functyptype == 'b')\n> \t{\n> \t\t/* Must be a base data type, i.e. 
scalar */\n> \t\tchar\t *attname;\n> ***************\n> *** 661,666 ****\n> --- 669,679 ----\n> \t\t\t\t\t\t 0,\n> \t\t\t\t\t\t false);\n> \t}\n> + \telse if (functyptype == 'p' && typeoid == RECORDOID)\n> + \t\telog(ERROR, \"Unable to determine tuple description for function\"\n> + \t\t\t\t\t\t\" returning \\\"record\\\"\");\n> + \telse\n> + \t\telog(ERROR, \"Unknown kind of return type specified for function\");\n> \n> \treturn tupdesc;\n> }\n> Index: src/backend/catalog/pg_proc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/catalog/pg_proc.c,v\n> retrieving revision 1.81\n> diff -c -r1.81 pg_proc.c\n> *** src/backend/catalog/pg_proc.c\t24 Jul 2002 19:11:09 -0000\t1.81\n> --- src/backend/catalog/pg_proc.c\t29 Jul 2002 02:02:31 -0000\n> ***************\n> *** 25,30 ****\n> --- 25,31 ----\n> #include \"miscadmin.h\"\n> #include \"parser/parse_coerce.h\"\n> #include \"parser/parse_expr.h\"\n> + #include \"parser/parse_relation.h\"\n> #include \"parser/parse_type.h\"\n> #include \"tcop/tcopprot.h\"\n> #include \"utils/builtins.h\"\n> ***************\n> *** 33,39 ****\n> #include \"utils/syscache.h\"\n> \n> \n> ! static void checkretval(Oid rettype, List *queryTreeList);\n> Datum fmgr_internal_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_c_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_sql_validator(PG_FUNCTION_ARGS);\n> --- 34,40 ----\n> #include \"utils/syscache.h\"\n> \n> \n> ! static void checkretval(Oid rettype, char fn_typtype, List *queryTreeList);\n> Datum fmgr_internal_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_c_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_sql_validator(PG_FUNCTION_ARGS);\n> ***************\n> *** 317,323 ****\n> * type he claims.\n> */\n> static void\n> ! checkretval(Oid rettype, List *queryTreeList)\n> {\n> \tQuery\t *parse;\n> \tint\t\t\tcmd;\n> --- 318,324 ----\n> * type he claims.\n> */\n> static void\n> ! 
checkretval(Oid rettype, char fn_typtype, List *queryTreeList)\n> {\n> \tQuery\t *parse;\n> \tint\t\t\tcmd;\n> ***************\n> *** 367,447 ****\n> \t */\n> \ttlistlen = ExecCleanTargetListLength(tlist);\n> \n> - \t/*\n> - \t * For base-type returns, the target list should have exactly one\n> - \t * entry, and its type should agree with what the user declared. (As\n> - \t * of Postgres 7.2, we accept binary-compatible types too.)\n> - \t */\n> \ttyperelid = typeidTypeRelid(rettype);\n> - \tif (typerelid == InvalidOid)\n> - \t{\n> - \t\tif (tlistlen != 1)\n> - \t\t\telog(ERROR, \"function declared to return %s returns multiple columns in final SELECT\",\n> - \t\t\t\t format_type_be(rettype));\n> \n> ! \t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> ! \t\tif (!IsBinaryCompatible(restype, rettype))\n> ! \t\t\telog(ERROR, \"return type mismatch in function: declared to return %s, returns %s\",\n> ! \t\t\t\t format_type_be(rettype), format_type_be(restype));\n> \n> ! \t\treturn;\n> ! \t}\n> \n> - \t/*\n> - \t * If the target list is of length 1, and the type of the varnode in\n> - \t * the target list matches the declared return type, this is okay.\n> - \t * This can happen, for example, where the body of the function is\n> - \t * 'SELECT func2()', where func2 has the same return type as the\n> - \t * function that's calling it.\n> - \t */\n> - \tif (tlistlen == 1)\n> - \t{\n> - \t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> - \t\tif (IsBinaryCompatible(restype, rettype))\n> \t\t\treturn;\n> \t}\n> \n> ! \t/*\n> ! \t * By here, the procedure returns a tuple or set of tuples. This part\n> ! \t * of the typechecking is a hack. We look up the relation that is the\n> ! \t * declared return type, and be sure that attributes 1 .. n in the\n> ! \t * target list match the declared types.\n> ! \t */\n> ! \treln = heap_open(typerelid, AccessShareLock);\n> ! \trelid = reln->rd_id;\n> ! \trelnatts = reln->rd_rel->relnatts;\n> ! \n> ! 
\tif (tlistlen != relnatts)\n> ! \t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t format_type_be(rettype), relnatts);\n> \n> ! \t/* expect attributes 1 .. n in order */\n> ! \ti = 0;\n> ! \tforeach(tlistitem, tlist)\n> ! \t{\n> ! \t\tTargetEntry *tle = (TargetEntry *) lfirst(tlistitem);\n> ! \t\tOid\t\t\ttletype;\n> ! \t\tOid\t\t\tatttype;\n> ! \n> ! \t\tif (tle->resdom->resjunk)\n> ! \t\t\tcontinue;\n> ! \t\ttletype = exprType(tle->expr);\n> ! \t\tatttype = reln->rd_att->attrs[i]->atttypid;\n> ! \t\tif (!IsBinaryCompatible(tletype, atttype))\n> ! \t\t\telog(ERROR, \"function declared to return %s returns %s instead of %s at column %d\",\n> ! \t\t\t\t format_type_be(rettype),\n> ! \t\t\t\t format_type_be(tletype),\n> ! \t\t\t\t format_type_be(atttype),\n> ! \t\t\t\t i + 1);\n> ! \t\ti++;\n> ! \t}\n> ! \n> ! \t/* this shouldn't happen, but let's just check... */\n> ! \tif (i != relnatts)\n> ! \t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t format_type_be(rettype), relnatts);\n> \n> ! \theap_close(reln, AccessShareLock);\n> }\n> \n> \n> --- 368,467 ----\n> \t */\n> \ttlistlen = ExecCleanTargetListLength(tlist);\n> \n> \ttyperelid = typeidTypeRelid(rettype);\n> \n> ! \tif (fn_typtype == 'b')\n> ! \t{\n> ! \t\t/*\n> ! \t\t * For base-type returns, the target list should have exactly one\n> ! \t\t * entry, and its type should agree with what the user declared. (As\n> ! \t\t * of Postgres 7.2, we accept binary-compatible types too.)\n> ! \t\t */\n> \n> ! \t\tif (typerelid == InvalidOid)\n> ! \t\t{\n> ! \t\t\tif (tlistlen != 1)\n> ! \t\t\t\telog(ERROR, \"function declared to return %s returns multiple columns in final SELECT\",\n> ! \t\t\t\t\t format_type_be(rettype));\n> ! \n> ! \t\t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> ! \t\t\tif (!IsBinaryCompatible(restype, rettype))\n> ! 
\t\t\t\telog(ERROR, \"return type mismatch in function: declared to return %s, returns %s\",\n> ! \t\t\t\t\t format_type_be(rettype), format_type_be(restype));\n> \n> \t\t\treturn;\n> + \t\t}\n> + \n> + \t\t/*\n> + \t\t * If the target list is of length 1, and the type of the varnode in\n> + \t\t * the target list matches the declared return type, this is okay.\n> + \t\t * This can happen, for example, where the body of the function is\n> + \t\t * 'SELECT func2()', where func2 has the same return type as the\n> + \t\t * function that's calling it.\n> + \t\t */\n> + \t\tif (tlistlen == 1)\n> + \t\t{\n> + \t\t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> + \t\t\tif (IsBinaryCompatible(restype, rettype))\n> + \t\t\t\treturn;\n> + \t\t}\n> \t}\n> + \telse if (fn_typtype == 'c')\n> + \t{\n> + \t\t/*\n> + \t\t * By here, the procedure returns a tuple or set of tuples. This part\n> + \t\t * of the typechecking is a hack. We look up the relation that is the\n> + \t\t * declared return type, and be sure that attributes 1 .. n in the\n> + \t\t * target list match the declared types.\n> + \t\t */\n> + \t\treln = heap_open(typerelid, AccessShareLock);\n> + \t\trelid = reln->rd_id;\n> + \t\trelnatts = reln->rd_rel->relnatts;\n> + \n> + \t\tif (tlistlen != relnatts)\n> + \t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> + \t\t\t\t format_type_be(rettype), relnatts);\n> + \n> + \t\t/* expect attributes 1 .. 
n in order */\n> + \t\ti = 0;\n> + \t\tforeach(tlistitem, tlist)\n> + \t\t{\n> + \t\t\tTargetEntry *tle = (TargetEntry *) lfirst(tlistitem);\n> + \t\t\tOid\t\t\ttletype;\n> + \t\t\tOid\t\t\tatttype;\n> + \n> + \t\t\tif (tle->resdom->resjunk)\n> + \t\t\t\tcontinue;\n> + \t\t\ttletype = exprType(tle->expr);\n> + \t\t\tatttype = reln->rd_att->attrs[i]->atttypid;\n> + \t\t\tif (!IsBinaryCompatible(tletype, atttype))\n> + \t\t\t\telog(ERROR, \"function declared to return %s returns %s instead of %s at column %d\",\n> + \t\t\t\t\t format_type_be(rettype),\n> + \t\t\t\t\t format_type_be(tletype),\n> + \t\t\t\t\t format_type_be(atttype),\n> + \t\t\t\t\t i + 1);\n> + \t\t\ti++;\n> + \t\t}\n> \n> ! \t\t/* this shouldn't happen, but let's just check... */\n> ! \t\tif (i != relnatts)\n> ! \t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t\t format_type_be(rettype), relnatts);\n> \n> ! \t\theap_close(reln, AccessShareLock);\n> \n> ! \t\treturn;\n> ! \t}\n> ! \telse if (fn_typtype == 'p' && rettype == RECORDOID)\n> ! \t{\n> ! \t\t/*\n> ! \t\t * For RECORD return type, defer this check until we get the\n> ! \t\t * first tuple.\n> ! \t\t */\n> ! \t\treturn;\n> ! \t}\n> ! \telse\n> ! \t\telog(ERROR, \"Unknown kind of return type specified for function\");\n> }\n> \n> \n> ***************\n> *** 540,545 ****\n> --- 560,566 ----\n> \tbool\t\tisnull;\n> \tDatum\t\ttmp;\n> \tchar\t *prosrc;\n> + \tchar\t\tfunctyptype;\n> \n> \ttuple = SearchSysCache(PROCOID, funcoid, 0, 0, 0);\n> \tif (!HeapTupleIsValid(tuple))\n> ***************\n> *** 556,563 ****\n> \n> \tprosrc = DatumGetCString(DirectFunctionCall1(textout, tmp));\n> \n> \tquerytree_list = pg_parse_and_rewrite(prosrc, proc->proargtypes, proc->pronargs);\n> ! 
\tcheckretval(proc->prorettype, querytree_list);\n> \n> \tReleaseSysCache(tuple);\n> \tPG_RETURN_BOOL(true);\n> --- 577,587 ----\n> \n> \tprosrc = DatumGetCString(DirectFunctionCall1(textout, tmp));\n> \n> + \t/* check typtype to see if we have a predetermined return type */\n> + \tfunctyptype = typeid_get_typtype(proc->prorettype);\n> + \n> \tquerytree_list = pg_parse_and_rewrite(prosrc, proc->proargtypes, proc->pronargs);\n> ! \tcheckretval(proc->prorettype, functyptype, querytree_list);\n> \n> \tReleaseSysCache(tuple);\n> \tPG_RETURN_BOOL(true);\n> Index: src/backend/executor/functions.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/executor/functions.c,v\n> retrieving revision 1.52\n> diff -c -r1.52 functions.c\n> *** src/backend/executor/functions.c\t20 Jun 2002 20:29:28 -0000\t1.52\n> --- src/backend/executor/functions.c\t27 Jul 2002 23:44:38 -0000\n> ***************\n> *** 194,200 ****\n> \t * get the type length and by-value flag from the type tuple\n> \t */\n> \tfcache->typlen = typeStruct->typlen;\n> ! \tif (typeStruct->typrelid == InvalidOid)\n> \t{\n> \t\t/* The return type is not a relation, so just use byval */\n> \t\tfcache->typbyval = typeStruct->typbyval;\n> --- 194,201 ----\n> \t * get the type length and by-value flag from the type tuple\n> \t */\n> \tfcache->typlen = typeStruct->typlen;\n> ! \n> ! 
\tif (typeStruct->typtype == 'b')\n> \t{\n> \t\t/* The return type is not a relation, so just use byval */\n> \t\tfcache->typbyval = typeStruct->typbyval;\n> Index: src/backend/executor/nodeFunctionscan.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/executor/nodeFunctionscan.c,v\n> retrieving revision 1.3\n> diff -c -r1.3 nodeFunctionscan.c\n> *** src/backend/executor/nodeFunctionscan.c\t20 Jul 2002 05:16:58 -0000\t1.3\n> --- src/backend/executor/nodeFunctionscan.c\t29 Jul 2002 02:05:14 -0000\n> ***************\n> *** 31,36 ****\n> --- 31,37 ----\n> #include \"executor/nodeFunctionscan.h\"\n> #include \"parser/parsetree.h\"\n> #include \"parser/parse_expr.h\"\n> + #include \"parser/parse_relation.h\"\n> #include \"parser/parse_type.h\"\n> #include \"storage/lmgr.h\"\n> #include \"tcop/pquery.h\"\n> ***************\n> *** 39,52 ****\n> #include \"utils/tuplestore.h\"\n> \n> static TupleTableSlot *FunctionNext(FunctionScan *node);\n> ! static TupleTableSlot *function_getonetuple(TupleTableSlot *slot,\n> ! \t\t\t\t\t\t\t\t\t\t\tNode *expr,\n> ! \t\t\t\t\t\t\t\t\t\t\tExprContext *econtext,\n> ! \t\t\t\t\t\t\t\t\t\t\tTupleDesc tupdesc,\n> ! \t\t\t\t\t\t\t\t\t\t\tbool returnsTuple,\n> \t\t\t\t\t\t\t\t\t\t\tbool *isNull,\n> \t\t\t\t\t\t\t\t\t\t\tExprDoneCond *isDone);\n> static FunctionMode get_functionmode(Node *expr);\n> \n> /* ----------------------------------------------------------------\n> *\t\t\t\t\t\tScan Support\n> --- 40,50 ----\n> #include \"utils/tuplestore.h\"\n> \n> static TupleTableSlot *FunctionNext(FunctionScan *node);\n> ! 
static TupleTableSlot *function_getonetuple(FunctionScanState *scanstate,\n> \t\t\t\t\t\t\t\t\t\t\tbool *isNull,\n> \t\t\t\t\t\t\t\t\t\t\tExprDoneCond *isDone);\n> static FunctionMode get_functionmode(Node *expr);\n> + static bool tupledesc_mismatch(TupleDesc tupdesc1, TupleDesc tupdesc2);\n> \n> /* ----------------------------------------------------------------\n> *\t\t\t\t\t\tScan Support\n> ***************\n> *** 62,70 ****\n> FunctionNext(FunctionScan *node)\n> {\n> \tTupleTableSlot\t *slot;\n> - \tNode\t\t\t *expr;\n> - \tExprContext\t\t *econtext;\n> - \tTupleDesc\t\t\ttupdesc;\n> \tEState\t\t\t *estate;\n> \tScanDirection\t\tdirection;\n> \tTuplestorestate\t *tuplestorestate;\n> --- 60,65 ----\n> ***************\n> *** 78,88 ****\n> \tscanstate = (FunctionScanState *) node->scan.scanstate;\n> \testate = node->scan.plan.state;\n> \tdirection = estate->es_direction;\n> - \tecontext = scanstate->csstate.cstate.cs_ExprContext;\n> \n> \ttuplestorestate = scanstate->tuplestorestate;\n> - \ttupdesc = scanstate->tupdesc;\n> - \texpr = scanstate->funcexpr;\n> \n> \t/*\n> \t * If first time through, read all tuples from function and pass them to\n> --- 73,80 ----\n> ***************\n> *** 108,117 ****\n> \n> \t\t\tisNull = false;\n> \t\t\tisDone = ExprSingleResult;\n> ! \t\t\tslot = function_getonetuple(scanstate->csstate.css_ScanTupleSlot,\n> ! \t\t\t\t\t\t\t\t\t\texpr, econtext, tupdesc,\n> ! \t\t\t\t\t\t\t\t\t\tscanstate->returnsTuple,\n> ! \t\t\t\t\t\t\t\t\t\t&isNull, &isDone);\n> \t\t\tif (TupIsNull(slot))\n> \t\t\t\tbreak;\n> \n> --- 100,106 ----\n> \n> \t\t\tisNull = false;\n> \t\t\tisDone = ExprSingleResult;\n> ! \t\t\tslot = function_getonetuple(scanstate, &isNull, &isDone);\n> \t\t\tif (TupIsNull(slot))\n> \t\t\t\tbreak;\n> \n> ***************\n> *** 169,175 ****\n> \tRangeTblEntry\t *rte;\n> \tOid\t\t\t\t\tfuncrettype;\n> \tOid\t\t\t\t\tfuncrelid;\n> ! 
\tTupleDesc\t\t\ttupdesc;\n> \n> \t/*\n> \t * FunctionScan should not have any children.\n> --- 158,165 ----\n> \tRangeTblEntry\t *rte;\n> \tOid\t\t\t\t\tfuncrettype;\n> \tOid\t\t\t\t\tfuncrelid;\n> ! \tchar\t\t\t\tfunctyptype;\n> ! \tTupleDesc\t\t\ttupdesc = NULL;\n> \n> \t/*\n> \t * FunctionScan should not have any children.\n> ***************\n> *** 209,233 ****\n> \trte = rt_fetch(node->scan.scanrelid, estate->es_range_table);\n> \tAssert(rte->rtekind == RTE_FUNCTION);\n> \tfuncrettype = exprType(rte->funcexpr);\n> ! \tfuncrelid = typeidTypeRelid(funcrettype);\n> \n> \t/*\n> \t * Build a suitable tupledesc representing the output rows\n> \t */\n> ! \tif (OidIsValid(funcrelid))\n> \t{\n> ! \t\t/*\n> ! \t\t * Composite data type, i.e. a table's row type\n> ! \t\t * Same as ordinary relation RTE\n> ! \t\t */\n> ! \t\tRelation\trel;\n> \n> ! \t\trel = relation_open(funcrelid, AccessShareLock);\n> ! \t\ttupdesc = CreateTupleDescCopy(RelationGetDescr(rel));\n> ! \t\trelation_close(rel, AccessShareLock);\n> ! \t\tscanstate->returnsTuple = true;\n> \t}\n> ! \telse\n> \t{\n> \t\t/*\n> \t\t * Must be a base data type, i.e. scalar\n> --- 199,234 ----\n> \trte = rt_fetch(node->scan.scanrelid, estate->es_range_table);\n> \tAssert(rte->rtekind == RTE_FUNCTION);\n> \tfuncrettype = exprType(rte->funcexpr);\n> ! \n> ! \t/*\n> ! \t * Now determine if the function returns a simple or composite type,\n> ! \t * and check/add column aliases.\n> ! \t */\n> ! \tfunctyptype = typeid_get_typtype(funcrettype);\n> \n> \t/*\n> \t * Build a suitable tupledesc representing the output rows\n> \t */\n> ! \tif (functyptype == 'c')\n> \t{\n> ! \t\tfuncrelid = typeidTypeRelid(funcrettype);\n> ! \t\tif (OidIsValid(funcrelid))\n> ! \t\t{\n> ! \t\t\t/*\n> ! \t\t\t * Composite data type, i.e. a table's row type\n> ! \t\t\t * Same as ordinary relation RTE\n> ! \t\t\t */\n> ! \t\t\tRelation\trel;\n> \n> ! \t\t\trel = relation_open(funcrelid, AccessShareLock);\n> ! 
\t\t\ttupdesc = CreateTupleDescCopy(RelationGetDescr(rel));\n> ! \t\t\trelation_close(rel, AccessShareLock);\n> ! \t\t\tscanstate->returnsTuple = true;\n> ! \t\t}\n> ! \t\telse\n> ! \t\t\telog(ERROR, \"Invalid return relation specified for function\");\n> \t}\n> ! \telse if (functyptype == 'b')\n> \t{\n> \t\t/*\n> \t\t * Must be a base data type, i.e. scalar\n> ***************\n> *** 244,249 ****\n> --- 245,265 ----\n> \t\t\t\t\t\t false);\n> \t\tscanstate->returnsTuple = false;\n> \t}\n> + \telse if (functyptype == 'p' && funcrettype == RECORDOID)\n> + \t{\n> + \t\t/*\n> + \t\t * Must be a pseudo type, i.e. record\n> + \t\t */\n> + \t\tList *coldeflist = rte->coldeflist;\n> + \n> + \t\ttupdesc = BuildDescForRelation(coldeflist);\n> + \t\tscanstate->returnsTuple = true;\n> + \t}\n> + \telse\n> + \t\telog(ERROR, \"Unknown kind of return type specified for function\");\n> + \n> + \tscanstate->fn_typeid = funcrettype;\n> + \tscanstate->fn_typtype = functyptype;\n> \tscanstate->tupdesc = tupdesc;\n> \tExecSetSlotDescriptor(scanstate->csstate.css_ScanTupleSlot,\n> \t\t\t\t\t\t tupdesc, false);\n> ***************\n> *** 404,420 ****\n> * Run the underlying function to get the next tuple\n> */\n> static TupleTableSlot *\n> ! function_getonetuple(TupleTableSlot *slot,\n> ! \t\t\t\t\t Node *expr,\n> ! \t\t\t\t\t ExprContext *econtext,\n> ! \t\t\t\t\t TupleDesc tupdesc,\n> ! \t\t\t\t\t bool returnsTuple,\n> \t\t\t\t\t bool *isNull,\n> \t\t\t\t\t ExprDoneCond *isDone)\n> {\n> ! \tHeapTuple\t\t\ttuple;\n> ! \tDatum\t\t\t\tretDatum;\n> ! \tchar\t\t\t\tnullflag;\n> \n> \t/*\n> \t * get the next Datum from the function\n> --- 420,439 ----\n> * Run the underlying function to get the next tuple\n> */\n> static TupleTableSlot *\n> ! function_getonetuple(FunctionScanState *scanstate,\n> \t\t\t\t\t bool *isNull,\n> \t\t\t\t\t ExprDoneCond *isDone)\n> {\n> ! \tHeapTuple\t\ttuple;\n> ! \tDatum\t\t\tretDatum;\n> ! \tchar\t\t\tnullflag;\n> ! 
\tTupleDesc\t\ttupdesc = scanstate->tupdesc;\n> ! \tbool\t\t\treturnsTuple = scanstate->returnsTuple;\n> ! \tNode\t\t *expr = scanstate->funcexpr;\n> ! \tOid\t\t\t\tfn_typeid = scanstate->fn_typeid;\n> ! \tchar\t\t\tfn_typtype = scanstate->fn_typtype;\n> ! \tExprContext\t *econtext = scanstate->csstate.cstate.cs_ExprContext;\n> ! \tTupleTableSlot *slot = scanstate->csstate.css_ScanTupleSlot;\n> \n> \t/*\n> \t * get the next Datum from the function\n> ***************\n> *** 435,440 ****\n> --- 454,469 ----\n> \t\t\t * function returns pointer to tts??\n> \t\t\t */\n> \t\t\tslot = (TupleTableSlot *) retDatum;\n> + \n> + \t\t\t/*\n> + \t\t\t * if function return type was RECORD, we need to check to be\n> + \t\t\t * sure the structure from the query matches the actual return\n> + \t\t\t * structure\n> + \t\t\t */\n> + \t\t\tif (fn_typtype == 'p' && fn_typeid == RECORDOID)\n> + \t\t\t\tif (tupledesc_mismatch(tupdesc, slot->ttc_tupleDescriptor))\n> + \t\t\t\t\telog(ERROR, \"Query specified return tuple and actual\"\n> + \t\t\t\t\t\t\t\t\t\" function return tuple do not match\");\n> \t\t}\n> \t\telse\n> \t\t{\n> ***************\n> *** 466,469 ****\n> --- 495,521 ----\n> \t * for the moment, hardwire this\n> \t */\n> \treturn PM_REPEATEDCALL;\n> + }\n> + \n> + static bool\n> + tupledesc_mismatch(TupleDesc tupdesc1, TupleDesc tupdesc2)\n> + {\n> + \tint\t\t\ti;\n> + \n> + \tif (tupdesc1->natts != tupdesc2->natts)\n> + \t\treturn true;\n> + \n> + \tfor (i = 0; i < tupdesc1->natts; i++)\n> + \t{\n> + \t\tForm_pg_attribute attr1 = tupdesc1->attrs[i];\n> + \t\tForm_pg_attribute attr2 = tupdesc2->attrs[i];\n> + \n> + \t\t/*\n> + \t\t * We really only care about number of attributes and data type\n> + \t\t */\n> + \t\tif (attr1->atttypid != attr2->atttypid)\n> + \t\t\treturn true;\n> + \t}\n> + \n> + \treturn false;\n> }\n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file: 
/opt/src/cvs/pgsql/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.197\n> diff -c -r1.197 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t24 Jul 2002 19:11:10 -0000\t1.197\n> --- src/backend/nodes/copyfuncs.c\t27 Jul 2002 19:21:36 -0000\n> ***************\n> *** 1482,1487 ****\n> --- 1482,1488 ----\n> \tnewnode->relid = from->relid;\n> \tNode_Copy(from, newnode, subquery);\n> \tNode_Copy(from, newnode, funcexpr);\n> + \tNode_Copy(from, newnode, coldeflist);\n> \tnewnode->jointype = from->jointype;\n> \tNode_Copy(from, newnode, joinaliasvars);\n> \tNode_Copy(from, newnode, alias);\n> ***************\n> *** 1707,1712 ****\n> --- 1708,1714 ----\n> \n> \tNode_Copy(from, newnode, funccallnode);\n> \tNode_Copy(from, newnode, alias);\n> + \tNode_Copy(from, newnode, coldeflist);\n> \n> \treturn newnode;\n> }\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.144\n> diff -c -r1.144 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t24 Jul 2002 19:11:10 -0000\t1.144\n> --- src/backend/nodes/equalfuncs.c\t27 Jul 2002 19:21:36 -0000\n> ***************\n> *** 1579,1584 ****\n> --- 1579,1586 ----\n> \t\treturn false;\n> \tif (!equal(a->alias, b->alias))\n> \t\treturn false;\n> + \tif (!equal(a->coldeflist, b->coldeflist))\n> + \t\treturn false;\n> \n> \treturn true;\n> }\n> ***************\n> *** 1691,1696 ****\n> --- 1693,1700 ----\n> \tif (!equal(a->subquery, b->subquery))\n> \t\treturn false;\n> \tif (!equal(a->funcexpr, b->funcexpr))\n> + \t\treturn false;\n> + \tif (!equal(a->coldeflist, b->coldeflist))\n> \t\treturn false;\n> \tif (a->jointype != b->jointype)\n> \t\treturn false;\n> Index: src/backend/nodes/outfuncs.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/nodes/outfuncs.c,v\n> retrieving revision 1.165\n> diff -c -r1.165 outfuncs.c\n> 
*** src/backend/nodes/outfuncs.c\t18 Jul 2002 17:14:19 -0000\t1.165\n> --- src/backend/nodes/outfuncs.c\t27 Jul 2002 19:21:36 -0000\n> ***************\n> *** 1004,1009 ****\n> --- 1004,1011 ----\n> \t\tcase RTE_FUNCTION:\n> \t\t\tappendStringInfo(str, \":funcexpr \");\n> \t\t\t_outNode(str, node->funcexpr);\n> + \t\t\tappendStringInfo(str, \":coldeflist \");\n> + \t\t\t_outNode(str, node->coldeflist);\n> \t\t\tbreak;\n> \t\tcase RTE_JOIN:\n> \t\t\tappendStringInfo(str, \":jointype %d :joinaliasvars \",\n> Index: src/backend/nodes/readfuncs.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/nodes/readfuncs.c,v\n> retrieving revision 1.126\n> diff -c -r1.126 readfuncs.c\n> *** src/backend/nodes/readfuncs.c\t18 Jul 2002 17:14:19 -0000\t1.126\n> --- src/backend/nodes/readfuncs.c\t27 Jul 2002 19:21:36 -0000\n> ***************\n> *** 1545,1550 ****\n> --- 1545,1554 ----\n> \t\tcase RTE_FUNCTION:\n> \t\t\ttoken = pg_strtok(&length); /* eat :funcexpr */\n> \t\t\tlocal_node->funcexpr = nodeRead(true);\t\t/* now read it */\n> + \n> + \t\t\ttoken = pg_strtok(&length); /* eat :coldeflist */\n> + \t\t\tlocal_node->coldeflist = nodeRead(true);\t/* now read it */\n> + \n> \t\t\tbreak;\n> \n> \t\tcase RTE_JOIN:\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.349\n> diff -c -r2.349 gram.y\n> *** src/backend/parser/gram.y\t24 Jul 2002 19:11:10 -0000\t2.349\n> --- src/backend/parser/gram.y\t27 Jul 2002 19:21:36 -0000\n> ***************\n> *** 218,224 ****\n> \t\t\t\ttarget_list, update_target_list, insert_column_list,\n> \t\t\t\tinsert_target_list, def_list, opt_indirection,\n> \t\t\t\tgroup_clause, TriggerFuncArgs, select_limit,\n> ! 
\t\t\t\topt_select_limit\n> \n> %type <range>\tinto_clause, OptTempTableName\n> \n> --- 218,224 ----\n> \t\t\t\ttarget_list, update_target_list, insert_column_list,\n> \t\t\t\tinsert_target_list, def_list, opt_indirection,\n> \t\t\t\tgroup_clause, TriggerFuncArgs, select_limit,\n> ! \t\t\t\topt_select_limit, tableFuncElementList\n> \n> %type <range>\tinto_clause, OptTempTableName\n> \n> ***************\n> *** 259,266 ****\n> \n> %type <vsetstmt> set_rest\n> \n> ! %type <node>\tOptTableElement, ConstraintElem\n> ! %type <node>\tcolumnDef\n> %type <defelt>\tdef_elem\n> %type <node>\tdef_arg, columnElem, where_clause, insert_column_item,\n> \t\t\t\ta_expr, b_expr, c_expr, r_expr, AexprConst,\n> --- 259,266 ----\n> \n> %type <vsetstmt> set_rest\n> \n> ! %type <node>\tOptTableElement, ConstraintElem, tableFuncElement\n> ! %type <node>\tcolumnDef, tableFuncColumnDef\n> %type <defelt>\tdef_elem\n> %type <node>\tdef_arg, columnElem, where_clause, insert_column_item,\n> \t\t\t\ta_expr, b_expr, c_expr, r_expr, AexprConst,\n> ***************\n> *** 4373,4378 ****\n> --- 4373,4406 ----\n> \t\t\t\t{\n> \t\t\t\t\tRangeFunction *n = makeNode(RangeFunction);\n> \t\t\t\t\tn->funccallnode = $1;\n> + \t\t\t\t\tn->coldeflist = NIL;\n> + \t\t\t\t\t$$ = (Node *) n;\n> + \t\t\t\t}\n> + \t\t\t| func_table AS '(' tableFuncElementList ')'\n> + \t\t\t\t{\n> + \t\t\t\t\tRangeFunction *n = makeNode(RangeFunction);\n> + \t\t\t\t\tn->funccallnode = $1;\n> + \t\t\t\t\tn->coldeflist = $4;\n> + \t\t\t\t\t$$ = (Node *) n;\n> + \t\t\t\t}\n> + \t\t\t| func_table AS ColId '(' tableFuncElementList ')'\n> + \t\t\t\t{\n> + \t\t\t\t\tRangeFunction *n = makeNode(RangeFunction);\n> + \t\t\t\t\tAlias *a = makeNode(Alias);\n> + \t\t\t\t\tn->funccallnode = $1;\n> + \t\t\t\t\ta->aliasname = $3;\n> + \t\t\t\t\tn->alias = a;\n> + \t\t\t\t\tn->coldeflist = $5;\n> + \t\t\t\t\t$$ = (Node *) n;\n> + \t\t\t\t}\n> + \t\t\t| func_table ColId '(' tableFuncElementList ')'\n> + \t\t\t\t{\n> + \t\t\t\t\tRangeFunction *n = 
makeNode(RangeFunction);\n> + \t\t\t\t\tAlias *a = makeNode(Alias);\n> + \t\t\t\t\tn->funccallnode = $1;\n> + \t\t\t\t\ta->aliasname = $2;\n> + \t\t\t\t\tn->alias = a;\n> + \t\t\t\t\tn->coldeflist = $4;\n> \t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t}\n> \t\t\t| func_table alias_clause\n> ***************\n> *** 4380,4385 ****\n> --- 4408,4414 ----\n> \t\t\t\t\tRangeFunction *n = makeNode(RangeFunction);\n> \t\t\t\t\tn->funccallnode = $1;\n> \t\t\t\t\tn->alias = $2;\n> + \t\t\t\t\tn->coldeflist = NIL;\n> \t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t}\n> \t\t\t| select_with_parens\n> ***************\n> *** 4620,4625 ****\n> --- 4649,4687 ----\n> \t\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NULL; }\n> \t\t;\n> \n> + \n> + tableFuncElementList:\n> + \t\t\ttableFuncElementList ',' tableFuncElement\n> + \t\t\t\t{\n> + \t\t\t\t\tif ($3 != NULL)\n> + \t\t\t\t\t\t$$ = lappend($1, $3);\n> + \t\t\t\t\telse\n> + \t\t\t\t\t\t$$ = $1;\n> + \t\t\t\t}\n> + \t\t\t| tableFuncElement\n> + \t\t\t\t{\n> + \t\t\t\t\tif ($1 != NULL)\n> + \t\t\t\t\t\t$$ = makeList1($1);\n> + \t\t\t\t\telse\n> + \t\t\t\t\t\t$$ = NIL;\n> + \t\t\t\t}\n> + \t\t\t| /*EMPTY*/\t\t\t\t\t\t\t{ $$ = NIL; }\n> + \t\t;\n> + \n> + tableFuncElement:\n> + \t\t\ttableFuncColumnDef\t\t\t\t\t{ $$ = $1; }\n> + \t\t;\n> + \n> + tableFuncColumnDef:\tColId Typename\n> + \t\t\t\t{\n> + \t\t\t\t\tColumnDef *n = makeNode(ColumnDef);\n> + \t\t\t\t\tn->colname = $1;\n> + \t\t\t\t\tn->typename = $2;\n> + \t\t\t\t\tn->constraints = NIL;\n> + \n> + \t\t\t\t\t$$ = (Node *)n;\n> + \t\t\t\t}\n> + \t\t;\n> \n> /*****************************************************************************\n> *\n> Index: src/backend/parser/parse_clause.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/parser/parse_clause.c,v\n> retrieving revision 1.94\n> diff -c -r1.94 parse_clause.c\n> *** src/backend/parser/parse_clause.c\t20 Jun 2002 20:29:32 -0000\t1.94\n> --- src/backend/parser/parse_clause.c\t27 Jul 
2002 19:21:36 -0000\n> ***************\n> *** 515,521 ****\n> \t * OK, build an RTE for the function.\n> \t */\n> \trte = addRangeTableEntryForFunction(pstate, funcname, funcexpr,\n> ! \t\t\t\t\t\t\t\t\t\tr->alias, true);\n> \n> \t/*\n> \t * We create a RangeTblRef, but we do not add it to the joinlist or\n> --- 515,521 ----\n> \t * OK, build an RTE for the function.\n> \t */\n> \trte = addRangeTableEntryForFunction(pstate, funcname, funcexpr,\n> ! \t\t\t\t\t\t\t\t\t\tr, true);\n> \n> \t/*\n> \t * We create a RangeTblRef, but we do not add it to the joinlist or\n> Index: src/backend/parser/parse_relation.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/parser/parse_relation.c,v\n> retrieving revision 1.70\n> diff -c -r1.70 parse_relation.c\n> *** src/backend/parser/parse_relation.c\t20 Jun 2002 20:29:33 -0000\t1.70\n> --- src/backend/parser/parse_relation.c\t27 Jul 2002 20:00:42 -0000\n> ***************\n> *** 681,692 ****\n> addRangeTableEntryForFunction(ParseState *pstate,\n> \t\t\t\t\t\t\t char *funcname,\n> \t\t\t\t\t\t\t Node *funcexpr,\n> ! \t\t\t\t\t\t\t Alias *alias,\n> \t\t\t\t\t\t\t bool inFromCl)\n> {\n> \tRangeTblEntry *rte = makeNode(RangeTblEntry);\n> \tOid\t\t\tfuncrettype = exprType(funcexpr);\n> ! \tOid\t\t\tfuncrelid;\n> \tAlias\t *eref;\n> \tint\t\t\tnumaliases;\n> \tint\t\t\tvarattno;\n> --- 681,694 ----\n> addRangeTableEntryForFunction(ParseState *pstate,\n> \t\t\t\t\t\t\t char *funcname,\n> \t\t\t\t\t\t\t Node *funcexpr,\n> ! \t\t\t\t\t\t\t RangeFunction *rangefunc,\n> \t\t\t\t\t\t\t bool inFromCl)\n> {\n> \tRangeTblEntry *rte = makeNode(RangeTblEntry);\n> \tOid\t\t\tfuncrettype = exprType(funcexpr);\n> ! \tchar\t\tfunctyptype;\n> ! \tAlias\t *alias = rangefunc->alias;\n> ! 
\tList\t *coldeflist = rangefunc->coldeflist;\n> \tAlias\t *eref;\n> \tint\t\t\tnumaliases;\n> \tint\t\t\tvarattno;\n> ***************\n> *** 695,700 ****\n> --- 697,703 ----\n> \trte->relid = InvalidOid;\n> \trte->subquery = NULL;\n> \trte->funcexpr = funcexpr;\n> + \trte->coldeflist = coldeflist;\n> \trte->alias = alias;\n> \n> \teref = alias ? (Alias *) copyObject(alias) : makeAlias(funcname, NIL);\n> ***************\n> *** 706,752 ****\n> \t * Now determine if the function returns a simple or composite type,\n> \t * and check/add column aliases.\n> \t */\n> ! \tfuncrelid = typeidTypeRelid(funcrettype);\n> \n> ! \tif (OidIsValid(funcrelid))\n> \t{\n> \t\t/*\n> ! \t\t * Composite data type, i.e. a table's row type\n> ! \t\t *\n> ! \t\t * Get the rel's relcache entry. This access ensures that we have an\n> ! \t\t * up-to-date relcache entry for the rel.\n> \t\t */\n> ! \t\tRelation\trel;\n> ! \t\tint\t\t\tmaxattrs;\n> \n> ! \t\trel = heap_open(funcrelid, AccessShareLock);\n> \n> ! \t\t/*\n> ! \t\t * Since the rel is open anyway, let's check that the number of column\n> ! \t\t * aliases is reasonable.\n> ! \t\t */\n> ! \t\tmaxattrs = RelationGetNumberOfAttributes(rel);\n> ! \t\tif (maxattrs < numaliases)\n> ! \t\t\telog(ERROR, \"Table \\\"%s\\\" has %d columns available but %d columns specified\",\n> ! \t\t\t\t RelationGetRelationName(rel), maxattrs, numaliases);\n> \n> ! \t\t/* fill in alias columns using actual column names */\n> ! \t\tfor (varattno = numaliases; varattno < maxattrs; varattno++)\n> ! \t\t{\n> ! \t\t\tchar\t *attrname;\n> \n> ! \t\t\tattrname = pstrdup(NameStr(rel->rd_att->attrs[varattno]->attname));\n> ! \t\t\teref->colnames = lappend(eref->colnames, makeString(attrname));\n> \t\t}\n> ! \n> ! \t\t/*\n> ! \t\t * Drop the rel refcount, but keep the access lock till end of\n> ! \t\t * transaction so that the table can't be deleted or have its schema\n> ! \t\t * modified underneath us.\n> ! \t\t */\n> ! \t\theap_close(rel, NoLock);\n> \t}\n> ! 
\telse\n> \t{\n> \t\t/*\n> \t\t * Must be a base data type, i.e. scalar.\n> --- 709,764 ----\n> \t * Now determine if the function returns a simple or composite type,\n> \t * and check/add column aliases.\n> \t */\n> ! \tfunctyptype = typeid_get_typtype(funcrettype);\n> \n> ! \tif (functyptype == 'c')\n> \t{\n> \t\t/*\n> ! \t\t * Named composite data type, i.e. a table's row type\n> \t\t */\n> ! \t\tOid\t\t\tfuncrelid = typeidTypeRelid(funcrettype);\n> \n> ! \t\tif (OidIsValid(funcrelid))\n> ! \t\t{\n> ! \t\t\t/*\n> ! \t\t\t * Get the rel's relcache entry. This access ensures that we have an\n> ! \t\t\t * up-to-date relcache entry for the rel.\n> ! \t\t\t */\n> ! \t\t\tRelation\trel;\n> ! \t\t\tint\t\t\tmaxattrs;\n> ! \n> ! \t\t\trel = heap_open(funcrelid, AccessShareLock);\n> ! \n> ! \t\t\t/*\n> ! \t\t\t * Since the rel is open anyway, let's check that the number of column\n> ! \t\t\t * aliases is reasonable.\n> ! \t\t\t */\n> ! \t\t\tmaxattrs = RelationGetNumberOfAttributes(rel);\n> ! \t\t\tif (maxattrs < numaliases)\n> ! \t\t\t\telog(ERROR, \"Table \\\"%s\\\" has %d columns available but %d columns specified\",\n> ! \t\t\t\t\t RelationGetRelationName(rel), maxattrs, numaliases);\n> \n> ! \t\t\t/* fill in alias columns using actual column names */\n> ! \t\t\tfor (varattno = numaliases; varattno < maxattrs; varattno++)\n> ! \t\t\t{\n> ! \t\t\t\tchar\t *attrname;\n> \n> ! \t\t\t\tattrname = pstrdup(NameStr(rel->rd_att->attrs[varattno]->attname));\n> ! \t\t\t\teref->colnames = lappend(eref->colnames, makeString(attrname));\n> ! \t\t\t}\n> \n> ! \t\t\t/*\n> ! \t\t\t * Drop the rel refcount, but keep the access lock till end of\n> ! \t\t\t * transaction so that the table can't be deleted or have its schema\n> ! \t\t\t * modified underneath us.\n> ! \t\t\t */\n> ! \t\t\theap_close(rel, NoLock);\n> \t\t}\n> ! \t\telse\n> ! \t\t\telog(ERROR, \"Invalid return relation specified for function %s\",\n> ! \t\t\t\t funcname);\n> \t}\n> ! 
\telse if (functyptype == 'b')\n> \t{\n> \t\t/*\n> \t\t * Must be a base data type, i.e. scalar.\n> ***************\n> *** 758,763 ****\n> --- 770,791 ----\n> \t\tif (numaliases == 0)\n> \t\t\teref->colnames = makeList1(makeString(funcname));\n> \t}\n> + \telse if (functyptype == 'p' && funcrettype == RECORDOID)\n> + \t{\n> + \t\tList\t *col;\n> + \n> + \t\tforeach(col, coldeflist)\n> + \t\t{\n> + \t\t\tchar\t *attrname;\n> + \t\t\tColumnDef *n = lfirst(col);\n> + \n> + \t\t\tattrname = pstrdup(n->colname);\n> + \t\t\teref->colnames = lappend(eref->colnames, makeString(attrname));\n> + \t\t}\n> + \t}\n> + \telse\n> + \t\telog(ERROR, \"Unknown kind of return type specified for function %s\",\n> + \t\t\t funcname);\n> \n> \t/*----------\n> \t * Flags:\n> ***************\n> *** 1030,1082 ****\n> \t\tcase RTE_FUNCTION:\n> \t\t\t{\n> \t\t\t\t/* Function RTE */\n> ! \t\t\t\tOid\t\t\tfuncrettype = exprType(rte->funcexpr);\n> ! \t\t\t\tOid\t\t\tfuncrelid = typeidTypeRelid(funcrettype);\n> ! \n> ! \t\t\t\tif (OidIsValid(funcrelid))\n> \t\t\t\t{\n> ! \t\t\t\t\t/*\n> ! \t\t\t\t\t * Composite data type, i.e. a table's row type\n> ! \t\t\t\t\t * Same as ordinary relation RTE\n> ! \t\t\t\t\t */\n> ! \t\t\t\t\tRelation\trel;\n> ! \t\t\t\t\tint\t\t\tmaxattrs;\n> ! \t\t\t\t\tint\t\t\tnumaliases;\n> ! \n> ! \t\t\t\t\trel = heap_open(funcrelid, AccessShareLock);\n> ! \t\t\t\t\tmaxattrs = RelationGetNumberOfAttributes(rel);\n> ! \t\t\t\t\tnumaliases = length(rte->eref->colnames);\n> ! \n> ! \t\t\t\t\tfor (varattno = 0; varattno < maxattrs; varattno++)\n> \t\t\t\t\t{\n> - \t\t\t\t\t\tForm_pg_attribute attr = rel->rd_att->attrs[varattno];\n> \n> ! \t\t\t\t\t\tif (colnames)\n> ! \t\t\t\t\t\t{\n> ! \t\t\t\t\t\t\tchar\t *label;\n> ! \n> ! \t\t\t\t\t\t\tif (varattno < numaliases)\n> ! \t\t\t\t\t\t\t\tlabel = strVal(nth(varattno, rte->eref->colnames));\n> ! \t\t\t\t\t\t\telse\n> ! \t\t\t\t\t\t\t\tlabel = NameStr(attr->attname);\n> ! 
\t\t\t\t\t\t\t*colnames = lappend(*colnames, makeString(pstrdup(label)));\n> ! \t\t\t\t\t\t}\n> \n> ! \t\t\t\t\t\tif (colvars)\n> \t\t\t\t\t\t{\n> ! \t\t\t\t\t\t\tVar\t\t *varnode;\n> \n> ! \t\t\t\t\t\t\tvarnode = makeVar(rtindex, attr->attnum,\n> ! \t\t\t\t\t\t\t\t\t\t\t attr->atttypid, attr->atttypmod,\n> ! \t\t\t\t\t\t\t\t\t\t\t sublevels_up);\n> \n> ! \t\t\t\t\t\t\t*colvars = lappend(*colvars, varnode);\n> \t\t\t\t\t\t}\n> - \t\t\t\t\t}\n> \n> ! \t\t\t\t\theap_close(rel, AccessShareLock);\n> \t\t\t\t}\n> ! \t\t\t\telse\n> \t\t\t\t{\n> \t\t\t\t\t/*\n> \t\t\t\t\t * Must be a base data type, i.e. scalar\n> --- 1058,1124 ----\n> \t\tcase RTE_FUNCTION:\n> \t\t\t{\n> \t\t\t\t/* Function RTE */\n> ! \t\t\t\tOid\tfuncrettype = exprType(rte->funcexpr);\n> ! \t\t\t\tchar functyptype = typeid_get_typtype(funcrettype);\n> ! \t\t\t\tList *coldeflist = rte->coldeflist;\n> ! \n> ! \t\t\t\t/*\n> ! \t\t\t\t * Build a suitable tupledesc representing the output rows\n> ! \t\t\t\t */\n> ! \t\t\t\tif (functyptype == 'c')\n> \t\t\t\t{\n> ! \t\t\t\t\tOid\tfuncrelid = typeidTypeRelid(funcrettype);\n> ! \t\t\t\t\tif (OidIsValid(funcrelid))\n> \t\t\t\t\t{\n> \n> ! \t\t\t\t\t\t/*\n> ! \t\t\t\t\t\t * Composite data type, i.e. a table's row type\n> ! \t\t\t\t\t\t * Same as ordinary relation RTE\n> ! \t\t\t\t\t\t */\n> ! \t\t\t\t\t\tRelation\trel;\n> ! \t\t\t\t\t\tint\t\t\tmaxattrs;\n> ! \t\t\t\t\t\tint\t\t\tnumaliases;\n> ! \n> ! \t\t\t\t\t\trel = heap_open(funcrelid, AccessShareLock);\n> ! \t\t\t\t\t\tmaxattrs = RelationGetNumberOfAttributes(rel);\n> ! \t\t\t\t\t\tnumaliases = length(rte->eref->colnames);\n> \n> ! \t\t\t\t\t\tfor (varattno = 0; varattno < maxattrs; varattno++)\n> \t\t\t\t\t\t{\n> ! \t\t\t\t\t\t\tForm_pg_attribute attr = rel->rd_att->attrs[varattno];\n> \n> ! \t\t\t\t\t\t\tif (colnames)\n> ! \t\t\t\t\t\t\t{\n> ! \t\t\t\t\t\t\t\tchar\t *label;\n> ! \n> ! \t\t\t\t\t\t\t\tif (varattno < numaliases)\n> ! 
\t\t\t\t\t\t\t\t\tlabel = strVal(nth(varattno, rte->eref->colnames));\n> ! \t\t\t\t\t\t\t\telse\n> ! \t\t\t\t\t\t\t\t\tlabel = NameStr(attr->attname);\n> ! \t\t\t\t\t\t\t\t*colnames = lappend(*colnames, makeString(pstrdup(label)));\n> ! \t\t\t\t\t\t\t}\n> ! \n> ! \t\t\t\t\t\t\tif (colvars)\n> ! \t\t\t\t\t\t\t{\n> ! \t\t\t\t\t\t\t\tVar\t\t *varnode;\n> ! \n> ! \t\t\t\t\t\t\t\tvarnode = makeVar(rtindex,\n> ! \t\t\t\t\t\t\t\t\t\t\t\tattr->attnum,\n> ! \t\t\t\t\t\t\t\t\t\t\t\tattr->atttypid,\n> ! \t\t\t\t\t\t\t\t\t\t\t\tattr->atttypmod,\n> ! \t\t\t\t\t\t\t\t\t\t\t\tsublevels_up);\n> \n> ! \t\t\t\t\t\t\t\t*colvars = lappend(*colvars, varnode);\n> ! \t\t\t\t\t\t\t}\n> \t\t\t\t\t\t}\n> \n> ! \t\t\t\t\t\theap_close(rel, AccessShareLock);\n> ! \t\t\t\t\t}\n> ! \t\t\t\t\telse\n> ! \t\t\t\t\t\telog(ERROR, \"Invalid return relation specified\"\n> ! \t\t\t\t\t\t\t\t\t\" for function\");\n> \t\t\t\t}\n> ! \t\t\t\telse if (functyptype == 'b')\n> \t\t\t\t{\n> \t\t\t\t\t/*\n> \t\t\t\t\t * Must be a base data type, i.e. 
scalar\n> ***************\n> *** 1096,1101 ****\n> --- 1138,1184 ----\n> \t\t\t\t\t\t*colvars = lappend(*colvars, varnode);\n> \t\t\t\t\t}\n> \t\t\t\t}\n> + \t\t\t\telse if (functyptype == 'p' && funcrettype == RECORDOID)\n> + \t\t\t\t{\n> + \t\t\t\t\tList\t *col;\n> + \t\t\t\t\tint\t\t\tattnum = 0;\n> + \n> + \t\t\t\t\tforeach(col, coldeflist)\n> + \t\t\t\t\t{\n> + \t\t\t\t\t\tColumnDef *colDef = lfirst(col);\n> + \n> + \t\t\t\t\t\tattnum++;\n> + \t\t\t\t\t\tif (colnames)\n> + \t\t\t\t\t\t{\n> + \t\t\t\t\t\t\tchar\t *attrname;\n> + \n> + \t\t\t\t\t\t\tattrname = pstrdup(colDef->colname);\n> + \t\t\t\t\t\t\t*colnames = lappend(*colnames, makeString(attrname));\n> + \t\t\t\t\t\t}\n> + \n> + \t\t\t\t\t\tif (colvars)\n> + \t\t\t\t\t\t{\n> + \t\t\t\t\t\t\tVar\t\t *varnode;\n> + \t\t\t\t\t\t\tHeapTuple\ttypeTuple;\n> + \t\t\t\t\t\t\tOid\t\t\tatttypid;\n> + \n> + \t\t\t\t\t\t\ttypeTuple = typenameType(colDef->typename);\n> + \t\t\t\t\t\t\tatttypid = HeapTupleGetOid(typeTuple);\n> + \t\t\t\t\t\t\tReleaseSysCache(typeTuple);\n> + \n> + \t\t\t\t\t\t\tvarnode = makeVar(rtindex,\n> + \t\t\t\t\t\t\t\t\t\t\tattnum,\n> + \t\t\t\t\t\t\t\t\t\t\tatttypid,\n> + \t\t\t\t\t\t\t\t\t\t\t-1,\n> + \t\t\t\t\t\t\t\t\t\t\tsublevels_up);\n> + \n> + \t\t\t\t\t\t\t*colvars = lappend(*colvars, varnode);\n> + \t\t\t\t\t\t}\n> + \t\t\t\t\t}\n> + \t\t\t\t}\n> + \t\t\t\telse\n> + \t\t\t\t\telog(ERROR, \"Unknown kind of return type specified\"\n> + \t\t\t\t\t\t\t\t\" for function\");\n> \t\t\t}\n> \t\t\tbreak;\n> \t\tcase RTE_JOIN:\n> ***************\n> *** 1277,1308 ****\n> \t\tcase RTE_FUNCTION:\n> \t\t\t{\n> \t\t\t\t/* Function RTE */\n> ! \t\t\t\tOid\t\t\tfuncrettype = exprType(rte->funcexpr);\n> ! \t\t\t\tOid\t\t\tfuncrelid = typeidTypeRelid(funcrettype);\n> ! \n> ! \t\t\t\tif (OidIsValid(funcrelid))\n> \t\t\t\t{\n> \t\t\t\t\t/*\n> \t\t\t\t\t * Composite data type, i.e. a table's row type\n> \t\t\t\t\t * Same as ordinary relation RTE\n> \t\t\t\t\t */\n> ! \t\t\t\t\tHeapTuple\t\t\ttp;\n> ! 
\t\t\t\t\tForm_pg_attribute\tatt_tup;\n> \n> ! \t\t\t\t\ttp = SearchSysCache(ATTNUM,\n> ! \t\t\t\t\t\t\t\t\t\tObjectIdGetDatum(funcrelid),\n> ! \t\t\t\t\t\t\t\t\t\tInt16GetDatum(attnum),\n> ! \t\t\t\t\t\t\t\t\t\t0, 0);\n> ! \t\t\t\t\t/* this shouldn't happen... */\n> ! \t\t\t\t\tif (!HeapTupleIsValid(tp))\n> ! \t\t\t\t\t\telog(ERROR, \"Relation %s does not have attribute %d\",\n> ! \t\t\t\t\t\t\t get_rel_name(funcrelid), attnum);\n> ! \t\t\t\t\tatt_tup = (Form_pg_attribute) GETSTRUCT(tp);\n> ! \t\t\t\t\t*vartype = att_tup->atttypid;\n> ! \t\t\t\t\t*vartypmod = att_tup->atttypmod;\n> ! \t\t\t\t\tReleaseSysCache(tp);\n> \t\t\t\t}\n> ! \t\t\t\telse\n> \t\t\t\t{\n> \t\t\t\t\t/*\n> \t\t\t\t\t * Must be a base data type, i.e. scalar\n> --- 1360,1403 ----\n> \t\tcase RTE_FUNCTION:\n> \t\t\t{\n> \t\t\t\t/* Function RTE */\n> ! \t\t\t\tOid funcrettype = exprType(rte->funcexpr);\n> ! \t\t\t\tchar functyptype = typeid_get_typtype(funcrettype);\n> ! \t\t\t\tList *coldeflist = rte->coldeflist;\n> ! \n> ! \t\t\t\t/*\n> ! \t\t\t\t * Build a suitable tupledesc representing the output rows\n> ! \t\t\t\t */\n> ! \t\t\t\tif (functyptype == 'c')\n> \t\t\t\t{\n> \t\t\t\t\t/*\n> \t\t\t\t\t * Composite data type, i.e. a table's row type\n> \t\t\t\t\t * Same as ordinary relation RTE\n> \t\t\t\t\t */\n> ! \t\t\t\t\tOid funcrelid = typeidTypeRelid(funcrettype);\n> ! \n> ! \t\t\t\t\tif (OidIsValid(funcrelid))\n> ! \t\t\t\t\t{\n> ! \t\t\t\t\t\tHeapTuple\t\t\ttp;\n> ! \t\t\t\t\t\tForm_pg_attribute\tatt_tup;\n> \n> ! \t\t\t\t\t\ttp = SearchSysCache(ATTNUM,\n> ! \t\t\t\t\t\t\t\t\t\t\tObjectIdGetDatum(funcrelid),\n> ! \t\t\t\t\t\t\t\t\t\t\tInt16GetDatum(attnum),\n> ! \t\t\t\t\t\t\t\t\t\t\t0, 0);\n> ! \t\t\t\t\t\t/* this shouldn't happen... */\n> ! \t\t\t\t\t\tif (!HeapTupleIsValid(tp))\n> ! \t\t\t\t\t\t\telog(ERROR, \"Relation %s does not have attribute %d\",\n> ! \t\t\t\t\t\t\t\t get_rel_name(funcrelid), attnum);\n> ! \t\t\t\t\t\tatt_tup = (Form_pg_attribute) GETSTRUCT(tp);\n> ! 
\t\t\t\t\t\t*vartype = att_tup->atttypid;\n> ! \t\t\t\t\t\t*vartypmod = att_tup->atttypmod;\n> ! \t\t\t\t\t\tReleaseSysCache(tp);\n> ! \t\t\t\t\t}\n> ! \t\t\t\t\telse\n> ! \t\t\t\t\t\telog(ERROR, \"Invalid return relation specified\"\n> ! \t\t\t\t\t\t\t\t\t\" for function\");\n> \t\t\t\t}\n> ! \t\t\t\telse if (functyptype == 'b')\n> \t\t\t\t{\n> \t\t\t\t\t/*\n> \t\t\t\t\t * Must be a base data type, i.e. scalar\n> ***************\n> *** 1310,1315 ****\n> --- 1405,1426 ----\n> \t\t\t\t\t*vartype = funcrettype;\n> \t\t\t\t\t*vartypmod = -1;\n> \t\t\t\t}\n> + \t\t\t\telse if (functyptype == 'p' && funcrettype == RECORDOID)\n> + \t\t\t\t{\n> + \t\t\t\t\tColumnDef *colDef = nth(attnum - 1, coldeflist);\n> + \t\t\t\t\tHeapTuple\ttypeTuple;\n> + \t\t\t\t\tOid\t\t\tatttypid;\n> + \n> + \t\t\t\t\ttypeTuple = typenameType(colDef->typename);\n> + \t\t\t\t\tatttypid = HeapTupleGetOid(typeTuple);\n> + \t\t\t\t\tReleaseSysCache(typeTuple);\n> + \n> + \t\t\t\t\t*vartype = atttypid;\n> + \t\t\t\t\t*vartypmod = -1;\n> + \t\t\t\t}\n> + \t\t\t\telse\n> + \t\t\t\t\telog(ERROR, \"Unknown kind of return type specified\"\n> + \t\t\t\t\t\t\t\t\" for function\");\n> \t\t\t}\n> \t\t\tbreak;\n> \t\tcase RTE_JOIN:\n> ***************\n> *** 1448,1451 ****\n> --- 1559,1587 ----\n> \t\telog(NOTICE, \"Adding missing FROM-clause entry%s for table \\\"%s\\\"\",\n> \t\t\t pstate->parentParseState != NULL ? 
\" in subquery\" : \"\",\n> \t\t\t relation->relname);\n> + }\n> + \n> + char\n> + typeid_get_typtype(Oid typeid)\n> + {\n> + \tHeapTuple\t\ttypeTuple;\n> + \tForm_pg_type\ttypeStruct;\n> + \tchar\t\t\tresult;\n> + \n> + \t/*\n> + \t * determine if the function returns a simple, named composite,\n> + \t * or anonymous composite type\n> + \t */\n> + \ttypeTuple = SearchSysCache(TYPEOID,\n> + \t\t\t\t\t\t\t ObjectIdGetDatum(typeid),\n> + \t\t\t\t\t\t\t 0, 0, 0);\n> + \tif (!HeapTupleIsValid(typeTuple))\n> + \t\telog(ERROR, \"cache lookup for type %u failed\", typeid);\n> + \ttypeStruct = (Form_pg_type) GETSTRUCT(typeTuple);\n> + \n> + \tresult = typeStruct->typtype;\n> + \n> + \tReleaseSysCache(typeTuple);\n> + \n> + \treturn result;\n> }\n> Index: src/include/catalog/pg_type.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/include/catalog/pg_type.h,v\n> retrieving revision 1.125\n> diff -c -r1.125 pg_type.h\n> *** src/include/catalog/pg_type.h\t24 Jul 2002 19:11:13 -0000\t1.125\n> --- src/include/catalog/pg_type.h\t27 Jul 2002 19:58:03 -0000\n> ***************\n> *** 60,69 ****\n> \tbool\t\ttypbyval;\n> \n> \t/*\n> ! \t * typtype is 'b' for a basic type and 'c' for a catalog type (ie a\n> ! \t * class). If typtype is 'c', typrelid is the OID of the class' entry\n> ! \t * in pg_class. (Why do we need an entry in pg_type for classes,\n> ! \t * anyway?)\n> \t */\n> \tchar\t\ttyptype;\n> \n> --- 60,69 ----\n> \tbool\t\ttypbyval;\n> \n> \t/*\n> ! \t * typtype is 'b' for a basic type, 'c' for a catalog type (ie a\n> ! \t * class), or 'p' for a pseudo type. If typtype is 'c', typrelid is the\n> ! \t * OID of the class' entry in pg_class. (Why do we need an entry in\n> ! 
\t * pg_type for classes, anyway?)\n> \t */\n> \tchar\t\ttyptype;\n> \n> ***************\n> *** 501,506 ****\n> --- 501,516 ----\n> DATA(insert OID = 2210 ( _regclass PGNSP PGUID -1 f b t \\054 0 2205 array_in array_out i x f 0 -1 0 _null_ _null_ ));\n> DATA(insert OID = 2211 ( _regtype PGNSP PGUID -1 f b t \\054 0 2206 array_in array_out i x f 0 -1 0 _null_ _null_ ));\n> \n> + /*\n> + * pseudo-types \n> + *\n> + * types with typtype='p' are special types that represent classes of types\n> + * that are not easily defined in advance. Currently there is only one pseudo\n> + * type -- record. The record type is used to specify that the value is a\n> + * tuple, but of unknown structure until runtime. \n> + */\n> + DATA(insert OID = 2249 ( record PGNSP PGUID 4 t p t \\054 0 0 oidin oidout i p f 0 -1 0 _null_ _null_ ));\n> + #define RECORDOID\t\t2249\n> \n> /*\n> * prototypes for functions in pg_type.c\n> Index: src/include/nodes/execnodes.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/include/nodes/execnodes.h,v\n> retrieving revision 1.70\n> diff -c -r1.70 execnodes.h\n> *** src/include/nodes/execnodes.h\t20 Jun 2002 20:29:49 -0000\t1.70\n> --- src/include/nodes/execnodes.h\t28 Jul 2002 22:09:25 -0000\n> ***************\n> *** 509,519 ****\n> *\t\tFunction nodes are used to scan the results of a\n> *\t\tfunction appearing in FROM (typically a function returning set).\n> *\n> ! *\t\tfunctionmode\t\t\tfunction operating mode:\n> *\t\t\t\t\t\t\t- repeated call\n> *\t\t\t\t\t\t\t- materialize\n> *\t\t\t\t\t\t\t- return query\n> *\t\ttuplestorestate\t\tprivate state of tuplestore.c\n> * ----------------\n> */\n> typedef enum FunctionMode\n> --- 509,525 ----\n> *\t\tFunction nodes are used to scan the results of a\n> *\t\tfunction appearing in FROM (typically a function returning set).\n> *\n> ! 
*\t\tfunctionmode\t\tfunction operating mode:\n> *\t\t\t\t\t\t\t- repeated call\n> *\t\t\t\t\t\t\t- materialize\n> *\t\t\t\t\t\t\t- return query\n> + *\t\ttupdesc\t\t\t\tfunction's return tuple description\n> *\t\ttuplestorestate\t\tprivate state of tuplestore.c\n> + *\t\tfuncexpr\t\t\tfunction expression being evaluated\n> + *\t\treturnsTuple\t\tdoes function return tuples?\n> + *\t\tfn_typeid\t\t\tOID of function return type\n> + *\t\tfn_typtype\t\t\treturn Datum type, i.e. 'b'ase,\n> + *\t\t\t\t\t\t\t'c'atalog, or 'p'seudo\n> * ----------------\n> */\n> typedef enum FunctionMode\n> ***************\n> *** 525,536 ****\n> \n> typedef struct FunctionScanState\n> {\n> ! \tCommonScanState csstate;\t/* its first field is NodeTag */\n> \tFunctionMode\tfunctionmode;\n> \tTupleDesc\t\ttupdesc;\n> \tvoid\t\t *tuplestorestate;\n> ! \tNode\t\t *funcexpr;\t/* function expression being evaluated */\n> ! \tbool\t\t\treturnsTuple; /* does function return tuples? */\n> } FunctionScanState;\n> \n> /* ----------------------------------------------------------------\n> --- 531,544 ----\n> \n> typedef struct FunctionScanState\n> {\n> ! \tCommonScanState csstate;\t\t/* its first field is NodeTag */\n> \tFunctionMode\tfunctionmode;\n> \tTupleDesc\t\ttupdesc;\n> \tvoid\t\t *tuplestorestate;\n> ! \tNode\t\t *funcexpr;\n> ! \tbool\t\t\treturnsTuple;\n> ! \tOid\t\t\t\tfn_typeid;\n> ! 
\tchar\t\t\tfn_typtype;\n> } FunctionScanState;\n> \n> /* ----------------------------------------------------------------\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.194\n> diff -c -r1.194 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t24 Jul 2002 19:11:14 -0000\t1.194\n> --- src/include/nodes/parsenodes.h\t27 Jul 2002 19:21:36 -0000\n> ***************\n> *** 400,405 ****\n> --- 400,407 ----\n> \tNodeTag\t\ttype;\n> \tNode\t *funccallnode;\t/* untransformed function call tree */\n> \tAlias\t *alias;\t\t\t/* table alias & optional column aliases */\n> + \tList\t *coldeflist;\t\t/* list of ColumnDef nodes for runtime\n> + \t\t\t\t\t\t\t\t * assignment of RECORD TupleDesc */\n> } RangeFunction;\n> \n> /*\n> ***************\n> *** 527,532 ****\n> --- 529,536 ----\n> \t * Fields valid for a function RTE (else NULL):\n> \t */\n> \tNode\t *funcexpr;\t\t/* expression tree for func call */\n> + \tList\t *coldeflist;\t\t/* list of ColumnDef nodes for runtime\n> + \t\t\t\t\t\t\t\t * assignment of RECORD TupleDesc */\n> \n> \t/*\n> \t * Fields valid for a join RTE (else NULL/zero):\n> Index: src/include/parser/parse_relation.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/include/parser/parse_relation.h,v\n> retrieving revision 1.34\n> diff -c -r1.34 parse_relation.h\n> *** src/include/parser/parse_relation.h\t20 Jun 2002 20:29:51 -0000\t1.34\n> --- src/include/parser/parse_relation.h\t27 Jul 2002 19:21:36 -0000\n> ***************\n> *** 44,50 ****\n> extern RangeTblEntry *addRangeTableEntryForFunction(ParseState *pstate,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tchar *funcname,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tNode *funcexpr,\n> ! 
\t\t\t\t\t\t\t\t\t\t\t\t\tAlias *alias,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tbool inFromCl);\n> extern RangeTblEntry *addRangeTableEntryForJoin(ParseState *pstate,\n> \t\t\t\t\t\t List *colnames,\n> --- 44,50 ----\n> extern RangeTblEntry *addRangeTableEntryForFunction(ParseState *pstate,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tchar *funcname,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tNode *funcexpr,\n> ! \t\t\t\t\t\t\t\t\t\t\t\t\tRangeFunction *rangefunc,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tbool inFromCl);\n> extern RangeTblEntry *addRangeTableEntryForJoin(ParseState *pstate,\n> \t\t\t\t\t\t List *colnames,\n> ***************\n> *** 61,65 ****\n> --- 61,66 ----\n> extern int\tattnameAttNum(Relation rd, char *a);\n> extern Name attnumAttName(Relation rd, int attid);\n> extern Oid\tattnumTypeId(Relation rd, int attid);\n> + extern char typeid_get_typtype(Oid typeid);\n> \n> #endif /* PARSE_RELATION_H */\n> Index: src/test/regress/expected/type_sanity.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/test/regress/expected/type_sanity.out,v\n> retrieving revision 1.9\n> diff -c -r1.9 type_sanity.out\n> *** src/test/regress/expected/type_sanity.out\t24 Jul 2002 19:11:14 -0000\t1.9\n> --- src/test/regress/expected/type_sanity.out\t29 Jul 2002 00:56:57 -0000\n> ***************\n> *** 16,22 ****\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> WHERE (p1.typlen <= 0 AND p1.typlen != -1) OR\n> ! (p1.typtype != 'b' AND p1.typtype != 'c') OR\n> NOT p1.typisdefined OR\n> (p1.typalign != 'c' AND p1.typalign != 's' AND\n> p1.typalign != 'i' AND p1.typalign != 'd') OR\n> --- 16,22 ----\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> WHERE (p1.typlen <= 0 AND p1.typlen != -1) OR\n> ! 
(p1.typtype != 'b' AND p1.typtype != 'c' AND p1.typtype != 'p') OR\n> NOT p1.typisdefined OR\n> (p1.typalign != 'c' AND p1.typalign != 's' AND\n> p1.typalign != 'i' AND p1.typalign != 'd') OR\n> ***************\n> *** 60,66 ****\n> -- NOTE: as of 7.3, this check finds SET, smgr, and unknown.\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> ! WHERE p1.typtype != 'c' AND p1.typname NOT LIKE '\\\\_%' AND NOT EXISTS\n> (SELECT 1 FROM pg_type as p2\n> WHERE p2.typname = ('_' || p1.typname)::name AND\n> p2.typelem = p1.oid);\n> --- 60,66 ----\n> -- NOTE: as of 7.3, this check finds SET, smgr, and unknown.\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> ! WHERE p1.typtype = 'b' AND p1.typname NOT LIKE '\\\\_%' AND NOT EXISTS\n> (SELECT 1 FROM pg_type as p2\n> WHERE p2.typname = ('_' || p1.typname)::name AND\n> p2.typelem = p1.oid);\n> Index: src/test/regress/sql/type_sanity.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/test/regress/sql/type_sanity.sql,v\n> retrieving revision 1.9\n> diff -c -r1.9 type_sanity.sql\n> *** src/test/regress/sql/type_sanity.sql\t24 Jul 2002 19:11:14 -0000\t1.9\n> --- src/test/regress/sql/type_sanity.sql\t29 Jul 2002 00:52:41 -0000\n> ***************\n> *** 19,25 ****\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> WHERE (p1.typlen <= 0 AND p1.typlen != -1) OR\n> ! (p1.typtype != 'b' AND p1.typtype != 'c') OR\n> NOT p1.typisdefined OR\n> (p1.typalign != 'c' AND p1.typalign != 's' AND\n> p1.typalign != 'i' AND p1.typalign != 'd') OR\n> --- 19,25 ----\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> WHERE (p1.typlen <= 0 AND p1.typlen != -1) OR\n> ! (p1.typtype != 'b' AND p1.typtype != 'c' AND p1.typtype != 'p') OR\n> NOT p1.typisdefined OR\n> (p1.typalign != 'c' AND p1.typalign != 's' AND\n> p1.typalign != 'i' AND p1.typalign != 'd') OR\n> ***************\n> *** 55,61 ****\n> \n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> ! 
WHERE p1.typtype != 'c' AND p1.typname NOT LIKE '\\\\_%' AND NOT EXISTS\n> (SELECT 1 FROM pg_type as p2\n> WHERE p2.typname = ('_' || p1.typname)::name AND\n> p2.typelem = p1.oid);\n> --- 55,61 ----\n> \n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> ! WHERE p1.typtype = 'b' AND p1.typname NOT LIKE '\\\\_%' AND NOT EXISTS\n> (SELECT 1 FROM pg_type as p2\n> WHERE p2.typname = ('_' || p1.typname)::name AND\n> p2.typelem = p1.oid);\n\n> Index: doc/src/sgml/ref/select.sgml\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/doc/src/sgml/ref/select.sgml,v\n> retrieving revision 1.54\n> diff -c -r1.54 select.sgml\n> *** doc/src/sgml/ref/select.sgml\t23 Apr 2002 02:07:16 -0000\t1.54\n> --- doc/src/sgml/ref/select.sgml\t29 Jul 2002 04:16:51 -0000\n> ***************\n> *** 40,45 ****\n> --- 40,51 ----\n> ( <replaceable class=\"PARAMETER\">select</replaceable> )\n> [ AS ] <replaceable class=\"PARAMETER\">alias</replaceable> [ ( <replaceable class=\"PARAMETER\">column_alias_list</replaceable> ) ]\n> |\n> + <replaceable class=\"PARAMETER\">table_function_name</replaceable> ( [ <replaceable class=\"parameter\">argtype</replaceable> [, ...] ] )\n> + [ AS ] <replaceable class=\"PARAMETER\">alias</replaceable> [ ( <replaceable class=\"PARAMETER\">column_alias_list</replaceable> | <replaceable class=\"PARAMETER\">column_definition_list</replaceable> ) ]\n> + |\n> + <replaceable class=\"PARAMETER\">table_function_name</replaceable> ( [ <replaceable class=\"parameter\">argtype</replaceable> [, ...] 
] )\n> + AS ( <replaceable class=\"PARAMETER\">column_definition_list</replaceable> )\n> + |\n> <replaceable class=\"PARAMETER\">from_item</replaceable> [ NATURAL ] <replaceable class=\"PARAMETER\">join_type</replaceable> <replaceable class=\"PARAMETER\">from_item</replaceable>\n> [ ON <replaceable class=\"PARAMETER\">join_condition</replaceable> | USING ( <replaceable class=\"PARAMETER\">join_column_list</replaceable> ) ]\n> </synopsis>\n> ***************\n> *** 82,88 ****\n> <term><replaceable class=\"PARAMETER\">from_item</replaceable></term>\n> <listitem>\n> <para>\n> ! A table reference, sub-SELECT, or JOIN clause. See below for details.\n> </para>\n> </listitem>\n> </varlistentry>\n> --- 88,94 ----\n> <term><replaceable class=\"PARAMETER\">from_item</replaceable></term>\n> <listitem>\n> <para>\n> ! A table reference, sub-SELECT, table function, or JOIN clause. See below for details.\n> </para>\n> </listitem>\n> </varlistentry>\n> ***************\n> *** 156,161 ****\n> --- 162,184 ----\n> </para>\n> </listitem>\n> </varlistentry>\n> + \n> + <varlistentry>\n> + <term><replaceable class=\"PARAMETER\">table function</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tA table function can appear in the FROM clause. This acts as though\n> + \tits output were created as a temporary table for the duration of\n> + \tthis single SELECT command. An alias may also be used. If an alias is\n> + \twritten, a column alias list can also be written to provide\tsubstitute names\n> + \tfor one or more columns of the table function. If the table function has been\n> + \tdefined as returning the RECORD data type, an alias, or the keyword AS, must\n> + also be present, followed by a column definition list in the form\n> + \t( <replaceable class=\"PARAMETER\">column_name</replaceable> <replaceable class=\"PARAMETER\">data_type</replaceable> [, ... 
] ).\n> + \tThe column definition list must match the actual number and types returned by the function.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> \n> <varlistentry>\n> <term><replaceable class=\"PARAMETER\">join_type</replaceable></term>\n> ***************\n> *** 381,386 ****\n> --- 404,422 ----\n> </para>\n> \n> <para>\n> + A FROM item can be a table function (i.e. a function that returns\n> + multiple rows and columns). When a table function is created, it may\n> + \tbe defined to return a named scalar or composite data type (an existing\n> + \tscalar data type, or a table or view name), or it may be defined to return\n> + \ta RECORD data type. When a table function is defined to return RECORD, it\n> + \tmust be followed in the FROM clause by an alias, or the keyword AS alone,\n> + \tand then by a parenthesized list of column names and types. This provides\n> + \ta query-time composite type definition. The FROM clause composite type\n> + \tmust match the actual composite type returned from the function or an\n> + \tERROR will be generated.\n> + </para>\n> + \n> + <para>\n> Finally, a FROM item can be a JOIN clause, which combines two simpler\n> FROM items. 
(Use parentheses if necessary to determine the order\n> of nesting.)\n> ***************\n> *** 925,930 ****\n> --- 961,1003 ----\n> Warren Beatty\n> Westward\n> Woody Allen\n> + </programlisting>\n> + </para>\n> + \n> + <para>\n> + This example shows how to use a table function, both with and without\n> + a column definition list.\n> + \n> + <programlisting>\n> + distributors:\n> + did | name\n> + -----+--------------\n> + 108 | Westward\n> + 111 | Walt Disney\n> + 112 | Warner Bros.\n> + ...\n> + \n> + CREATE FUNCTION distributors(int)\n> + RETURNS SETOF distributors AS '\n> + SELECT * FROM distributors WHERE did = $1;\n> + ' LANGUAGE SQL;\n> + \n> + SELECT * FROM distributors(111);\n> + did | name\n> + -----+-------------\n> + 111 | Walt Disney\n> + (1 row)\n> + \n> + CREATE FUNCTION distributors_2(int)\n> + RETURNS SETOF RECORD AS '\n> + SELECT * FROM distributors WHERE did = $1;\n> + ' LANGUAGE SQL;\n> + \n> + SELECT * FROM distributors_2(111) AS (f1 int, f2 text);\n> + f1 | f2\n> + -----+-------------\n> + 111 | Walt Disney\n> + (1 row)\n> </programlisting>\n> </para>\n> </refsect1>\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Aug 2002 19:30:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "I am sorry but I am unable to apply this patch because of the DROP\nCOLUMN patch that was applied since you submitted this. \n\nIt had rejections in gram.y and parse_relation.c, but those were easy to\nfix. 
The big problem is pg_proc.c, where the code changes cannot be\nmerged.\n\nI am attaching the rejected part of the patch. If you can send me a\nfixed version of just that change, I can commit the rest.\n\nThanks.\n\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> Attached are two patches to implement and document anonymous composite \n> types for Table Functions, as previously proposed on HACKERS. Here is a \n> brief explanation:\n> \n> 1. Creates a new pg_type typtype: 'p' for pseudo type (currently either\n> 'b' for base or 'c' for catalog, i.e. a class).\n> \n> 2. Creates new builtin type of typtype='p' named RECORD. This is the\n> first of potentially several pseudo types.\n> \n> 3. Modify FROM clause grammar to accept:\n> SELECT * FROM my_func() AS m(colname1 type1, colname2 type2, ...)\n> where m is the table alias, colname1, etc are the column names, and\n> type1, etc are the column types.\n> \n> 4. When typtype == 'p' and the function return type is RECORD, a list\n> of column defs is required, and when typtype != 'p', it is disallowed.\n> \n> 5. A check was added to ensure that the tupdesc provided via the parser\n> and the actual return tupdesc match in number and type of attributes.\n> \n> When creating a function you can do:\n> CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n> \n> When using it you can do:\n> SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n> or\n> SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)\n> or\n> SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)\n> \n> Included in the patches are adjustments to the regression test sql and \n> expected files, and documentation.\n> \n> If there are no objections, please apply.\n> \n> Thanks,\n> \n> Joe\n> \n> p.s.\n> This potentially solves (or at least improves) the issue of builtin\n> Table Functions. 
They can be bootstrapped as returning RECORD, and\n> we can wrap system views around them with properly specified column\n> defs. For example:\n> \n> CREATE VIEW pg_settings AS\n> SELECT s.name, s.setting\n> FROM show_all_settings()AS s(name text, setting text);\n> \n> Then we can also add the UPDATE RULE that I previously posted to\n> pg_settings, and have pg_settings act like a virtual table, allowing\n> settings to be queried and set.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n***************\n*** 367,447 ****\n \t */\n \ttlistlen = ExecCleanTargetListLength(tlist);\n \n- \t/*\n- \t * For base-type returns, the target list should have exactly one\n- \t * entry, and its type should agree with what the user declared. (As\n- \t * of Postgres 7.2, we accept binary-compatible types too.)\n- \t */\n \ttyperelid = typeidTypeRelid(rettype);\n- \tif (typerelid == InvalidOid)\n- \t{\n- \t\tif (tlistlen != 1)\n- \t\t\telog(ERROR, \"function declared to return %s returns multiple columns in final SELECT\",\n- \t\t\t\t format_type_be(rettype));\n \n! \t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n! \t\tif (!IsBinaryCompatible(restype, rettype))\n! \t\t\telog(ERROR, \"return type mismatch in function: declared to return %s, returns %s\",\n! \t\t\t\t format_type_be(rettype), format_type_be(restype));\n \n! \t\treturn;\n! 
\t}\n \n- \t/*\n- \t * If the target list is of length 1, and the type of the varnode in\n- \t * the target list matches the declared return type, this is okay.\n- \t * This can happen, for example, where the body of the function is\n- \t * 'SELECT func2()', where func2 has the same return type as the\n- \t * function that's calling it.\n- \t */\n- \tif (tlistlen == 1)\n- \t{\n- \t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n- \t\tif (IsBinaryCompatible(restype, rettype))\n \t\t\treturn;\n \t}\n \n! \t/*\n! \t * By here, the procedure returns a tuple or set of tuples. This part\n! \t * of the typechecking is a hack. We look up the relation that is the\n! \t * declared return type, and be sure that attributes 1 .. n in the\n! \t * target list match the declared types.\n! \t */\n! \treln = heap_open(typerelid, AccessShareLock);\n! \trelid = reln->rd_id;\n! \trelnatts = reln->rd_rel->relnatts;\n! \n! \tif (tlistlen != relnatts)\n! \t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n! \t\t\t format_type_be(rettype), relnatts);\n \n! \t/* expect attributes 1 .. n in order */\n! \ti = 0;\n! \tforeach(tlistitem, tlist)\n! \t{\n! \t\tTargetEntry *tle = (TargetEntry *) lfirst(tlistitem);\n! \t\tOid\t\t\ttletype;\n! \t\tOid\t\t\tatttype;\n! \n! \t\tif (tle->resdom->resjunk)\n! \t\t\tcontinue;\n! \t\ttletype = exprType(tle->expr);\n! \t\tatttype = reln->rd_att->attrs[i]->atttypid;\n! \t\tif (!IsBinaryCompatible(tletype, atttype))\n! \t\t\telog(ERROR, \"function declared to return %s returns %s instead of %s at column %d\",\n! \t\t\t\t format_type_be(rettype),\n! \t\t\t\t format_type_be(tletype),\n! \t\t\t\t format_type_be(atttype),\n! \t\t\t\t i + 1);\n! \t\ti++;\n! \t}\n! \n! \t/* this shouldn't happen, but let's just check... */\n! \tif (i != relnatts)\n! \t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n! \t\t\t format_type_be(rettype), relnatts);\n \n! 
\theap_close(reln, AccessShareLock);\n }\n \n \n--- 368,467 ----\n \t */\n \ttlistlen = ExecCleanTargetListLength(tlist);\n \n \ttyperelid = typeidTypeRelid(rettype);\n \n! \tif (fn_typtype == 'b')\n! \t{\n! \t\t/*\n! \t\t * For base-type returns, the target list should have exactly one\n! \t\t * entry, and its type should agree with what the user declared. (As\n! \t\t * of Postgres 7.2, we accept binary-compatible types too.)\n! \t\t */\n \n! \t\tif (typerelid == InvalidOid)\n! \t\t{\n! \t\t\tif (tlistlen != 1)\n! \t\t\t\telog(ERROR, \"function declared to return %s returns multiple columns in final SELECT\",\n! \t\t\t\t\t format_type_be(rettype));\n! \n! \t\t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n! \t\t\tif (!IsBinaryCompatible(restype, rettype))\n! \t\t\t\telog(ERROR, \"return type mismatch in function: declared to return %s, returns %s\",\n! \t\t\t\t\t format_type_be(rettype), format_type_be(restype));\n \n \t\t\treturn;\n+ \t\t}\n+ \n+ \t\t/*\n+ \t\t * If the target list is of length 1, and the type of the varnode in\n+ \t\t * the target list matches the declared return type, this is okay.\n+ \t\t * This can happen, for example, where the body of the function is\n+ \t\t * 'SELECT func2()', where func2 has the same return type as the\n+ \t\t * function that's calling it.\n+ \t\t */\n+ \t\tif (tlistlen == 1)\n+ \t\t{\n+ \t\t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n+ \t\t\tif (IsBinaryCompatible(restype, rettype))\n+ \t\t\t\treturn;\n+ \t\t}\n \t}\n+ \telse if (fn_typtype == 'c')\n+ \t{\n+ \t\t/*\n+ \t\t * By here, the procedure returns a tuple or set of tuples. This part\n+ \t\t * of the typechecking is a hack. We look up the relation that is the\n+ \t\t * declared return type, and be sure that attributes 1 .. 
n in the\n+ \t\t * target list match the declared types.\n+ \t\t */\n+ \t\treln = heap_open(typerelid, AccessShareLock);\n+ \t\trelid = reln->rd_id;\n+ \t\trelnatts = reln->rd_rel->relnatts;\n+ \n+ \t\tif (tlistlen != relnatts)\n+ \t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n+ \t\t\t\t format_type_be(rettype), relnatts);\n+ \n+ \t\t/* expect attributes 1 .. n in order */\n+ \t\ti = 0;\n+ \t\tforeach(tlistitem, tlist)\n+ \t\t{\n+ \t\t\tTargetEntry *tle = (TargetEntry *) lfirst(tlistitem);\n+ \t\t\tOid\t\t\ttletype;\n+ \t\t\tOid\t\t\tatttype;\n+ \n+ \t\t\tif (tle->resdom->resjunk)\n+ \t\t\t\tcontinue;\n+ \t\t\ttletype = exprType(tle->expr);\n+ \t\t\tatttype = reln->rd_att->attrs[i]->atttypid;\n+ \t\t\tif (!IsBinaryCompatible(tletype, atttype))\n+ \t\t\t\telog(ERROR, \"function declared to return %s returns %s instead of %s at column %d\",\n+ \t\t\t\t\t format_type_be(rettype),\n+ \t\t\t\t\t format_type_be(tletype),\n+ \t\t\t\t\t format_type_be(atttype),\n+ \t\t\t\t\t i + 1);\n+ \t\t\ti++;\n+ \t\t}\n \n! \t\t/* this shouldn't happen, but let's just check... */\n! \t\tif (i != relnatts)\n! \t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n! \t\t\t\t format_type_be(rettype), relnatts);\n \n! \t\theap_close(reln, AccessShareLock);\n \n! \t\treturn;\n! \t}\n! \telse if (fn_typtype == 'p' && rettype == RECORDOID)\n! \t{\n! \t\t/*\n! \t\t * For RECORD return type, defer this check until we get the\n! \t\t * first tuple.\n! \t\t */\n! \t\treturn;\n! \t}\n! \telse\n! 
\t\telog(ERROR, \"Unknown kind of return type specified for function\");\n }", "msg_date": "Sun, 4 Aug 2002 00:58:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "Bruce Momjian wrote:\n> I am sorry but I am unable to apply this patch because of the DROP\n> COLUMN patch that was applied since you submitted this. \n> \n> It had rejections in gram.y and parse_relation.c, but those were easy to\n> fix. The big problem is pg_proc.c, where the code changes can not be\n> merged.\n> \n> I am attaching the rejected part of the patch. If you can send me a\n> fixed version of just that change, I can commit the rest.\n> \n\nOK. Here is a patch against current cvs for just pg_proc.c. This \nincludes all the changes for that file (i.e. not just the one rejected \nhunk).\n\nThanks,\n\nJoe", "msg_date": "Sat, 03 Aug 2002 23:53:46 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "\n[ New version of pg_proc.c used for application.]\n\nPatch applied. Thanks. initdb forced.\n\n---------------------------------------------------------------------------\n\n\n\nJoe Conway wrote:\n> Attached are two patches to implement and document anonymous composite \n> types for Table Functions, as previously proposed on HACKERS. Here is a \n> brief explanation:\n> \n> 1. Creates a new pg_type typtype: 'p' for pseudo type (currently either\n> 'b' for base or 'c' for catalog, i.e. a class).\n> \n> 2. Creates new builtin type of typtype='p' named RECORD. This is the\n> first of potentially several pseudo types.\n> \n> 3. Modify FROM clause grammer to accept:\n> SELECT * FROM my_func() AS m(colname1 type1, colname2 type1, ...)\n> where m is the table alias, colname1, etc are the column names, and\n> type1, etc are the column types.\n> \n> 4. 
When typtype == 'p' and the function return type is RECORD, a list\n> of column defs is required, and when typtype != 'p', it is disallowed.\n> \n> 5. A check was added to ensure that the tupdesc provide via the parser\n> and the actual return tupdesc match in number and type of attributes.\n> \n> When creating a function you can do:\n> CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n> \n> When using it you can do:\n> SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n> or\n> SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)\n> or\n> SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)\n> \n> Included in the patches are adjustments to the regression test sql and \n> expected files, and documentation.\n> \n> If there are no objections, please apply.\n> \n> Thanks,\n> \n> Joe\n> \n> p.s.\n> This potentially solves (or at least improves) the issue of builtin\n> Table Functions. They can be bootstrapped as returning RECORD, and\n> we can wrap system views around them with properly specified column\n> defs. 
For example:\n> \n> CREATE VIEW pg_settings AS\n> SELECT s.name, s.setting\n> FROM show_all_settings()AS s(name text, setting text);\n> \n> Then we can also add the UPDATE RULE that I previously posted to\n> pg_settings, and have pg_settings act like a virtual table, allowing\n> settings to be queried and set.\n> \n\n> Index: src/backend/access/common/tupdesc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/access/common/tupdesc.c,v\n> retrieving revision 1.81\n> diff -c -r1.81 tupdesc.c\n> *** src/backend/access/common/tupdesc.c\t20 Jul 2002 05:16:56 -0000\t1.81\n> --- src/backend/access/common/tupdesc.c\t28 Jul 2002 01:33:30 -0000\n> ***************\n> *** 24,29 ****\n> --- 24,30 ----\n> #include \"catalog/namespace.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"nodes/parsenodes.h\"\n> + #include \"parser/parse_relation.h\"\n> #include \"parser/parse_type.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/syscache.h\"\n> ***************\n> *** 597,642 ****\n> TupleDesc\n> TypeGetTupleDesc(Oid typeoid, List *colaliases)\n> {\n> ! \tOid\t\t\trelid = typeidTypeRelid(typeoid);\n> ! \tTupleDesc\ttupdesc;\n> \n> \t/*\n> \t * Build a suitable tupledesc representing the output rows\n> \t */\n> ! \tif (OidIsValid(relid))\n> \t{\n> \t\t/* Composite data type, i.e. a table's row type */\n> ! \t\tRelation\trel;\n> ! \t\tint\t\t\tnatts;\n> ! \n> ! \t\trel = relation_open(relid, AccessShareLock);\n> ! \t\ttupdesc = CreateTupleDescCopy(RelationGetDescr(rel));\n> ! \t\tnatts = tupdesc->natts;\n> ! \t\trelation_close(rel, AccessShareLock);\n> \n> ! \t\t/* check to see if we've given column aliases */\n> ! \t\tif(colaliases != NIL)\n> \t\t{\n> ! \t\t\tchar\t *label;\n> ! \t\t\tint\t\t\tvarattno;\n> \n> ! \t\t\t/* does the List length match the number of attributes */\n> ! \t\t\tif (length(colaliases) != natts)\n> ! 
\t\t\t\telog(ERROR, \"TypeGetTupleDesc: number of aliases does not match number of attributes\");\n> \n> ! \t\t\t/* OK, use the aliases instead */\n> ! \t\t\tfor (varattno = 0; varattno < natts; varattno++)\n> \t\t\t{\n> ! \t\t\t\tlabel = strVal(nth(varattno, colaliases));\n> \n> ! \t\t\t\tif (label != NULL)\n> ! \t\t\t\t\tnamestrcpy(&(tupdesc->attrs[varattno]->attname), label);\n> ! \t\t\t\telse\n> ! \t\t\t\t\tMemSet(NameStr(tupdesc->attrs[varattno]->attname), 0, NAMEDATALEN);\n> \t\t\t}\n> \t\t}\n> \t}\n> ! \telse\n> \t{\n> \t\t/* Must be a base data type, i.e. scalar */\n> \t\tchar\t *attname;\n> --- 598,650 ----\n> TupleDesc\n> TypeGetTupleDesc(Oid typeoid, List *colaliases)\n> {\n> ! \tchar\t\tfunctyptype = typeid_get_typtype(typeoid);\n> ! \tTupleDesc\ttupdesc = NULL;\n> \n> \t/*\n> \t * Build a suitable tupledesc representing the output rows\n> \t */\n> ! \tif (functyptype == 'c')\n> \t{\n> \t\t/* Composite data type, i.e. a table's row type */\n> ! \t\tOid\t\t\trelid = typeidTypeRelid(typeoid);\n> \n> ! \t\tif (OidIsValid(relid))\n> \t\t{\n> ! \t\t\tRelation\trel;\n> ! \t\t\tint\t\t\tnatts;\n> \n> ! \t\t\trel = relation_open(relid, AccessShareLock);\n> ! \t\t\ttupdesc = CreateTupleDescCopy(RelationGetDescr(rel));\n> ! \t\t\tnatts = tupdesc->natts;\n> ! \t\t\trelation_close(rel, AccessShareLock);\n> \n> ! \t\t\t/* check to see if we've given column aliases */\n> ! \t\t\tif(colaliases != NIL)\n> \t\t\t{\n> ! \t\t\t\tchar\t *label;\n> ! \t\t\t\tint\t\t\tvarattno;\n> \n> ! \t\t\t\t/* does the List length match the number of attributes */\n> ! \t\t\t\tif (length(colaliases) != natts)\n> ! \t\t\t\t\telog(ERROR, \"TypeGetTupleDesc: number of aliases does not match number of attributes\");\n> ! \n> ! \t\t\t\t/* OK, use the aliases instead */\n> ! \t\t\t\tfor (varattno = 0; varattno < natts; varattno++)\n> ! \t\t\t\t{\n> ! \t\t\t\t\tlabel = strVal(nth(varattno, colaliases));\n> ! \n> ! \t\t\t\t\tif (label != NULL)\n> ! 
\t\t\t\t\t\tnamestrcpy(&(tupdesc->attrs[varattno]->attname), label);\n> ! \t\t\t\t\telse\n> ! \t\t\t\t\t\tMemSet(NameStr(tupdesc->attrs[varattno]->attname), 0, NAMEDATALEN);\n> ! \t\t\t\t}\n> \t\t\t}\n> \t\t}\n> + \t\telse\n> + \t\t\telog(ERROR, \"Invalid return relation specified for function\");\n> \t}\n> ! \telse if (functyptype == 'b')\n> \t{\n> \t\t/* Must be a base data type, i.e. scalar */\n> \t\tchar\t *attname;\n> ***************\n> *** 661,666 ****\n> --- 669,679 ----\n> \t\t\t\t\t\t 0,\n> \t\t\t\t\t\t false);\n> \t}\n> + \telse if (functyptype == 'p' && typeoid == RECORDOID)\n> + \t\telog(ERROR, \"Unable to determine tuple description for function\"\n> + \t\t\t\t\t\t\" returning \\\"record\\\"\");\n> + \telse\n> + \t\telog(ERROR, \"Unknown kind of return type specified for function\");\n> \n> \treturn tupdesc;\n> }\n> Index: src/backend/catalog/pg_proc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/catalog/pg_proc.c,v\n> retrieving revision 1.81\n> diff -c -r1.81 pg_proc.c\n> *** src/backend/catalog/pg_proc.c\t24 Jul 2002 19:11:09 -0000\t1.81\n> --- src/backend/catalog/pg_proc.c\t29 Jul 2002 02:02:31 -0000\n> ***************\n> *** 25,30 ****\n> --- 25,31 ----\n> #include \"miscadmin.h\"\n> #include \"parser/parse_coerce.h\"\n> #include \"parser/parse_expr.h\"\n> + #include \"parser/parse_relation.h\"\n> #include \"parser/parse_type.h\"\n> #include \"tcop/tcopprot.h\"\n> #include \"utils/builtins.h\"\n> ***************\n> *** 33,39 ****\n> #include \"utils/syscache.h\"\n> \n> \n> ! static void checkretval(Oid rettype, List *queryTreeList);\n> Datum fmgr_internal_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_c_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_sql_validator(PG_FUNCTION_ARGS);\n> --- 34,40 ----\n> #include \"utils/syscache.h\"\n> \n> \n> ! 
static void checkretval(Oid rettype, char fn_typtype, List *queryTreeList);\n> Datum fmgr_internal_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_c_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_sql_validator(PG_FUNCTION_ARGS);\n> ***************\n> *** 317,323 ****\n> * type he claims.\n> */\n> static void\n> ! checkretval(Oid rettype, List *queryTreeList)\n> {\n> \tQuery\t *parse;\n> \tint\t\t\tcmd;\n> --- 318,324 ----\n> * type he claims.\n> */\n> static void\n> ! checkretval(Oid rettype, char fn_typtype, List *queryTreeList)\n> {\n> \tQuery\t *parse;\n> \tint\t\t\tcmd;\n> ***************\n> *** 367,447 ****\n> \t */\n> \ttlistlen = ExecCleanTargetListLength(tlist);\n> \n> - \t/*\n> - \t * For base-type returns, the target list should have exactly one\n> - \t * entry, and its type should agree with what the user declared. (As\n> - \t * of Postgres 7.2, we accept binary-compatible types too.)\n> - \t */\n> \ttyperelid = typeidTypeRelid(rettype);\n> - \tif (typerelid == InvalidOid)\n> - \t{\n> - \t\tif (tlistlen != 1)\n> - \t\t\telog(ERROR, \"function declared to return %s returns multiple columns in final SELECT\",\n> - \t\t\t\t format_type_be(rettype));\n> \n> ! \t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> ! \t\tif (!IsBinaryCompatible(restype, rettype))\n> ! \t\t\telog(ERROR, \"return type mismatch in function: declared to return %s, returns %s\",\n> ! \t\t\t\t format_type_be(rettype), format_type_be(restype));\n> \n> ! \t\treturn;\n> ! 
\t}\n> \n> - \t/*\n> - \t * If the target list is of length 1, and the type of the varnode in\n> - \t * the target list matches the declared return type, this is okay.\n> - \t * This can happen, for example, where the body of the function is\n> - \t * 'SELECT func2()', where func2 has the same return type as the\n> - \t * function that's calling it.\n> - \t */\n> - \tif (tlistlen == 1)\n> - \t{\n> - \t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> - \t\tif (IsBinaryCompatible(restype, rettype))\n> \t\t\treturn;\n> \t}\n> \n> ! \t/*\n> ! \t * By here, the procedure returns a tuple or set of tuples. This part\n> ! \t * of the typechecking is a hack. We look up the relation that is the\n> ! \t * declared return type, and be sure that attributes 1 .. n in the\n> ! \t * target list match the declared types.\n> ! \t */\n> ! \treln = heap_open(typerelid, AccessShareLock);\n> ! \trelid = reln->rd_id;\n> ! \trelnatts = reln->rd_rel->relnatts;\n> ! \n> ! \tif (tlistlen != relnatts)\n> ! \t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t format_type_be(rettype), relnatts);\n> \n> ! \t/* expect attributes 1 .. n in order */\n> ! \ti = 0;\n> ! \tforeach(tlistitem, tlist)\n> ! \t{\n> ! \t\tTargetEntry *tle = (TargetEntry *) lfirst(tlistitem);\n> ! \t\tOid\t\t\ttletype;\n> ! \t\tOid\t\t\tatttype;\n> ! \n> ! \t\tif (tle->resdom->resjunk)\n> ! \t\t\tcontinue;\n> ! \t\ttletype = exprType(tle->expr);\n> ! \t\tatttype = reln->rd_att->attrs[i]->atttypid;\n> ! \t\tif (!IsBinaryCompatible(tletype, atttype))\n> ! \t\t\telog(ERROR, \"function declared to return %s returns %s instead of %s at column %d\",\n> ! \t\t\t\t format_type_be(rettype),\n> ! \t\t\t\t format_type_be(tletype),\n> ! \t\t\t\t format_type_be(atttype),\n> ! \t\t\t\t i + 1);\n> ! \t\ti++;\n> ! \t}\n> ! \n> ! \t/* this shouldn't happen, but let's just check... */\n> ! \tif (i != relnatts)\n> ! 
\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t format_type_be(rettype), relnatts);\n> \n> ! \theap_close(reln, AccessShareLock);\n> }\n> \n> \n> --- 368,467 ----\n> \t */\n> \ttlistlen = ExecCleanTargetListLength(tlist);\n> \n> \ttyperelid = typeidTypeRelid(rettype);\n> \n> ! \tif (fn_typtype == 'b')\n> ! \t{\n> ! \t\t/*\n> ! \t\t * For base-type returns, the target list should have exactly one\n> ! \t\t * entry, and its type should agree with what the user declared. (As\n> ! \t\t * of Postgres 7.2, we accept binary-compatible types too.)\n> ! \t\t */\n> \n> ! \t\tif (typerelid == InvalidOid)\n> ! \t\t{\n> ! \t\t\tif (tlistlen != 1)\n> ! \t\t\t\telog(ERROR, \"function declared to return %s returns multiple columns in final SELECT\",\n> ! \t\t\t\t\t format_type_be(rettype));\n> ! \n> ! \t\t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> ! \t\t\tif (!IsBinaryCompatible(restype, rettype))\n> ! \t\t\t\telog(ERROR, \"return type mismatch in function: declared to return %s, returns %s\",\n> ! \t\t\t\t\t format_type_be(rettype), format_type_be(restype));\n> \n> \t\t\treturn;\n> + \t\t}\n> + \n> + \t\t/*\n> + \t\t * If the target list is of length 1, and the type of the varnode in\n> + \t\t * the target list matches the declared return type, this is okay.\n> + \t\t * This can happen, for example, where the body of the function is\n> + \t\t * 'SELECT func2()', where func2 has the same return type as the\n> + \t\t * function that's calling it.\n> + \t\t */\n> + \t\tif (tlistlen == 1)\n> + \t\t{\n> + \t\t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> + \t\t\tif (IsBinaryCompatible(restype, rettype))\n> + \t\t\t\treturn;\n> + \t\t}\n> \t}\n> + \telse if (fn_typtype == 'c')\n> + \t{\n> + \t\t/*\n> + \t\t * By here, the procedure returns a tuple or set of tuples. This part\n> + \t\t * of the typechecking is a hack. 
We look up the relation that is the\n> + \t\t * declared return type, and be sure that attributes 1 .. n in the\n> + \t\t * target list match the declared types.\n> + \t\t */\n> + \t\treln = heap_open(typerelid, AccessShareLock);\n> + \t\trelid = reln->rd_id;\n> + \t\trelnatts = reln->rd_rel->relnatts;\n> + \n> + \t\tif (tlistlen != relnatts)\n> + \t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> + \t\t\t\t format_type_be(rettype), relnatts);\n> + \n> + \t\t/* expect attributes 1 .. n in order */\n> + \t\ti = 0;\n> + \t\tforeach(tlistitem, tlist)\n> + \t\t{\n> + \t\t\tTargetEntry *tle = (TargetEntry *) lfirst(tlistitem);\n> + \t\t\tOid\t\t\ttletype;\n> + \t\t\tOid\t\t\tatttype;\n> + \n> + \t\t\tif (tle->resdom->resjunk)\n> + \t\t\t\tcontinue;\n> + \t\t\ttletype = exprType(tle->expr);\n> + \t\t\tatttype = reln->rd_att->attrs[i]->atttypid;\n> + \t\t\tif (!IsBinaryCompatible(tletype, atttype))\n> + \t\t\t\telog(ERROR, \"function declared to return %s returns %s instead of %s at column %d\",\n> + \t\t\t\t\t format_type_be(rettype),\n> + \t\t\t\t\t format_type_be(tletype),\n> + \t\t\t\t\t format_type_be(atttype),\n> + \t\t\t\t\t i + 1);\n> + \t\t\ti++;\n> + \t\t}\n> \n> ! \t\t/* this shouldn't happen, but let's just check... */\n> ! \t\tif (i != relnatts)\n> ! \t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t\t format_type_be(rettype), relnatts);\n> \n> ! \t\theap_close(reln, AccessShareLock);\n> \n> ! \t\treturn;\n> ! \t}\n> ! \telse if (fn_typtype == 'p' && rettype == RECORDOID)\n> ! \t{\n> ! \t\t/*\n> ! \t\t * For RECORD return type, defer this check until we get the\n> ! \t\t * first tuple.\n> ! \t\t */\n> ! \t\treturn;\n> ! \t}\n> ! \telse\n> ! 
\t\telog(ERROR, \"Unknown kind of return type specified for function\");\n> }\n> \n> \n> ***************\n> *** 540,545 ****\n> --- 560,566 ----\n> \tbool\t\tisnull;\n> \tDatum\t\ttmp;\n> \tchar\t *prosrc;\n> + \tchar\t\tfunctyptype;\n> \n> \ttuple = SearchSysCache(PROCOID, funcoid, 0, 0, 0);\n> \tif (!HeapTupleIsValid(tuple))\n> ***************\n> *** 556,563 ****\n> \n> \tprosrc = DatumGetCString(DirectFunctionCall1(textout, tmp));\n> \n> \tquerytree_list = pg_parse_and_rewrite(prosrc, proc->proargtypes, proc->pronargs);\n> ! \tcheckretval(proc->prorettype, querytree_list);\n> \n> \tReleaseSysCache(tuple);\n> \tPG_RETURN_BOOL(true);\n> --- 577,587 ----\n> \n> \tprosrc = DatumGetCString(DirectFunctionCall1(textout, tmp));\n> \n> + \t/* check typtype to see if we have a predetermined return type */\n> + \tfunctyptype = typeid_get_typtype(proc->prorettype);\n> + \n> \tquerytree_list = pg_parse_and_rewrite(prosrc, proc->proargtypes, proc->pronargs);\n> ! \tcheckretval(proc->prorettype, functyptype, querytree_list);\n> \n> \tReleaseSysCache(tuple);\n> \tPG_RETURN_BOOL(true);\n> Index: src/backend/executor/functions.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/executor/functions.c,v\n> retrieving revision 1.52\n> diff -c -r1.52 functions.c\n> *** src/backend/executor/functions.c\t20 Jun 2002 20:29:28 -0000\t1.52\n> --- src/backend/executor/functions.c\t27 Jul 2002 23:44:38 -0000\n> ***************\n> *** 194,200 ****\n> \t * get the type length and by-value flag from the type tuple\n> \t */\n> \tfcache->typlen = typeStruct->typlen;\n> ! \tif (typeStruct->typrelid == InvalidOid)\n> \t{\n> \t\t/* The return type is not a relation, so just use byval */\n> \t\tfcache->typbyval = typeStruct->typbyval;\n> --- 194,201 ----\n> \t * get the type length and by-value flag from the type tuple\n> \t */\n> \tfcache->typlen = typeStruct->typlen;\n> ! \n> ! 
	if (typeStruct->typtype == 'b')
> 	{
> 		/* The return type is not a relation, so just use byval */
> 		fcache->typbyval = typeStruct->typbyval;
> Index: src/backend/executor/nodeFunctionscan.c
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/backend/executor/nodeFunctionscan.c,v
> retrieving revision 1.3
> diff -c -r1.3 nodeFunctionscan.c
> *** src/backend/executor/nodeFunctionscan.c	20 Jul 2002 05:16:58 -0000	1.3
> --- src/backend/executor/nodeFunctionscan.c	29 Jul 2002 02:05:14 -0000
> ***************
> *** 31,36 ****
> --- 31,37 ----
> #include "executor/nodeFunctionscan.h"
> #include "parser/parsetree.h"
> #include "parser/parse_expr.h"
> + #include "parser/parse_relation.h"
> #include "parser/parse_type.h"
> #include "storage/lmgr.h"
> #include "tcop/pquery.h"
> ***************
> *** 39,52 ****
> #include "utils/tuplestore.h"
> 
> static TupleTableSlot *FunctionNext(FunctionScan *node);
> ! static TupleTableSlot *function_getonetuple(TupleTableSlot *slot,
> ! 											Node *expr,
> ! 											ExprContext *econtext,
> ! 											TupleDesc tupdesc,
> ! 											bool returnsTuple,
> 											bool *isNull,
> 											ExprDoneCond *isDone);
> static FunctionMode get_functionmode(Node *expr);
> 
> /* ----------------------------------------------------------------
> *						Scan Support
> --- 40,50 ----
> #include "utils/tuplestore.h"
> 
> static TupleTableSlot *FunctionNext(FunctionScan *node);
> ! static TupleTableSlot *function_getonetuple(FunctionScanState *scanstate,
> 											bool *isNull,
> 											ExprDoneCond *isDone);
> static FunctionMode get_functionmode(Node *expr);
> + static bool tupledesc_mismatch(TupleDesc tupdesc1, TupleDesc tupdesc2);
> 
> /* ----------------------------------------------------------------
> *						Scan Support
> ***************
> *** 62,70 ****
> FunctionNext(FunctionScan *node)
> {
> 	TupleTableSlot	 *slot;
> - 	Node			 *expr;
> - 	ExprContext		 *econtext;
> - 	TupleDesc			tupdesc;
> 	EState			 *estate;
> 	ScanDirection		direction;
> 	Tuplestorestate	 *tuplestorestate;
> --- 60,65 ----
> ***************
> *** 78,88 ****
> 	scanstate = (FunctionScanState *) node->scan.scanstate;
> 	estate = node->scan.plan.state;
> 	direction = estate->es_direction;
> - 	econtext = scanstate->csstate.cstate.cs_ExprContext;
> 
> 	tuplestorestate = scanstate->tuplestorestate;
> - 	tupdesc = scanstate->tupdesc;
> - 	expr = scanstate->funcexpr;
> 
> 	/*
> 	 * If first time through, read all tuples from function and pass them to
> --- 73,80 ----
> ***************
> *** 108,117 ****
> 
> 			isNull = false;
> 			isDone = ExprSingleResult;
> ! 			slot = function_getonetuple(scanstate->csstate.css_ScanTupleSlot,
> ! 										expr, econtext, tupdesc,
> ! 										scanstate->returnsTuple,
> ! 										&isNull, &isDone);
> 			if (TupIsNull(slot))
> 				break;
> 
> --- 100,106 ----
> 
> 			isNull = false;
> 			isDone = ExprSingleResult;
> ! 			slot = function_getonetuple(scanstate, &isNull, &isDone);
> 			if (TupIsNull(slot))
> 				break;
> 
> ***************
> *** 169,175 ****
> 	RangeTblEntry	 *rte;
> 	Oid					funcrettype;
> 	Oid					funcrelid;
> ! 	TupleDesc			tupdesc;
> 
> 	/*
> 	 * FunctionScan should not have any children.
> --- 158,165 ----
> 	RangeTblEntry	 *rte;
> 	Oid					funcrettype;
> 	Oid					funcrelid;
> ! 	char				functyptype;
> ! 	TupleDesc			tupdesc = NULL;
> 
> 	/*
> 	 * FunctionScan should not have any children.
> ***************
> *** 209,233 ****
> 	rte = rt_fetch(node->scan.scanrelid, estate->es_range_table);
> 	Assert(rte->rtekind == RTE_FUNCTION);
> 	funcrettype = exprType(rte->funcexpr);
> ! 	funcrelid = typeidTypeRelid(funcrettype);
> 
> 	/*
> 	 * Build a suitable tupledesc representing the output rows
> 	 */
> ! 	if (OidIsValid(funcrelid))
> 	{
> ! 		/*
> ! 		 * Composite data type, i.e. a table's row type
> ! 		 * Same as ordinary relation RTE
> ! 		 */
> ! 		Relation	rel;
> 
> ! 		rel = relation_open(funcrelid, AccessShareLock);
> ! 		tupdesc = CreateTupleDescCopy(RelationGetDescr(rel));
> ! 		relation_close(rel, AccessShareLock);
> ! 		scanstate->returnsTuple = true;
> 	}
> ! 	else
> 	{
> 		/*
> 		 * Must be a base data type, i.e. scalar
> --- 199,234 ----
> 	rte = rt_fetch(node->scan.scanrelid, estate->es_range_table);
> 	Assert(rte->rtekind == RTE_FUNCTION);
> 	funcrettype = exprType(rte->funcexpr);
> ! 
> ! 	/*
> ! 	 * Now determine if the function returns a simple or composite type,
> ! 	 * and check/add column aliases.
> ! 	 */
> ! 	functyptype = typeid_get_typtype(funcrettype);
> 
> 	/*
> 	 * Build a suitable tupledesc representing the output rows
> 	 */
> ! 	if (functyptype == 'c')
> 	{
> ! 		funcrelid = typeidTypeRelid(funcrettype);
> ! 		if (OidIsValid(funcrelid))
> ! 		{
> ! 			/*
> ! 			 * Composite data type, i.e. a table's row type
> ! 			 * Same as ordinary relation RTE
> ! 			 */
> ! 			Relation	rel;
> 
> ! 			rel = relation_open(funcrelid, AccessShareLock);
> ! 			tupdesc = CreateTupleDescCopy(RelationGetDescr(rel));
> ! 			relation_close(rel, AccessShareLock);
> ! 			scanstate->returnsTuple = true;
> ! 		}
> ! 		else
> ! 			elog(ERROR, "Invalid return relation specified for function");
> 	}
> ! 	else if (functyptype == 'b')
> 	{
> 		/*
> 		 * Must be a base data type, i.e. scalar
> ***************
> *** 244,249 ****
> --- 245,265 ----
> 						 false);
> 		scanstate->returnsTuple = false;
> 	}
> + 	else if (functyptype == 'p' && funcrettype == RECORDOID)
> + 	{
> + 		/*
> + 		 * Must be a pseudo type, i.e. record
> + 		 */
> + 		List *coldeflist = rte->coldeflist;
> + 
> + 		tupdesc = BuildDescForRelation(coldeflist);
> + 		scanstate->returnsTuple = true;
> + 	}
> + 	else
> + 		elog(ERROR, "Unknown kind of return type specified for function");
> + 
> + 	scanstate->fn_typeid = funcrettype;
> + 	scanstate->fn_typtype = functyptype;
> 	scanstate->tupdesc = tupdesc;
> 	ExecSetSlotDescriptor(scanstate->csstate.css_ScanTupleSlot,
> 						 tupdesc, false);
> ***************
> *** 404,420 ****
> * Run the underlying function to get the next tuple
> */
> static TupleTableSlot *
> ! function_getonetuple(TupleTableSlot *slot,
> ! 					 Node *expr,
> ! 					 ExprContext *econtext,
> ! 					 TupleDesc tupdesc,
> ! 					 bool returnsTuple,
> 					 bool *isNull,
> 					 ExprDoneCond *isDone)
> {
> ! 	HeapTuple			tuple;
> ! 	Datum				retDatum;
> ! 	char				nullflag;
> 
> 	/*
> 	 * get the next Datum from the function
> --- 420,439 ----
> * Run the underlying function to get the next tuple
> */
> static TupleTableSlot *
> ! function_getonetuple(FunctionScanState *scanstate,
> 					 bool *isNull,
> 					 ExprDoneCond *isDone)
> {
> ! 	HeapTuple		tuple;
> ! 	Datum			retDatum;
> ! 	char			nullflag;
> ! 	TupleDesc		tupdesc = scanstate->tupdesc;
> ! 	bool			returnsTuple = scanstate->returnsTuple;
> ! 	Node		 *expr = scanstate->funcexpr;
> ! 	Oid				fn_typeid = scanstate->fn_typeid;
> ! 	char			fn_typtype = scanstate->fn_typtype;
> ! 	ExprContext	 *econtext = scanstate->csstate.cstate.cs_ExprContext;
> ! 	TupleTableSlot *slot = scanstate->csstate.css_ScanTupleSlot;
> 
> 	/*
> 	 * get the next Datum from the function
> ***************
> *** 435,440 ****
> --- 454,469 ----
> 			 * function returns pointer to tts??
> 			 */
> 			slot = (TupleTableSlot *) retDatum;
> + 
> + 			/*
> + 			 * if function return type was RECORD, we need to check to be
> + 			 * sure the structure from the query matches the actual return
> + 			 * structure
> + 			 */
> + 			if (fn_typtype == 'p' && fn_typeid == RECORDOID)
> + 				if (tupledesc_mismatch(tupdesc, slot->ttc_tupleDescriptor))
> + 					elog(ERROR, "Query specified return tuple and actual"
> + 									" function return tuple do not match");
> 		}
> 		else
> 		{
> ***************
> *** 466,469 ****
> --- 495,521 ----
> 	 * for the moment, hardwire this
> 	 */
> 	return PM_REPEATEDCALL;
> + }
> + 
> + static bool
> + tupledesc_mismatch(TupleDesc tupdesc1, TupleDesc tupdesc2)
> + {
> + 	int			i;
> + 
> + 	if (tupdesc1->natts != tupdesc2->natts)
> + 		return true;
> + 
> + 	for (i = 0; i < tupdesc1->natts; i++)
> + 	{
> + 		Form_pg_attribute attr1 = tupdesc1->attrs[i];
> + 		Form_pg_attribute attr2 = tupdesc2->attrs[i];
> + 
> + 		/*
> + 		 * We really only care about number of attributes and data type
> + 		 */
> + 		if (attr1->atttypid != attr2->atttypid)
> + 			return true;
> + 	}
> + 
> + 	return false;
> }
> Index: src/backend/nodes/copyfuncs.c
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/backend/nodes/copyfuncs.c,v
> retrieving revision 1.197
> diff -c -r1.197 copyfuncs.c
> *** src/backend/nodes/copyfuncs.c	24 Jul 2002 19:11:10 -0000	1.197
> --- src/backend/nodes/copyfuncs.c	27 Jul 2002 19:21:36 -0000
> ***************
> *** 1482,1487 ****
> --- 1482,1488 ----
> 	newnode->relid = from->relid;
> 	Node_Copy(from, newnode, subquery);
> 	Node_Copy(from, newnode, funcexpr);
> + 	Node_Copy(from, newnode, coldeflist);
> 	newnode->jointype = from->jointype;
> 	Node_Copy(from, newnode, joinaliasvars);
> 	Node_Copy(from, newnode, alias);
> ***************
> *** 1707,1712 ****
> --- 1708,1714 ----
> 
> 	Node_Copy(from, newnode, funccallnode);
> 	Node_Copy(from, newnode, alias);
> + 	Node_Copy(from, newnode, coldeflist);
> 
> 	return newnode;
> }
> Index: src/backend/nodes/equalfuncs.c
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/backend/nodes/equalfuncs.c,v
> retrieving revision 1.144
> diff -c -r1.144 equalfuncs.c
> *** src/backend/nodes/equalfuncs.c	24 Jul 2002 19:11:10 -0000	1.144
> --- src/backend/nodes/equalfuncs.c	27 Jul 2002 19:21:36 -0000
> ***************
> *** 1579,1584 ****
> --- 1579,1586 ----
> 		return false;
> 	if (!equal(a->alias, b->alias))
> 		return false;
> + 	if (!equal(a->coldeflist, b->coldeflist))
> + 		return false;
> 
> 	return true;
> }
> ***************
> *** 1691,1696 ****
> --- 1693,1700 ----
> 	if (!equal(a->subquery, b->subquery))
> 		return false;
> 	if (!equal(a->funcexpr, b->funcexpr))
> + 		return false;
> + 	if (!equal(a->coldeflist, b->coldeflist))
> 		return false;
> 	if (a->jointype != b->jointype)
> 		return false;
> Index: src/backend/nodes/outfuncs.c
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/backend/nodes/outfuncs.c,v
> retrieving revision 1.165
> diff -c -r1.165 outfuncs.c
> *** src/backend/nodes/outfuncs.c	18 Jul 2002 17:14:19 -0000	1.165
> --- src/backend/nodes/outfuncs.c	27 Jul 2002 19:21:36 -0000
> ***************
> *** 1004,1009 ****
> --- 1004,1011 ----
> 		case RTE_FUNCTION:
> 			appendStringInfo(str, ":funcexpr ");
> 			_outNode(str, node->funcexpr);
> + 			appendStringInfo(str, ":coldeflist ");
> + 			_outNode(str, node->coldeflist);
> 			break;
> 		case RTE_JOIN:
> 			appendStringInfo(str, ":jointype %d :joinaliasvars ",
> Index: src/backend/nodes/readfuncs.c
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/backend/nodes/readfuncs.c,v
> retrieving revision 1.126
> diff -c -r1.126 readfuncs.c
> *** src/backend/nodes/readfuncs.c	18 Jul 2002 17:14:19 -0000	1.126
> --- src/backend/nodes/readfuncs.c	27 Jul 2002 19:21:36 -0000
> ***************
> *** 1545,1550 ****
> --- 1545,1554 ----
> 		case RTE_FUNCTION:
> 			token = pg_strtok(&length); /* eat :funcexpr */
> 			local_node->funcexpr = nodeRead(true);		/* now read it */
> + 
> + 			token = pg_strtok(&length); /* eat :coldeflist */
> + 			local_node->coldeflist = nodeRead(true);	/* now read it */
> + 
> 			break;
> 
> 		case RTE_JOIN:
> Index: src/backend/parser/gram.y
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/backend/parser/gram.y,v
> retrieving revision 2.349
> diff -c -r2.349 gram.y
> *** src/backend/parser/gram.y	24 Jul 2002 19:11:10 -0000	2.349
> --- src/backend/parser/gram.y	27 Jul 2002 19:21:36 -0000
> ***************
> *** 218,224 ****
> 				target_list, update_target_list, insert_column_list,
> 				insert_target_list, def_list, opt_indirection,
> 				group_clause, TriggerFuncArgs, select_limit,
> ! 				opt_select_limit
> 
> %type <range>	into_clause, OptTempTableName
> 
> --- 218,224 ----
> 				target_list, update_target_list, insert_column_list,
> 				insert_target_list, def_list, opt_indirection,
> 				group_clause, TriggerFuncArgs, select_limit,
> ! 				opt_select_limit, tableFuncElementList
> 
> %type <range>	into_clause, OptTempTableName
> 
> ***************
> *** 259,266 ****
> 
> %type <vsetstmt> set_rest
> 
> ! %type <node>	OptTableElement, ConstraintElem
> ! %type <node>	columnDef
> %type <defelt>	def_elem
> %type <node>	def_arg, columnElem, where_clause, insert_column_item,
> 				a_expr, b_expr, c_expr, r_expr, AexprConst,
> --- 259,266 ----
> 
> %type <vsetstmt> set_rest
> 
> ! %type <node>	OptTableElement, ConstraintElem, tableFuncElement
> ! %type <node>	columnDef, tableFuncColumnDef
> %type <defelt>	def_elem
> %type <node>	def_arg, columnElem, where_clause, insert_column_item,
> 				a_expr, b_expr, c_expr, r_expr, AexprConst,
> ***************
> *** 4373,4378 ****
> --- 4373,4406 ----
> 				{
> 					RangeFunction *n = makeNode(RangeFunction);
> 					n->funccallnode = $1;
> + 					n->coldeflist = NIL;
> + 					$$ = (Node *) n;
> + 				}
> + 			| func_table AS '(' tableFuncElementList ')'
> + 				{
> + 					RangeFunction *n = makeNode(RangeFunction);
> + 					n->funccallnode = $1;
> + 					n->coldeflist = $4;
> + 					$$ = (Node *) n;
> + 				}
> + 			| func_table AS ColId '(' tableFuncElementList ')'
> + 				{
> + 					RangeFunction *n = makeNode(RangeFunction);
> + 					Alias *a = makeNode(Alias);
> + 					n->funccallnode = $1;
> + 					a->aliasname = $3;
> + 					n->alias = a;
> + 					n->coldeflist = $5;
> + 					$$ = (Node *) n;
> + 				}
> + 			| func_table ColId '(' tableFuncElementList ')'
> + 				{
> + 					RangeFunction *n = makeNode(RangeFunction);
> + 					Alias *a = makeNode(Alias);
> + 					n->funccallnode = $1;
> + 					a->aliasname = $2;
> + 					n->alias = a;
> + 					n->coldeflist = $4;
> 					$$ = (Node *) n;
> 				}
> 			| func_table alias_clause
> ***************
> *** 4380,4385 ****
> --- 4408,4414 ----
> 					RangeFunction *n = makeNode(RangeFunction);
> 					n->funccallnode = $1;
> 					n->alias = $2;
> + 					n->coldeflist = NIL;
> 					$$ = (Node *) n;
> 				}
> 			| select_with_parens
> ***************
> *** 4620,4625 ****
> --- 4649,4687 ----
> 			| /*EMPTY*/								{ $$ = NULL; }
> 		;
> 
> + 
> + tableFuncElementList:
> + 			tableFuncElementList ',' tableFuncElement
> + 				{
> + 					if ($3 != NULL)
> + 						$$ = lappend($1, $3);
> + 					else
> + 						$$ = $1;
> + 				}
> + 			| tableFuncElement
> + 				{
> + 					if ($1 != NULL)
> + 						$$ = makeList1($1);
> + 					else
> + 						$$ = NIL;
> + 				}
> + 			| /*EMPTY*/							{ $$ = NIL; }
> + 		;
> + 
> + tableFuncElement:
> + 			tableFuncColumnDef					{ $$ = $1; }
> + 		;
> + 
> + tableFuncColumnDef:	ColId Typename
> + 				{
> + 					ColumnDef *n = makeNode(ColumnDef);
> + 					n->colname = $1;
> + 					n->typename = $2;
> + 					n->constraints = NIL;
> + 
> + 					$$ = (Node *)n;
> + 				}
> + 		;
> 
> /*****************************************************************************
> *
> Index: src/backend/parser/parse_clause.c
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/backend/parser/parse_clause.c,v
> retrieving revision 1.94
> diff -c -r1.94 parse_clause.c
> *** src/backend/parser/parse_clause.c	20 Jun 2002 20:29:32 -0000	1.94
> --- src/backend/parser/parse_clause.c	27 Jul 2002 19:21:36 -0000
> ***************
> *** 515,521 ****
> 	 * OK, build an RTE for the function.
> 	 */
> 	rte = addRangeTableEntryForFunction(pstate, funcname, funcexpr,
> ! 										r->alias, true);
> 
> 	/*
> 	 * We create a RangeTblRef, but we do not add it to the joinlist or
> --- 515,521 ----
> 	 * OK, build an RTE for the function.
> 	 */
> 	rte = addRangeTableEntryForFunction(pstate, funcname, funcexpr,
> ! 										r, true);
> 
> 	/*
> 	 * We create a RangeTblRef, but we do not add it to the joinlist or
> Index: src/backend/parser/parse_relation.c
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/backend/parser/parse_relation.c,v
> retrieving revision 1.70
> diff -c -r1.70 parse_relation.c
> *** src/backend/parser/parse_relation.c	20 Jun 2002 20:29:33 -0000	1.70
> --- src/backend/parser/parse_relation.c	27 Jul 2002 20:00:42 -0000
> ***************
> *** 681,692 ****
> addRangeTableEntryForFunction(ParseState *pstate,
> 							 char *funcname,
> 							 Node *funcexpr,
> ! 							 Alias *alias,
> 							 bool inFromCl)
> {
> 	RangeTblEntry *rte = makeNode(RangeTblEntry);
> 	Oid			funcrettype = exprType(funcexpr);
> ! 	Oid			funcrelid;
> 	Alias	 *eref;
> 	int			numaliases;
> 	int			varattno;
> --- 681,694 ----
> addRangeTableEntryForFunction(ParseState *pstate,
> 							 char *funcname,
> 							 Node *funcexpr,
> ! 							 RangeFunction *rangefunc,
> 							 bool inFromCl)
> {
> 	RangeTblEntry *rte = makeNode(RangeTblEntry);
> 	Oid			funcrettype = exprType(funcexpr);
> ! 	char		functyptype;
> ! 	Alias	 *alias = rangefunc->alias;
> ! 	List	 *coldeflist = rangefunc->coldeflist;
> 	Alias	 *eref;
> 	int			numaliases;
> 	int			varattno;
> ***************
> *** 695,700 ****
> --- 697,703 ----
> 	rte->relid = InvalidOid;
> 	rte->subquery = NULL;
> 	rte->funcexpr = funcexpr;
> + 	rte->coldeflist = coldeflist;
> 	rte->alias = alias;
> 
> 	eref = alias ? (Alias *) copyObject(alias) : makeAlias(funcname, NIL);
> ***************
> *** 706,752 ****
> 	 * Now determine if the function returns a simple or composite type,
> 	 * and check/add column aliases.
> 	 */
> ! 	funcrelid = typeidTypeRelid(funcrettype);
> 
> ! 	if (OidIsValid(funcrelid))
> 	{
> 		/*
> ! 		 * Composite data type, i.e. a table's row type
> ! 		 *
> ! 		 * Get the rel's relcache entry. This access ensures that we have an
> ! 		 * up-to-date relcache entry for the rel.
> 		 */
> ! 		Relation	rel;
> ! 		int			maxattrs;
> 
> ! 		rel = heap_open(funcrelid, AccessShareLock);
> 
> ! 		/*
> ! 		 * Since the rel is open anyway, let's check that the number of column
> ! 		 * aliases is reasonable.
> ! 		 */
> ! 		maxattrs = RelationGetNumberOfAttributes(rel);
> ! 		if (maxattrs < numaliases)
> ! 			elog(ERROR, "Table \"%s\" has %d columns available but %d columns specified",
> ! 				 RelationGetRelationName(rel), maxattrs, numaliases);
> 
> ! 		/* fill in alias columns using actual column names */
> ! 		for (varattno = numaliases; varattno < maxattrs; varattno++)
> ! 		{
> ! 			char	 *attrname;
> 
> ! 			attrname = pstrdup(NameStr(rel->rd_att->attrs[varattno]->attname));
> ! 			eref->colnames = lappend(eref->colnames, makeString(attrname));
> 		}
> ! 
> ! 		/*
> ! 		 * Drop the rel refcount, but keep the access lock till end of
> ! 		 * transaction so that the table can't be deleted or have its schema
> ! 		 * modified underneath us.
> ! 		 */
> ! 		heap_close(rel, NoLock);
> 	}
> ! 	else
> 	{
> 		/*
> 		 * Must be a base data type, i.e. scalar.
> --- 709,764 ----
> 	 * Now determine if the function returns a simple or composite type,
> 	 * and check/add column aliases.
> 	 */
> ! 	functyptype = typeid_get_typtype(funcrettype);
> 
> ! 	if (functyptype == 'c')
> 	{
> 		/*
> ! 		 * Named composite data type, i.e. a table's row type
> 		 */
> ! 		Oid			funcrelid = typeidTypeRelid(funcrettype);
> 
> ! 		if (OidIsValid(funcrelid))
> ! 		{
> ! 			/*
> ! 			 * Get the rel's relcache entry. This access ensures that we have an
> ! 			 * up-to-date relcache entry for the rel.
> ! 			 */
> ! 			Relation	rel;
> ! 			int			maxattrs;
> ! 
> ! 			rel = heap_open(funcrelid, AccessShareLock);
> ! 
> ! 			/*
> ! 			 * Since the rel is open anyway, let's check that the number of column
> ! 			 * aliases is reasonable.
> ! 			 */
> ! 			maxattrs = RelationGetNumberOfAttributes(rel);
> ! 			if (maxattrs < numaliases)
> ! 				elog(ERROR, "Table \"%s\" has %d columns available but %d columns specified",
> ! 					 RelationGetRelationName(rel), maxattrs, numaliases);
> 
> ! 			/* fill in alias columns using actual column names */
> ! 			for (varattno = numaliases; varattno < maxattrs; varattno++)
> ! 			{
> ! 				char	 *attrname;
> 
> ! 				attrname = pstrdup(NameStr(rel->rd_att->attrs[varattno]->attname));
> ! 				eref->colnames = lappend(eref->colnames, makeString(attrname));
> ! 			}
> 
> ! 			/*
> ! 			 * Drop the rel refcount, but keep the access lock till end of
> ! 			 * transaction so that the table can't be deleted or have its schema
> ! 			 * modified underneath us.
> ! 			 */
> ! 			heap_close(rel, NoLock);
> 		}
> ! 		else
> ! 			elog(ERROR, "Invalid return relation specified for function %s",
> ! 				 funcname);
> 	}
> ! 	else if (functyptype == 'b')
> 	{
> 		/*
> 		 * Must be a base data type, i.e. scalar.
> ***************
> *** 758,763 ****
> --- 770,791 ----
> 		if (numaliases == 0)
> 			eref->colnames = makeList1(makeString(funcname));
> 	}
> + 	else if (functyptype == 'p' && funcrettype == RECORDOID)
> + 	{
> + 		List	 *col;
> + 
> + 		foreach(col, coldeflist)
> + 		{
> + 			char	 *attrname;
> + 			ColumnDef *n = lfirst(col);
> + 
> + 			attrname = pstrdup(n->colname);
> + 			eref->colnames = lappend(eref->colnames, makeString(attrname));
> + 		}
> + 	}
> + 	else
> + 		elog(ERROR, "Unknown kind of return type specified for function %s",
> + 			 funcname);
> 
> 	/*----------
> 	 * Flags:
> ***************
> *** 1030,1082 ****
> 		case RTE_FUNCTION:
> 			{
> 				/* Function RTE */
> ! 				Oid			funcrettype = exprType(rte->funcexpr);
> ! 				Oid			funcrelid = typeidTypeRelid(funcrettype);
> ! 
> ! 				if (OidIsValid(funcrelid))
> 				{
> ! 					/*
> ! 					 * Composite data type, i.e. a table's row type
> ! 					 * Same as ordinary relation RTE
> ! 					 */
> ! 					Relation	rel;
> ! 					int			maxattrs;
> ! 					int			numaliases;
> ! 
> ! 					rel = heap_open(funcrelid, AccessShareLock);
> ! 					maxattrs = RelationGetNumberOfAttributes(rel);
> ! 					numaliases = length(rte->eref->colnames);
> ! 
> ! 					for (varattno = 0; varattno < maxattrs; varattno++)
> 					{
> - 						Form_pg_attribute attr = rel->rd_att->attrs[varattno];
> 
> ! 						if (colnames)
> ! 						{
> ! 							char	 *label;
> ! 
> ! 							if (varattno < numaliases)
> ! 								label = strVal(nth(varattno, rte->eref->colnames));
> ! 							else
> ! 								label = NameStr(attr->attname);
> ! 							*colnames = lappend(*colnames, makeString(pstrdup(label)));
> ! 						}
> 
> ! 						if (colvars)
> 						{
> ! 							Var		 *varnode;
> 
> ! 							varnode = makeVar(rtindex, attr->attnum,
> ! 											 attr->atttypid, attr->atttypmod,
> ! 											 sublevels_up);
> 
> ! 							*colvars = lappend(*colvars, varnode);
> 						}
> - 					}
> 
> ! 					heap_close(rel, AccessShareLock);
> 				}
> ! 				else
> 				{
> 					/*
> 					 * Must be a base data type, i.e. scalar
> --- 1058,1124 ----
> 		case RTE_FUNCTION:
> 			{
> 				/* Function RTE */
> ! 				Oid	funcrettype = exprType(rte->funcexpr);
> ! 				char functyptype = typeid_get_typtype(funcrettype);
> ! 				List *coldeflist = rte->coldeflist;
> ! 
> ! 				/*
> ! 				 * Build a suitable tupledesc representing the output rows
> ! 				 */
> ! 				if (functyptype == 'c')
> 				{
> ! 					Oid	funcrelid = typeidTypeRelid(funcrettype);
> ! 					if (OidIsValid(funcrelid))
> 					{
> 
> ! 						/*
> ! 						 * Composite data type, i.e. a table's row type
> ! 						 * Same as ordinary relation RTE
> ! 						 */
> ! 						Relation	rel;
> ! 						int			maxattrs;
> ! 						int			numaliases;
> ! 
> ! 						rel = heap_open(funcrelid, AccessShareLock);
> ! 						maxattrs = RelationGetNumberOfAttributes(rel);
> ! 						numaliases = length(rte->eref->colnames);
> 
> ! 						for (varattno = 0; varattno < maxattrs; varattno++)
> 						{
> ! 							Form_pg_attribute attr = rel->rd_att->attrs[varattno];
> 
> ! 							if (colnames)
> ! 							{
> ! 								char	 *label;
> ! 
> ! 								if (varattno < numaliases)
> ! 									label = strVal(nth(varattno, rte->eref->colnames));
> ! 								else
> ! 									label = NameStr(attr->attname);
> ! 								*colnames = lappend(*colnames, makeString(pstrdup(label)));
> ! 							}
> ! 
> ! 							if (colvars)
> ! 							{
> ! 								Var		 *varnode;
> ! 
> ! 								varnode = makeVar(rtindex,
> ! 												attr->attnum,
> ! 												attr->atttypid,
> ! 												attr->atttypmod,
> ! 												sublevels_up);
> 
> ! 								*colvars = lappend(*colvars, varnode);
> ! 							}
> 						}
> 
> ! 						heap_close(rel, AccessShareLock);
> ! 					}
> ! 					else
> ! 						elog(ERROR, "Invalid return relation specified"
> ! 									" for function");
> 				}
> ! 				else if (functyptype == 'b')
> 				{
> 					/*
> 					 * Must be a base data type, i.e. scalar
> ***************
> *** 1096,1101 ****
> --- 1138,1184 ----
> 						*colvars = lappend(*colvars, varnode);
> 					}
> 				}
> + 				else if (functyptype == 'p' && funcrettype == RECORDOID)
> + 				{
> + 					List	 *col;
> + 					int			attnum = 0;
> + 
> + 					foreach(col, coldeflist)
> + 					{
> + 						ColumnDef *colDef = lfirst(col);
> + 
> + 						attnum++;
> + 						if (colnames)
> + 						{
> + 							char	 *attrname;
> + 
> + 							attrname = pstrdup(colDef->colname);
> + 							*colnames = lappend(*colnames, makeString(attrname));
> + 						}
> + 
> + 						if (colvars)
> + 						{
> + 							Var		 *varnode;
> + 							HeapTuple	typeTuple;
> + 							Oid			atttypid;
> + 
> + 							typeTuple = typenameType(colDef->typename);
> + 							atttypid = HeapTupleGetOid(typeTuple);
> + 							ReleaseSysCache(typeTuple);
> + 
> + 							varnode = makeVar(rtindex,
> + 											attnum,
> + 											atttypid,
> + 											-1,
> + 											sublevels_up);
> + 
> + 							*colvars = lappend(*colvars, varnode);
> + 						}
> + 					}
> + 				}
> + 				else
> + 					elog(ERROR, "Unknown kind of return type specified"
> + 								" for function");
> 			}
> 			break;
> 		case RTE_JOIN:
> ***************
> *** 1277,1308 ****
> 		case RTE_FUNCTION:
> 			{
> 				/* Function RTE */
> ! 				Oid			funcrettype = exprType(rte->funcexpr);
> ! 				Oid			funcrelid = typeidTypeRelid(funcrettype);
> ! 
> ! 				if (OidIsValid(funcrelid))
> 				{
> 					/*
> 					 * Composite data type, i.e. a table's row type
> 					 * Same as ordinary relation RTE
> 					 */
> ! 					HeapTuple			tp;
> ! 					Form_pg_attribute	att_tup;
> 
> ! 					tp = SearchSysCache(ATTNUM,
> ! 										ObjectIdGetDatum(funcrelid),
> ! 										Int16GetDatum(attnum),
> ! 										0, 0);
> ! 					/* this shouldn't happen... */
> ! 					if (!HeapTupleIsValid(tp))
> ! 						elog(ERROR, "Relation %s does not have attribute %d",
> ! 							 get_rel_name(funcrelid), attnum);
> ! 					att_tup = (Form_pg_attribute) GETSTRUCT(tp);
> ! 					*vartype = att_tup->atttypid;
> ! 					*vartypmod = att_tup->atttypmod;
> ! 					ReleaseSysCache(tp);
> 				}
> ! 				else
> 				{
> 					/*
> 					 * Must be a base data type, i.e. scalar
> --- 1360,1403 ----
> 		case RTE_FUNCTION:
> 			{
> 				/* Function RTE */
> ! 				Oid funcrettype = exprType(rte->funcexpr);
> ! 				char functyptype = typeid_get_typtype(funcrettype);
> ! 				List *coldeflist = rte->coldeflist;
> ! 
> ! 				/*
> ! 				 * Build a suitable tupledesc representing the output rows
> ! 				 */
> ! 				if (functyptype == 'c')
> 				{
> 					/*
> 					 * Composite data type, i.e. a table's row type
> 					 * Same as ordinary relation RTE
> 					 */
> ! 					Oid funcrelid = typeidTypeRelid(funcrettype);
> ! 
> ! 					if (OidIsValid(funcrelid))
> ! 					{
> ! 						HeapTuple			tp;
> ! 						Form_pg_attribute	att_tup;
> 
> ! 						tp = SearchSysCache(ATTNUM,
> ! 											ObjectIdGetDatum(funcrelid),
> ! 											Int16GetDatum(attnum),
> ! 											0, 0);
> ! 						/* this shouldn't happen... */
> ! 						if (!HeapTupleIsValid(tp))
> ! 							elog(ERROR, "Relation %s does not have attribute %d",
> ! 								 get_rel_name(funcrelid), attnum);
> ! 						att_tup = (Form_pg_attribute) GETSTRUCT(tp);
> ! 						*vartype = att_tup->atttypid;
> ! 						*vartypmod = att_tup->atttypmod;
> ! 						ReleaseSysCache(tp);
> ! 					}
> ! 					else
> ! 						elog(ERROR, "Invalid return relation specified"
> ! 									" for function");
> 				}
> ! 				else if (functyptype == 'b')
> 				{
> 					/*
> 					 * Must be a base data type, i.e. scalar
> ***************
> *** 1310,1315 ****
> --- 1405,1426 ----
> 					*vartype = funcrettype;
> 					*vartypmod = -1;
> 				}
> + 				else if (functyptype == 'p' && funcrettype == RECORDOID)
> + 				{
> + 					ColumnDef *colDef = nth(attnum - 1, coldeflist);
> + 					HeapTuple	typeTuple;
> + 					Oid			atttypid;
> + 
> + 					typeTuple = typenameType(colDef->typename);
> + 					atttypid = HeapTupleGetOid(typeTuple);
> + 					ReleaseSysCache(typeTuple);
> + 
> + 					*vartype = atttypid;
> + 					*vartypmod = -1;
> + 				}
> + 				else
> + 					elog(ERROR, "Unknown kind of return type specified"
> + 								" for function");
> 			}
> 			break;
> 		case RTE_JOIN:
> ***************
> *** 1448,1451 ****
> --- 1559,1587 ----
> 		elog(NOTICE, "Adding missing FROM-clause entry%s for table \"%s\"",
> 			 pstate->parentParseState != NULL ? " in subquery" : "",
> 			 relation->relname);
> + }
> + 
> + char
> + typeid_get_typtype(Oid typeid)
> + {
> + 	HeapTuple		typeTuple;
> + 	Form_pg_type	typeStruct;
> + 	char			result;
> + 
> + 	/*
> + 	 * determine if the function returns a simple, named composite,
> + 	 * or anonymous composite type
> + 	 */
> + 	typeTuple = SearchSysCache(TYPEOID,
> + 							 ObjectIdGetDatum(typeid),
> + 							 0, 0, 0);
> + 	if (!HeapTupleIsValid(typeTuple))
> + 		elog(ERROR, "cache lookup for type %u failed", typeid);
> + 	typeStruct = (Form_pg_type) GETSTRUCT(typeTuple);
> + 
> + 	result = typeStruct->typtype;
> + 
> + 	ReleaseSysCache(typeTuple);
> + 
> + 	return result;
> }
> Index: src/include/catalog/pg_type.h
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/include/catalog/pg_type.h,v
> retrieving revision 1.125
> diff -c -r1.125 pg_type.h
> *** src/include/catalog/pg_type.h	24 Jul 2002 19:11:13 -0000	1.125
> --- src/include/catalog/pg_type.h	27 Jul 2002 19:58:03 -0000
> ***************
> *** 60,69 ****
> 	bool		typbyval;
> 
> 	/*
> ! 	 * typtype is 'b' for a basic type and 'c' for a catalog type (ie a
> ! 	 * class). If typtype is 'c', typrelid is the OID of the class' entry
> ! 	 * in pg_class. (Why do we need an entry in pg_type for classes,
> ! 	 * anyway?)
> 	 */
> 	char		typtype;
> 
> --- 60,69 ----
> 	bool		typbyval;
> 
> 	/*
> ! 	 * typtype is 'b' for a basic type, 'c' for a catalog type (ie a
> ! 	 * class), or 'p' for a pseudo type. If typtype is 'c', typrelid is the
> ! 	 * OID of the class' entry in pg_class. (Why do we need an entry in
> ! 	 * pg_type for classes, anyway?)
> 	 */
> 	char		typtype;
> 
> ***************
> *** 501,506 ****
> --- 501,516 ----
> DATA(insert OID = 2210 ( _regclass PGNSP PGUID -1 f b t \054 0 2205 array_in array_out i x f 0 -1 0 _null_ _null_ ));
> DATA(insert OID = 2211 ( _regtype PGNSP PGUID -1 f b t \054 0 2206 array_in array_out i x f 0 -1 0 _null_ _null_ ));
> 
> + /*
> + * pseudo-types
> + *
> + * types with typtype='p' are special types that represent classes of types
> + * that are not easily defined in advance. Currently there is only one pseudo
> + * type -- record. The record type is used to specify that the value is a
> + * tuple, but of unknown structure until runtime.
> + */
> + DATA(insert OID = 2249 ( record PGNSP PGUID 4 t p t \054 0 0 oidin oidout i p f 0 -1 0 _null_ _null_ ));
> + #define RECORDOID		2249
> 
> /*
> * prototypes for functions in pg_type.c
> Index: src/include/nodes/execnodes.h
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/include/nodes/execnodes.h,v
> retrieving revision 1.70
> diff -c -r1.70 execnodes.h
> *** src/include/nodes/execnodes.h	20 Jun 2002 20:29:49 -0000	1.70
> --- src/include/nodes/execnodes.h	28 Jul 2002 22:09:25 -0000
> ***************
> *** 509,519 ****
> *		Function nodes are used to scan the results of a
> *		function appearing in FROM (typically a function returning set).
> *
> ! *		functionmode			function operating mode:
> *							- repeated call
> *							- materialize
> *							- return query
> *		tuplestorestate		private state of tuplestore.c
> * ----------------
> */
> typedef enum FunctionMode
> --- 509,525 ----
> *		Function nodes are used to scan the results of a
> *		function appearing in FROM (typically a function returning set).
> *
> ! *		functionmode		function operating mode:
> *							- repeated call
> *							- materialize
> *							- return query
> + *		tupdesc				function's return tuple description
> *		tuplestorestate		private state of tuplestore.c
> + *		funcexpr			function expression being evaluated
> + *		returnsTuple		does function return tuples?
> + *		fn_typeid			OID of function return type
> + *		fn_typtype			return Datum type, i.e. 'b'ase,
> + *							'c'atalog, or 'p'seudo
> * ----------------
> */
> typedef enum FunctionMode
> ***************
> *** 525,536 ****
> 
> typedef struct FunctionScanState
> {
> ! 	CommonScanState csstate;	/* its first field is NodeTag */
> 	FunctionMode	functionmode;
> 	TupleDesc		tupdesc;
> 	void		 *tuplestorestate;
> ! 	Node		 *funcexpr;	/* function expression being evaluated */
> ! 	bool			returnsTuple; /* does function return tuples? */
> } FunctionScanState;
> 
> /* ----------------------------------------------------------------
> --- 531,544 ----
> 
> typedef struct FunctionScanState
> {
> ! 	CommonScanState csstate;		/* its first field is NodeTag */
> 	FunctionMode	functionmode;
> 	TupleDesc		tupdesc;
> 	void		 *tuplestorestate;
> ! 	Node		 *funcexpr;
> ! 	bool			returnsTuple;
> ! 	Oid				fn_typeid;
> ! 	char			fn_typtype;
> } FunctionScanState;
> 
> /* ----------------------------------------------------------------
> Index: src/include/nodes/parsenodes.h
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/include/nodes/parsenodes.h,v
> retrieving revision 1.194
> diff -c -r1.194 parsenodes.h
> *** src/include/nodes/parsenodes.h	24 Jul 2002 19:11:14 -0000	1.194
> --- src/include/nodes/parsenodes.h	27 Jul 2002 19:21:36 -0000
> ***************
> *** 400,405 ****
> --- 400,407 ----
> 	NodeTag		type;
> 	Node	 *funccallnode;	/* untransformed function call tree */
> 	Alias	 *alias;			/* table alias & optional column aliases */
> + 	List	 *coldeflist;		/* list of ColumnDef nodes for runtime
> + 								 * assignment of RECORD TupleDesc */
> } RangeFunction;
> 
> /*
> ***************
> *** 527,532 ****
> --- 529,536 ----
> 	 * Fields valid for a function RTE (else NULL):
> 	 */
> 	Node	 *funcexpr;		/* expression tree for func call */
> + 	List	 *coldeflist;		/* list of ColumnDef nodes for runtime
> + 								 * assignment of RECORD TupleDesc */
> 
> 	/*
> 	 * Fields valid for a join RTE (else NULL/zero):
> Index: src/include/parser/parse_relation.h
> ===================================================================
> RCS file: /opt/src/cvs/pgsql/src/include/parser/parse_relation.h,v
> retrieving revision 1.34
> diff -c -r1.34 parse_relation.h
> *** src/include/parser/parse_relation.h	20 Jun 2002 20:29:51 -0000	1.34
> --- src/include/parser/parse_relation.h	27 Jul 2002 19:21:36 -0000
> ***************
> *** 44,50 ****
> extern RangeTblEntry *addRangeTableEntryForFunction(ParseState *pstate,
> 													char *funcname,
> 													Node *funcexpr,
> ! 
\t\t\t\t\t\t\t\t\t\t\t\t\tAlias *alias,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tbool inFromCl);\n> extern RangeTblEntry *addRangeTableEntryForJoin(ParseState *pstate,\n> \t\t\t\t\t\t List *colnames,\n> --- 44,50 ----\n> extern RangeTblEntry *addRangeTableEntryForFunction(ParseState *pstate,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tchar *funcname,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tNode *funcexpr,\n> ! \t\t\t\t\t\t\t\t\t\t\t\t\tRangeFunction *rangefunc,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tbool inFromCl);\n> extern RangeTblEntry *addRangeTableEntryForJoin(ParseState *pstate,\n> \t\t\t\t\t\t List *colnames,\n> ***************\n> *** 61,65 ****\n> --- 61,66 ----\n> extern int\tattnameAttNum(Relation rd, char *a);\n> extern Name attnumAttName(Relation rd, int attid);\n> extern Oid\tattnumTypeId(Relation rd, int attid);\n> + extern char typeid_get_typtype(Oid typeid);\n> \n> #endif /* PARSE_RELATION_H */\n> Index: src/test/regress/expected/type_sanity.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/test/regress/expected/type_sanity.out,v\n> retrieving revision 1.9\n> diff -c -r1.9 type_sanity.out\n> *** src/test/regress/expected/type_sanity.out\t24 Jul 2002 19:11:14 -0000\t1.9\n> --- src/test/regress/expected/type_sanity.out\t29 Jul 2002 00:56:57 -0000\n> ***************\n> *** 16,22 ****\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> WHERE (p1.typlen <= 0 AND p1.typlen != -1) OR\n> ! (p1.typtype != 'b' AND p1.typtype != 'c') OR\n> NOT p1.typisdefined OR\n> (p1.typalign != 'c' AND p1.typalign != 's' AND\n> p1.typalign != 'i' AND p1.typalign != 'd') OR\n> --- 16,22 ----\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> WHERE (p1.typlen <= 0 AND p1.typlen != -1) OR\n> ! 
(p1.typtype != 'b' AND p1.typtype != 'c' AND p1.typtype != 'p') OR\n> NOT p1.typisdefined OR\n> (p1.typalign != 'c' AND p1.typalign != 's' AND\n> p1.typalign != 'i' AND p1.typalign != 'd') OR\n> ***************\n> *** 60,66 ****\n> -- NOTE: as of 7.3, this check finds SET, smgr, and unknown.\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> ! WHERE p1.typtype != 'c' AND p1.typname NOT LIKE '\\\\_%' AND NOT EXISTS\n> (SELECT 1 FROM pg_type as p2\n> WHERE p2.typname = ('_' || p1.typname)::name AND\n> p2.typelem = p1.oid);\n> --- 60,66 ----\n> -- NOTE: as of 7.3, this check finds SET, smgr, and unknown.\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> ! WHERE p1.typtype = 'b' AND p1.typname NOT LIKE '\\\\_%' AND NOT EXISTS\n> (SELECT 1 FROM pg_type as p2\n> WHERE p2.typname = ('_' || p1.typname)::name AND\n> p2.typelem = p1.oid);\n> Index: src/test/regress/sql/type_sanity.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/test/regress/sql/type_sanity.sql,v\n> retrieving revision 1.9\n> diff -c -r1.9 type_sanity.sql\n> *** src/test/regress/sql/type_sanity.sql\t24 Jul 2002 19:11:14 -0000\t1.9\n> --- src/test/regress/sql/type_sanity.sql\t29 Jul 2002 00:52:41 -0000\n> ***************\n> *** 19,25 ****\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> WHERE (p1.typlen <= 0 AND p1.typlen != -1) OR\n> ! (p1.typtype != 'b' AND p1.typtype != 'c') OR\n> NOT p1.typisdefined OR\n> (p1.typalign != 'c' AND p1.typalign != 's' AND\n> p1.typalign != 'i' AND p1.typalign != 'd') OR\n> --- 19,25 ----\n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> WHERE (p1.typlen <= 0 AND p1.typlen != -1) OR\n> ! (p1.typtype != 'b' AND p1.typtype != 'c' AND p1.typtype != 'p') OR\n> NOT p1.typisdefined OR\n> (p1.typalign != 'c' AND p1.typalign != 's' AND\n> p1.typalign != 'i' AND p1.typalign != 'd') OR\n> ***************\n> *** 55,61 ****\n> \n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> ! 
WHERE p1.typtype != 'c' AND p1.typname NOT LIKE '\\\\_%' AND NOT EXISTS\n> (SELECT 1 FROM pg_type as p2\n> WHERE p2.typname = ('_' || p1.typname)::name AND\n> p2.typelem = p1.oid);\n> --- 55,61 ----\n> \n> SELECT p1.oid, p1.typname\n> FROM pg_type as p1\n> ! WHERE p1.typtype = 'b' AND p1.typname NOT LIKE '\\\\_%' AND NOT EXISTS\n> (SELECT 1 FROM pg_type as p2\n> WHERE p2.typname = ('_' || p1.typname)::name AND\n> p2.typelem = p1.oid);\n\n> Index: doc/src/sgml/ref/select.sgml\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/doc/src/sgml/ref/select.sgml,v\n> retrieving revision 1.54\n> diff -c -r1.54 select.sgml\n> *** doc/src/sgml/ref/select.sgml\t23 Apr 2002 02:07:16 -0000\t1.54\n> --- doc/src/sgml/ref/select.sgml\t29 Jul 2002 04:16:51 -0000\n> ***************\n> *** 40,45 ****\n> --- 40,51 ----\n> ( <replaceable class=\"PARAMETER\">select</replaceable> )\n> [ AS ] <replaceable class=\"PARAMETER\">alias</replaceable> [ ( <replaceable class=\"PARAMETER\">column_alias_list</replaceable> ) ]\n> |\n> + <replaceable class=\"PARAMETER\">table_function_name</replaceable> ( [ <replaceable class=\"parameter\">argtype</replaceable> [, ...] ] )\n> + [ AS ] <replaceable class=\"PARAMETER\">alias</replaceable> [ ( <replaceable class=\"PARAMETER\">column_alias_list</replaceable> | <replaceable class=\"PARAMETER\">column_definition_list</replaceable> ) ]\n> + |\n> + <replaceable class=\"PARAMETER\">table_function_name</replaceable> ( [ <replaceable class=\"parameter\">argtype</replaceable> [, ...] 
] )\n> + AS ( <replaceable class=\"PARAMETER\">column_definition_list</replaceable> )\n> + |\n> <replaceable class=\"PARAMETER\">from_item</replaceable> [ NATURAL ] <replaceable class=\"PARAMETER\">join_type</replaceable> <replaceable class=\"PARAMETER\">from_item</replaceable>\n> [ ON <replaceable class=\"PARAMETER\">join_condition</replaceable> | USING ( <replaceable class=\"PARAMETER\">join_column_list</replaceable> ) ]\n> </synopsis>\n> ***************\n> *** 82,88 ****\n> <term><replaceable class=\"PARAMETER\">from_item</replaceable></term>\n> <listitem>\n> <para>\n> ! A table reference, sub-SELECT, or JOIN clause. See below for details.\n> </para>\n> </listitem>\n> </varlistentry>\n> --- 88,94 ----\n> <term><replaceable class=\"PARAMETER\">from_item</replaceable></term>\n> <listitem>\n> <para>\n> ! A table reference, sub-SELECT, table function, or JOIN clause. See below for details.\n> </para>\n> </listitem>\n> </varlistentry>\n> ***************\n> *** 156,161 ****\n> --- 162,184 ----\n> </para>\n> </listitem>\n> </varlistentry>\n> + \n> + <varlistentry>\n> + <term><replaceable class=\"PARAMETER\">table function</replaceable></term>\n> + <listitem>\n> + <para>\n> + \tA table function can appear in the FROM clause. This acts as though\n> + \tits output were created as a temporary table for the duration of\n> + \tthis single SELECT command. An alias may also be used. If an alias is\n> + \twritten, a column alias list can also be written to provide\tsubstitute names\n> + \tfor one or more columns of the table function. If the table function has been\n> + \tdefined as returning the RECORD data type, an alias, or the keyword AS, must\n> + also be present, followed by a column definition list in the form\n> + \t( <replaceable class=\"PARAMETER\">column_name</replaceable> <replaceable class=\"PARAMETER\">data_type</replaceable> [, ... 
] ).\n> + \tThe column definition list must match the actual number and types returned by the function.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> \n> <varlistentry>\n> <term><replaceable class=\"PARAMETER\">join_type</replaceable></term>\n> ***************\n> *** 381,386 ****\n> --- 404,422 ----\n> </para>\n> \n> <para>\n> + A FROM item can be a table function (i.e. a function that returns\n> + multiple rows and columns). When a table function is created, it may\n> + \tbe defined to return a named scalar or composite data type (an existing\n> + \tscalar data type, or a table or view name), or it may be defined to return\n> + \ta RECORD data type. When a table function is defined to return RECORD, it\n> + \tmust be followed in the FROM clause by an alias, or the keyword AS alone,\n> + \tand then by a parenthesized list of column names and types. This provides\n> + \ta query-time composite type definition. The FROM clause composite type\n> + \tmust match the actual composite type returned from the function or an\n> + \tERROR will be generated.\n> + </para>\n> + \n> + <para>\n> Finally, a FROM item can be a JOIN clause, which combines two simpler\n> FROM items. 
(Use parentheses if necessary to determine the order\n> of nesting.)\n> ***************\n> *** 925,930 ****\n> --- 961,1003 ----\n> Warren Beatty\n> Westward\n> Woody Allen\n> + </programlisting>\n> + </para>\n> + \n> + <para>\n> + This example shows how to use a table function, both with and without\n> + a column definition list.\n> + \n> + <programlisting>\n> + distributors:\n> + did | name\n> + -----+--------------\n> + 108 | Westward\n> + 111 | Walt Disney\n> + 112 | Warner Bros.\n> + ...\n> + \n> + CREATE FUNCTION distributors(int)\n> + RETURNS SETOF distributors AS '\n> + SELECT * FROM distributors WHERE did = $1;\n> + ' LANGUAGE SQL;\n> + \n> + SELECT * FROM distributors(111);\n> + did | name\n> + -----+-------------\n> + 111 | Walt Disney\n> + (1 row)\n> + \n> + CREATE FUNCTION distributors_2(int)\n> + RETURNS SETOF RECORD AS '\n> + SELECT * FROM distributors WHERE did = $1;\n> + ' LANGUAGE SQL;\n> + \n> + SELECT * FROM distributors_2(111) AS (f1 int, f2 text);\n> + f1 | f2\n> + -----+-------------\n> + 111 | Walt Disney\n> + (1 row)\n> </programlisting>\n> </para>\n> </refsect1>\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Aug 2002 15:48:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "\nGot it and applied. Thanks. 
This is a major feature now.\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> Bruce Momjian wrote:\n> > I am sorry but I am unable to apply this patch because of the DROP\n> > COLUMN patch that was applied since you submitted this. \n> > \n> > It had rejections in gram.y and parse_relation.c, but those were easy to\n> > fix. The big problem is pg_proc.c, where the code changes can not be\n> > merged.\n> > \n> > I am attaching the rejected part of the patch. If you can send me a\n> > fixed version of just that change, I can commit the rest.\n> > \n> \n> OK. Here is a patch against current cvs for just pg_proc.c. This \n> includes all the changes for that file (i.e. not just the one rejected \n> hunk).\n> \n> Thanks,\n> \n> Joe\n> \n> \n\n> Index: src/backend/catalog/pg_proc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/catalog/pg_proc.c,v\n> retrieving revision 1.82\n> diff -c -r1.82 pg_proc.c\n> *** src/backend/catalog/pg_proc.c\t2 Aug 2002 18:15:05 -0000\t1.82\n> --- src/backend/catalog/pg_proc.c\t4 Aug 2002 06:21:51 -0000\n> ***************\n> *** 25,30 ****\n> --- 25,31 ----\n> #include \"miscadmin.h\"\n> #include \"parser/parse_coerce.h\"\n> #include \"parser/parse_expr.h\"\n> + #include \"parser/parse_relation.h\"\n> #include \"parser/parse_type.h\"\n> #include \"tcop/tcopprot.h\"\n> #include \"utils/builtins.h\"\n> ***************\n> *** 33,39 ****\n> #include \"utils/syscache.h\"\n> \n> \n> ! static void checkretval(Oid rettype, List *queryTreeList);\n> Datum fmgr_internal_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_c_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_sql_validator(PG_FUNCTION_ARGS);\n> --- 34,40 ----\n> #include \"utils/syscache.h\"\n> \n> \n> ! 
static void checkretval(Oid rettype, char fn_typtype, List *queryTreeList);\n> Datum fmgr_internal_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_c_validator(PG_FUNCTION_ARGS);\n> Datum fmgr_sql_validator(PG_FUNCTION_ARGS);\n> ***************\n> *** 367,460 ****\n> \t */\n> \ttlistlen = ExecCleanTargetListLength(tlist);\n> \n> - \t/*\n> - \t * For base-type returns, the target list should have exactly one\n> - \t * entry, and its type should agree with what the user declared. (As\n> - \t * of Postgres 7.2, we accept binary-compatible types too.)\n> - \t */\n> \ttyperelid = typeidTypeRelid(rettype);\n> - \tif (typerelid == InvalidOid)\n> - \t{\n> - \t\tif (tlistlen != 1)\n> - \t\t\telog(ERROR, \"function declared to return %s returns multiple columns in final SELECT\",\n> - \t\t\t\t format_type_be(rettype));\n> \n> ! \t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> ! \t\tif (!IsBinaryCompatible(restype, rettype))\n> ! \t\t\telog(ERROR, \"return type mismatch in function: declared to return %s, returns %s\",\n> ! \t\t\t\t format_type_be(rettype), format_type_be(restype));\n> \n> ! \t\treturn;\n> ! \t}\n> \n> - \t/*\n> - \t * If the target list is of length 1, and the type of the varnode in\n> - \t * the target list matches the declared return type, this is okay.\n> - \t * This can happen, for example, where the body of the function is\n> - \t * 'SELECT func2()', where func2 has the same return type as the\n> - \t * function that's calling it.\n> - \t */\n> - \tif (tlistlen == 1)\n> - \t{\n> - \t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> - \t\tif (IsBinaryCompatible(restype, rettype))\n> \t\t\treturn;\n> \t}\n> \n> ! \t/*\n> ! \t * By here, the procedure returns a tuple or set of tuples. This part\n> ! \t * of the typechecking is a hack. We look up the relation that is the\n> ! \t * declared return type, and scan the non-deleted attributes to ensure\n> ! \t * that they match the datatypes of the non-resjunk columns.\n> ! \t */\n> ! 
\treln = heap_open(typerelid, AccessShareLock);\n> ! \trelnatts = reln->rd_rel->relnatts;\n> ! \trellogcols = 0;\t\t\t\t/* we'll count nondeleted cols as we go */\n> ! \tcolindex = 0;\n> ! \n> ! \tforeach(tlistitem, tlist)\n> ! \t{\n> ! \t\tTargetEntry *tle = (TargetEntry *) lfirst(tlistitem);\n> ! \t\tForm_pg_attribute attr;\n> ! \t\tOid\t\t\ttletype;\n> ! \t\tOid\t\t\tatttype;\n> \n> ! \t\tif (tle->resdom->resjunk)\n> ! \t\t\tcontinue;\n> \n> ! \t\tdo {\n> \t\t\tcolindex++;\n> \t\t\tif (colindex > relnatts)\n> ! \t\t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t\t\t format_type_be(rettype), rellogcols);\n> ! \t\t\tattr = reln->rd_att->attrs[colindex - 1];\n> ! \t\t} while (attr->attisdropped);\n> ! \t\trellogcols++;\n> ! \n> ! \t\ttletype = exprType(tle->expr);\n> ! \t\tatttype = attr->atttypid;\n> ! \t\tif (!IsBinaryCompatible(tletype, atttype))\n> ! \t\t\telog(ERROR, \"function declared to return %s returns %s instead of %s at column %d\",\n> ! \t\t\t\t format_type_be(rettype),\n> ! \t\t\t\t format_type_be(tletype),\n> ! \t\t\t\t format_type_be(atttype),\n> ! \t\t\t\t rellogcols);\n> ! \t}\n> ! \n> ! \tfor (;;)\n> ! \t{\n> ! \t\tcolindex++;\n> ! \t\tif (colindex > relnatts)\n> ! \t\t\tbreak;\n> ! \t\tif (!reln->rd_att->attrs[colindex - 1]->attisdropped)\n> ! \t\t\trellogcols++;\n> ! \t}\n> \n> ! \tif (tlistlen != rellogcols)\n> ! \t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t format_type_be(rettype), rellogcols);\n> \n> ! \theap_close(reln, AccessShareLock);\n> }\n> \n> \n> --- 368,480 ----\n> \t */\n> \ttlistlen = ExecCleanTargetListLength(tlist);\n> \n> \ttyperelid = typeidTypeRelid(rettype);\n> \n> ! \tif (fn_typtype == 'b')\n> ! \t{\n> ! \t\t/*\n> ! \t\t * For base-type returns, the target list should have exactly one\n> ! \t\t * entry, and its type should agree with what the user declared. (As\n> ! 
\t\t * of Postgres 7.2, we accept binary-compatible types too.)\n> ! \t\t */\n> \n> ! \t\tif (typerelid == InvalidOid)\n> ! \t\t{\n> ! \t\t\tif (tlistlen != 1)\n> ! \t\t\t\telog(ERROR, \"function declared to return %s returns multiple columns in final SELECT\",\n> ! \t\t\t\t\t format_type_be(rettype));\n> ! \n> ! \t\t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> ! \t\t\tif (!IsBinaryCompatible(restype, rettype))\n> ! \t\t\t\telog(ERROR, \"return type mismatch in function: declared to return %s, returns %s\",\n> ! \t\t\t\t\t format_type_be(rettype), format_type_be(restype));\n> \n> \t\t\treturn;\n> + \t\t}\n> + \n> + \t\t/*\n> + \t\t * If the target list is of length 1, and the type of the varnode in\n> + \t\t * the target list matches the declared return type, this is okay.\n> + \t\t * This can happen, for example, where the body of the function is\n> + \t\t * 'SELECT func2()', where func2 has the same return type as the\n> + \t\t * function that's calling it.\n> + \t\t */\n> + \t\tif (tlistlen == 1)\n> + \t\t{\n> + \t\t\trestype = ((TargetEntry *) lfirst(tlist))->resdom->restype;\n> + \t\t\tif (IsBinaryCompatible(restype, rettype))\n> + \t\t\t\treturn;\n> + \t\t}\n> \t}\n> + \telse if (fn_typtype == 'c')\n> + \t{\n> + \t\t/*\n> + \t\t * By here, the procedure returns a tuple or set of tuples. This part\n> + \t\t * of the typechecking is a hack. We look up the relation that is the\n> + \t\t * declared return type, and scan the non-deleted attributes to ensure\n> + \t\t * that they match the datatypes of the non-resjunk columns.\n> + \t\t */\n> + \t\treln = heap_open(typerelid, AccessShareLock);\n> + \t\trelnatts = reln->rd_rel->relnatts;\n> + \t\trellogcols = 0;\t\t\t\t/* we'll count nondeleted cols as we go */\n> + \t\tcolindex = 0;\n> \n> ! \t\tforeach(tlistitem, tlist)\n> ! \t\t{\n> ! \t\t\tTargetEntry *tle = (TargetEntry *) lfirst(tlistitem);\n> ! \t\t\tForm_pg_attribute attr;\n> ! \t\t\tOid\t\t\ttletype;\n> ! \t\t\tOid\t\t\tatttype;\n> ! 
\n> ! \t\t\tif (tle->resdom->resjunk)\n> ! \t\t\t\tcontinue;\n> ! \n> ! \t\t\tdo {\n> ! \t\t\t\tcolindex++;\n> ! \t\t\t\tif (colindex > relnatts)\n> ! \t\t\t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t\t\t\t format_type_be(rettype), rellogcols);\n> ! \t\t\t\tattr = reln->rd_att->attrs[colindex - 1];\n> ! \t\t\t} while (attr->attisdropped);\n> ! \t\t\trellogcols++;\n> \n> ! \t\t\ttletype = exprType(tle->expr);\n> ! \t\t\tatttype = attr->atttypid;\n> ! \t\t\tif (!IsBinaryCompatible(tletype, atttype))\n> ! \t\t\t\telog(ERROR, \"function declared to return %s returns %s instead of %s at column %d\",\n> ! \t\t\t\t\t format_type_be(rettype),\n> ! \t\t\t\t\t format_type_be(tletype),\n> ! \t\t\t\t\t format_type_be(atttype),\n> ! \t\t\t\t\t rellogcols);\n> ! \t\t}\n> \n> ! \t\tfor (;;)\n> ! \t\t{\n> \t\t\tcolindex++;\n> \t\t\tif (colindex > relnatts)\n> ! \t\t\t\tbreak;\n> ! \t\t\tif (!reln->rd_att->attrs[colindex - 1]->attisdropped)\n> ! \t\t\t\trellogcols++;\n> ! \t\t}\n> \n> ! \t\tif (tlistlen != rellogcols)\n> ! \t\t\telog(ERROR, \"function declared to return %s does not SELECT the right number of columns (%d)\",\n> ! \t\t\t\t format_type_be(rettype), rellogcols);\n> \n> ! \t\theap_close(reln, AccessShareLock);\n> ! \n> ! \t\treturn;\n> ! \t}\n> ! \telse if (fn_typtype == 'p' && rettype == RECORDOID)\n> ! \t{\n> ! \t\t/*\n> ! \t\t * For RECORD return type, defer this check until we get the\n> ! \t\t * first tuple.\n> ! \t\t */\n> ! \t\treturn;\n> ! \t}\n> ! \telse\n> ! 
\t\telog(ERROR, \"Unknown kind of return type specified for function\");\n> }\n> \n> \n> ***************\n> *** 553,558 ****\n> --- 573,579 ----\n> \tbool\t\tisnull;\n> \tDatum\t\ttmp;\n> \tchar\t *prosrc;\n> + \tchar\t\tfunctyptype;\n> \n> \ttuple = SearchSysCache(PROCOID, funcoid, 0, 0, 0);\n> \tif (!HeapTupleIsValid(tuple))\n> ***************\n> *** 569,576 ****\n> \n> \tprosrc = DatumGetCString(DirectFunctionCall1(textout, tmp));\n> \n> \tquerytree_list = pg_parse_and_rewrite(prosrc, proc->proargtypes, proc->pronargs);\n> ! \tcheckretval(proc->prorettype, querytree_list);\n> \n> \tReleaseSysCache(tuple);\n> \tPG_RETURN_BOOL(true);\n> --- 590,600 ----\n> \n> \tprosrc = DatumGetCString(DirectFunctionCall1(textout, tmp));\n> \n> + \t/* check typtype to see if we have a predetermined return type */\n> + \tfunctyptype = typeid_get_typtype(proc->prorettype);\n> + \n> \tquerytree_list = pg_parse_and_rewrite(prosrc, proc->proargtypes, proc->pronargs);\n> ! \tcheckretval(proc->prorettype, functyptype, querytree_list);\n> \n> \tReleaseSysCache(tuple);\n> \tPG_RETURN_BOOL(true);\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Aug 2002 15:48:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Woh, seems like there is a compile problem. Joe, my guess is that the\npg_proc patch you send me today didn't have _all_ of the needed changes.\n\nI am attaching a patch with the fix need to get it to compile, but it is\nclearly wrong. Would you submit a fix based on current CVS for this? 
I\nknow this gets confusing when patches conflict.\n\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> \n> [ New version of pg_proc.c used for application.]\n> \n> Patch applied. Thanks. initdb forced.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/catalog/pg_proc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/catalog/pg_proc.c,v\nretrieving revision 1.83\ndiff -c -r1.83 pg_proc.c\n*** src/backend/catalog/pg_proc.c\t4 Aug 2002 19:48:09 -0000\t1.83\n--- src/backend/catalog/pg_proc.c\t4 Aug 2002 19:53:42 -0000\n***************\n*** 318,324 ****\n * type he claims.\n */\n static void\n! checkretval(Oid rettype, List *queryTreeList)\n {\n \tQuery\t *parse;\n \tint\t\t\tcmd;\n--- 318,324 ----\n * type he claims.\n */\n static void\n! checkretval(Oid rettype, char fn_typtype /* XXX FIX ME */, List *queryTreeList)\n {\n \tQuery\t *parse;\n \tint\t\t\tcmd;", "msg_date": "Sun, 4 Aug 2002 15:55:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "Woh, again, I have another compile error:\n\t\n\tequalfuncs.c: In function `_equalRangeVar':\n\tequalfuncs.c:1610: structure has no member named `coldeflist'\n\tequalfuncs.c:1610: structure has no member named `coldeflist'\n\tgmake[2]: *** [equalfuncs.o] Error 1\n\nAgain, a patch is attached that clearly needs fixing. I should have\ntested this more, but I am heading out now and wanted to get it in\nbefore more code drifted.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> \n> Woh, seems like there is a compile problem. 
Joe, my guess is that the\n> pg_proc patch you send me today didn't have _all_ of the needed changes.\n> \n> I am attaching a patch with the fix need to get it to compile, but it is\n> clearly wrong. Would you submit a fix based on current CVS for this? I\n> know this gets confusing when patches conflict.\n> \n> \n> ---------------------------------------------------------------------------\n> \n> Bruce Momjian wrote:\n> > \n> > [ New version of pg_proc.c used for application.]\n> > \n> > Patch applied. Thanks. initdb forced.\n> > \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/catalog/pg_proc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/catalog/pg_proc.c,v\nretrieving revision 1.83\ndiff -c -r1.83 pg_proc.c\n*** src/backend/catalog/pg_proc.c\t4 Aug 2002 19:48:09 -0000\t1.83\n--- src/backend/catalog/pg_proc.c\t4 Aug 2002 19:58:03 -0000\n***************\n*** 318,324 ****\n * type he claims.\n */\n static void\n! checkretval(Oid rettype, List *queryTreeList)\n {\n \tQuery\t *parse;\n \tint\t\t\tcmd;\n--- 318,324 ----\n * type he claims.\n */\n static void\n! checkretval(Oid rettype, char fn_typtype /* XXX FIX ME */, List *queryTreeList)\n {\n \tQuery\t *parse;\n \tint\t\t\tcmd;\nIndex: src/backend/nodes/equalfuncs.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/nodes/equalfuncs.c,v\nretrieving revision 1.147\ndiff -c -r1.147 equalfuncs.c\n*** src/backend/nodes/equalfuncs.c\t4 Aug 2002 19:48:09 -0000\t1.147\n--- src/backend/nodes/equalfuncs.c\t4 Aug 2002 19:58:03 -0000\n***************\n*** 1607,1615 ****\n \t\treturn false;\n \tif (!equal(a->alias, b->alias))\n \t\treturn false;\n \tif (!equal(a->coldeflist, b->coldeflist))\n \t\treturn false;\n! 
\n \treturn true;\n }\n \n--- 1607,1616 ----\n \t\treturn false;\n \tif (!equal(a->alias, b->alias))\n \t\treturn false;\n+ /* FIX ME XXX\n \tif (!equal(a->coldeflist, b->coldeflist))\n \t\treturn false;\n! */\n \treturn true;\n }", "msg_date": "Sun, 4 Aug 2002 15:59:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Again, a patch is attached that clearly needs fixing. I should have\n> tested this more, but I am heading out now and wanted to get it in\n> before more code drifted.\n\nWould you mind backing it out, instead? I've got major merge problems\nnow because I wasn't expecting that to be applied yet (it's still\ncompletely unreviewed AFAIK).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Aug 2002 17:08:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs) " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n>>Again, a patch is attached that clearly needs fixing. I should have\n>>tested this more, but I am heading out now and wanted to get it in\n>>before more code drifted.\n> \n> \n> Would you mind backing it out, instead? I've got major merge problems\n> now because I wasn't expecting that to be applied yet (it's still\n> completely unreviewed AFAIK).\n\nI just got back -- sorry about the problem. By all means, back it out \nand I'll work up a new patch based on current cvs.\n\nJoe\n\n\n", "msg_date": "Sun, 04 Aug 2002 14:45:29 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Again, a patch is attached that clearly needs fixing. 
I should have\n> > tested this more, but I am heading out now and wanted to get it in\n> > before more code drifted.\n> \n> Would you mind backing it out, instead? I've got major merge problems\n> now because I wasn't expecting that to be applied yet (it's still\n> completely unreviewed AFAIK).\n\nI can back it out, but it may cause the same problems when I try to\napply it later. It was submitted July 28, added to the patch queue\nAugust 1, and applied today, August 4. I don't remember anyone saying\nthey wanted to review it. It is an extension to an earlier patch.\n\nIf we back it out and delay it more, won't the patch become even harder\nto apply? Let me know. (It was actually your DROP COLUMN commit that\nmade it hard to apply yesterday.)\n\nOne trick I use for patch problems is this: If a patch doesn't apply,\nand it is too hard to manually merge, I generate a diff via CVS of the\nlast change to the file, save it, reverse out that diff, then apply the\nrejected part of the new patch, _then_ apply the CVS diff I generated. \nIn many cases, the rejections of that last CVS patch _will_ be able to\nbe manually applied, i.e. if a patch changes 100 lines, but a previous\npatch changed 3 lines, you can back out the 3-line change, apply the\n100-line change, then manually patch in the 3-line change based on the\nnew contents of the file.\n\nLet me know what you want me to do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Aug 2002 20:26:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I can back it out, but it may cause the same problems when I try to\n> apply it later. 
It was submitted July 28, added to the patch queue\n> August 1, and applied today, August 4. I don't remember anyone saying\n> they wanted to review it. It is an extension to an earlier patch.\n\nLet me just put down a marker now: anything that large I want to review.\n\nI'm currently trying to deal with the fact that the patch broke\nfunctions that return domain types. For example:\n\nregression=# create function foo() returns setof int as\nregression-# 'select unique1 from tenk1 limit 10' language sql;\nCREATE FUNCTION\nregression=# create domain myint as int;\nCREATE DOMAIN\nregression=# create function myfoo() returns setof myint as\nregression-# 'select unique1 from tenk1 limit 10' language sql;\nERROR: Unknown kind of return type specified for function\n\nThe alias hacking seems to have a few holes as well:\n\nregression=# select * from foo();\n foo\n------\n 8800\n 1891\n 3420\n ...\n\n(fine)\n\nregression=# select * from foo() as z;\n foo\n------\n 8800\n 1891\n 3420\n 9850\n ...\n\n(hm, what happened to the alias?)\n\nregression=# select * from foo() as z(a int);\n foo\n------\n 8800\n 1891\n 3420\n 9850\n 7164\n ...\n\n(again, what happened to the alias? Column name should be a)\n\nregression=# select * from foo() as z(a int8);\n foo\n------\n 8800\n 1891\n 3420\n 9850\n\n(definitely not cool)\n\n> If we back it out and delay it more, won't the patch become even harder\n> to apply? Let me know.\n\nRight now the last thing we need is more CVS churn. Let's see if it can\nbe fixed. Joe, you need to reconsider the relationship between alias\nclauses and this feature, and also reconsider checking of the types.\n\n> One trick I use for patch problems is this: If a patch doesn't apply,\n> and it is too hard to manually merge, I generate a diff via CVS of the\n> last change to the file, save it, reverse out that diff, then apply the\n> rejected part of the new patch, _then_ apply the CVS diff I\n> generated. 
\n\nIt'd be kind of nice if you actually tested the result. Once again\nyou've applied a large patch without bothering to see if it compiled,\nlet alone passed regression.\n\nYes, I'm feeling annoyed this evening...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Aug 2002 20:41:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs) " }, { "msg_contents": "Tom Lane wrote:\n> Right now the last thing we need is more CVS churn. Let's see if it can\n> be fixed. Joe, you need to reconsider the relationship between alias\n> clauses and this feature, and also reconsider checking of the types.\n\nOK, I'm on it now. Sorry I missed those issues. I guess my own testing \nwas too myopic :(\n\nJoe\n\n", "msg_date": "Sun, 04 Aug 2002 17:47:51 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Tom Lane wrote:\n> It'd be kind of nice if you actually tested the result. Once again\n> you've applied a large patch without bothering to see if it compiled,\n> let alone passed regression.\n> \n> Yes, I'm feeling annoyed this evening...\n\nYes, I hear you. I don't normally work on Sunday afternoon, especially\nwhen I am heading out, but Joe's patch had sat too long already that the\ncode had drifted so I thought I would get it in ASAP.\n\nSorry it caused you problems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Aug 2002 21:10:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> OK, I'm on it now. 
Sorry I missed those issues. I guess my own testing \n> was too myopic :(\n\nOkay. I have patches to fix the domain-type issues, and will commit\nas soon as I've finished testing 'em.\n\nI would suggest that either gram.y or someplace early in the analyzer\nshould transpose the names from the coldeflist into the \"user specified\nalias\" structure. That should fix the alias naming issues. The other\nissues indicate that if a coldeflist is provided, you should check it\nagainst the function return type in all cases not only RECORD. In the\nnon-RECORD cases it could be done in the parse analysis phase.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Aug 2002 22:05:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs) " }, { "msg_contents": "Tom Lane wrote:\n> Okay. I have patches to fix the domain-type issues, and will commit\n> as soon as I've finished testing 'em.\n\nThanks.\n\n> \n> I would suggest that either gram.y or someplace early in the analyzer\n> should transpose the names from the coldeflist into the \"user specified\n> alias\" structure. That should fix the alias naming issues. The other\n> issues indicate that if a coldeflist is provided, you should check it\n> against the function return type in all cases not only RECORD. In the\n> non-RECORD cases it could be done in the parse analysis phase.\n\nActually, I was just looking at this and remembering that I wanted to \ndisallow a coldeflist for non-RECORD return types. 
Do you prefer to \nallow it (but properly apply the alias and enforce the type)?\n\nJoe\n\n\n\n", "msg_date": "Sun, 04 Aug 2002 19:10:20 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Actually, I was just looking at this and remembering that I wanted to \n> disallow a coldeflist for non-RECORD return types. Do you prefer to \n> allow it (but properly apply the alias and enforce the type)?\n\nWe could do that too; what I was unhappy about was that the system\ntook the syntax and then didn't apply the type checking that the syntax\nseems to imply. I'd prefer to do the type checking ... but I don't\nwant to expend a heckuva lot of code on it, so maybe erroring out\nis the better answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Aug 2002 22:58:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs) " }, { "msg_contents": "The attached patch disallows the use of coldeflists for functions that \ndon't return type RECORD. It also catches a core dump condition when a \nfunction returning RECORD had an alias list instead of a coldeflist.\n\nNow both conditions throw an ERROR.\n\nSample output below:\n\nTom Lane wrote:\n> regression=# select * from foo() as z;\n> foo\n> ------\n> 8800\n> 1891\n> 3420\n> 9850\n> ...\n> \n> (hm, what happened to the alias?)\n\nActually nothing wrong with this one. The z is the relation alias, not \nthe column alias. The column alias defaults to the function name for \nSRFs returning scalar. If you try:\n\ntest=# select myfoo1.* from myfoo1() as z;\nERROR: Relation \"myfoo1\" does not exist\n\nwhich is as expected.\n\n> \n> regression=# select * from foo() as z(a int);\n> foo\n> ------\n> 8800\n> 1891\n> 3420\n> 9850\n> 7164\n> ...\n> \n> (again, what happened to the alias? 
Column name should be a)\n\nThis one now throws an error:\ntest=# select * from myfoo1() as z(a int);\nERROR: A column definition list is only allowed for functions returning \nRECORD\n\n\n> \n> regression=# select * from foo() as z(a int8);\n> foo\n> ------\n> 8800\n> 1891\n> 3420\n> 9850\n> \n> (definitely not cool)\n\nSame here.\n\nOther change is like so:\ntest=# create function myfoo2() returns setof record as 'select * from \nct limit 10' language sql;\n\ntest=# select * from myfoo2() as z(a);\nERROR: A column definition list is required for functions returning RECORD\ntest=# select * from myfoo2();\nERROR: A column definition list is required for functions returning RECORD\ntest=# select * from myfoo2() as (a int, b text, c text, d text, e text);\n a | b | c | d | e\n----+--------+-------+------+------\n 1 | group1 | test1 | att1 | val1\n 2 | group1 | test1 | att2 | val2\n 3 | group1 | test1 | att3 | val3\n 4 | group1 | test1 | att4 | val4\n 5 | group1 | test2 | att1 | val5\n 6 | group1 | test2 | att2 | val6\n 7 | group1 | test2 | att3 | val7\n 8 | group1 | test2 | att4 | val8\n 9 | group2 | test3 | att1 | val1\n 10 | group2 | test3 | att2 | val2\n(10 rows)\n\ntest=# select * from myfoo2() as (a int8, b text, c text, d text, e text);\nERROR: Query-specified return tuple and actual function return tuple do \nnot match\n\n\nPlease apply if no objections.\n\nThanks,\n\nJoe", "msg_date": "Sun, 04 Aug 2002 21:00:31 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> regression=# select * from foo() as z;\n>> foo\n>> ------\n>> 8800\n>> ...\n>> \n>> (hm, what happened to the alias?)\n\n> Actually nothing wrong with this one. The z is the relation alias, not \n> the column alias. The column alias defaults to the function name for \n> SRFs returning scalar.\n\nHm. 
I'd sort of expect the \"z\" to become both the table and column\nalias in this case. What do you think?\n\nOther examples look good. Code style comment:\n\n> + \t\tif (functyptype != 'p' || (functyptype == 'p' && funcrettype != RECORDOID))\n\nThis test seems redundant, why not\n\n\t\tif (functyptype != 'p' || funcrettype != RECORDOID)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Aug 2002 00:08:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs) " }, { "msg_contents": "> test=# select * from myfoo2() as (a int8, b text, c text, d text, e text);\n> ERROR: Query-specified return tuple and actual function return tuple do\n> not match\n\nI wonder if that would read a little better (and perhaps be in the active\nvoice) if it was like below. The word 'actual' seems a little casual, and\ndo people really know what tuples are?\n\nERROR: Query-specified function result alias does not match defined function\nresult type.\n\nChris\n\n", "msg_date": "Mon, 5 Aug 2002 12:09:49 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Tom Lane wrote:\n> Hm. I'd sort of expect the \"z\" to become both the table and column\n> alias in this case. What do you think?\n\nI guess that would make sense. I'll make a separate patch just for that \nchange if that's OK.\n\n\n> Other examples look good. Code style comment:\n> \n> \n>>+ \t\tif (functyptype != 'p' || (functyptype == 'p' && funcrettype != RECORDOID))\n> \n> \n> This test seems redundant, why not\n> \n> \t\tif (functyptype != 'p' || funcrettype != RECORDOID)\n> \n\nYou're correct, of course. 
But I started out with a much more convoluted \ntest than that, so at least this was an improvement ;-)\n\nJoe\n\n\n", "msg_date": "Sun, 04 Aug 2002 21:19:22 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Joe Conway wrote:\n> Tom Lane wrote:\n>> Hm. I'd sort of expect the \"z\" to become both the table and column\n>> alias in this case. What do you think?\n> \n> I guess that would make sense. I'll make a separate patch just for that \n> change if that's OK.\n> \n\nSimple change -- patch attached.\n\ntest=# select * from myfoo1() as z;\n z\n----\n 1\n 2\n 3\n(3 rows)\n\ntest=# select * from myfoo1();\n myfoo1\n--------\n 1\n 2\n 3\n(3 rows)\n\n\ntest=# select * from myfoo1() as z(a);\n a\n----\n 1\n 2\n 3\n(3 rows)\n\n\nJoe", "msg_date": "Sun, 04 Aug 2002 22:57:26 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Joe Conway wrote:\n> Simple change -- patch attached.\n\nOf course, the simple change has ripple effects! Here's a patch for the \nrangefunc regression test for the new behavior.\n\nJoe", "msg_date": "Sun, 04 Aug 2002 23:19:37 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> Hm. I'd sort of expect the \"z\" to become both the table and column\n>> alias in this case. What do you think?\n\n> I guess that would make sense. I'll make a separate patch just for that \n> change if that's OK.\n\nIn the cold light of morning I started to wonder what should happen if\nyou write \"from foo() as z\" when foo returns a tuple. 
It would probably\nbe peculiar for the z to overwrite the column name of just the first\ncolumn --- there is no such column renaming for an ordinary table alias.\n\nMy current thought: z becomes the table alias, and it also becomes the\ncolumn alias *if* the function returns scalar. For a function returning\ntuple, this syntax doesn't affect the column names. (In any case this\nsyntax is disallowed for functions returning RECORD.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Aug 2002 09:37:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs) " }, { "msg_contents": "Tom Lane wrote:\n> In the cold light of morning I started to wonder what should happen if\n> you write \"from foo() as z\" when foo returns a tuple. It would probably\n> be peculiar for the z to overwrite the column name of just the first\n> column --- there is no such column renaming for an ordinary table alias.\n> \n> My current thought: z becomes the table alias, and it also becomes the\n> column alias *if* the function returns scalar. For a function returning\n> tuple, this syntax doesn't affect the column names. 
(In any case this\n> syntax is disallowed for functions returning RECORD.)\n\nI think the one liner patch I sent in last night does exactly what you \ndescribe -- so I guess we're in complete agreement ;-)\n\nSee below:\n\ntest=# \\d foo\n Table \"foo\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | bytea |\n f2 | integer |\nIndexes: foo_idx1 btree (f1)\n\ntest=# create function foo1() returns setof int as 'select f2 from foo' \nlanguage sql;\nCREATE FUNCTION\ntest=# create function foo2() returns setof foo as 'select * from foo' \nlanguage sql;\nCREATE FUNCTION\ntest=# select * from foo1() as z where z.z = 1;\n z\n---\n 1\n(1 row)\n\ntest=# select * from foo1() as z(a) where z.a = 1;\n a\n---\n 1\n(1 row)\n\ntest=# select * from foo2() as z where z.f2 = 1;\n f1 | f2\n------------------------+----\n \\237M@y[J\\272z\\304\\003 | 1\n(1 row)\n\ntest=# select * from foo2() as z(a) where z.f2 = 1;\n a | f2\n------------------------+----\n \\237M@y[J\\272z\\304\\003 | 1\n(1 row)\n\ntest=# create function foo3() returns setof record as 'select * from \nfoo' language sql;\nCREATE FUNCTION\ntest=# select * from foo3() as z where z.f2 = 1;\nERROR: A column definition list is required for functions returning RECORD\ntest=# select * from foo3() as z(a bytea, b int) where z.f2 = 1;\nERROR: No such attribute z.f2\ntest=# select * from foo3() as z(a bytea, b int) where z.b = 1;\n a | b\n------------------------+---\n \\237M@y[J\\272z\\304\\003 | 1\n(1 row)\n\n\nJoe\n\n", "msg_date": "Mon, 05 Aug 2002 08:12:27 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "\nI assume this is the patch you and Tom now want applied.\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 
hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> The attached patch disallows the use of coldeflists for functions that \n> don't return type RECORD. It also catches a core dump condition when a \n> function returning RECORD had an alias list instead of a coldeflist.\n> \n> Now both conditions throw an ERROR.\n> \n> Sample output below:\n> \n> Tom Lane wrote:\n> > regression=# select * from foo() as z;\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > ...\n> > \n> > (hm, what happened to the alias?)\n> \n> Actually nothing wrong with this one. The z is the relation alias, not \n> the column alias. The column alias defaults to the function name for \n> SRFs returning scalar. If you try:\n> \n> test=# select myfoo1.* from myfoo1() as z;\n> ERROR: Relation \"myfoo1\" does not exist\n> \n> which is as expected.\n> \n> > \n> > regression=# select * from foo() as z(a int);\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > 7164\n> > ...\n> > \n> > (again, what happened to the alias? 
Column name should be a)\n> \n> This one now throws an error:\n> test=# select * from myfoo1() as z(a int);\n> ERROR: A column definition list is only allowed for functions returning \n> RECORD\n> \n> \n> > \n> > regression=# select * from foo() as z(a int8);\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > \n> > (definitely not cool)\n> \n> Same here.\n> \n> Other change is like so:\n> test=# create function myfoo2() returns setof record as 'select * from \n> ct limit 10' language sql;\n> \n> test=# select * from myfoo2() as z(a);\n> ERROR: A column definition list is required for functions returning RECORD\n> test=# select * from myfoo2();\n> ERROR: A column definition list is required for functions returning RECORD\n> test=# select * from myfoo2() as (a int, b text, c text, d text, e text);\n> a | b | c | d | e\n> ----+--------+-------+------+------\n> 1 | group1 | test1 | att1 | val1\n> 2 | group1 | test1 | att2 | val2\n> 3 | group1 | test1 | att3 | val3\n> 4 | group1 | test1 | att4 | val4\n> 5 | group1 | test2 | att1 | val5\n> 6 | group1 | test2 | att2 | val6\n> 7 | group1 | test2 | att3 | val7\n> 8 | group1 | test2 | att4 | val8\n> 9 | group2 | test3 | att1 | val1\n> 10 | group2 | test3 | att2 | val2\n> (10 rows)\n> \n> test=# select * from myfoo2() as (a int8, b text, c text, d text, e text);\n> ERROR: Query-specified return tuple and actual function return tuple do \n> not match\n> \n> \n> Please apply if no objections.\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/backend/parser/parse_relation.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/parser/parse_relation.c,v\n> retrieving revision 1.73\n> diff -c -r1.73 parse_relation.c\n> *** src/backend/parser/parse_relation.c\t5 Aug 2002 02:30:50 -0000\t1.73\n> --- src/backend/parser/parse_relation.c\t5 Aug 2002 03:16:42 -0000\n> ***************\n> *** 729,734 ****\n> --- 729,755 ----\n> \t */\n> \tfunctyptype = 
get_typtype(funcrettype);\n> \n> + \tif (coldeflist != NIL)\n> + \t{\n> + \t\t/*\n> + \t\t * we *only* allow a coldeflist for functions returning a\n> + \t\t * RECORD pseudo-type\n> + \t\t */\n> + \t\tif (functyptype != 'p' || (functyptype == 'p' && funcrettype != RECORDOID))\n> + \t\t\telog(ERROR, \"A column definition list is only allowed for\"\n> + \t\t\t\t\t\t\" functions returning RECORD\");\n> + \t}\n> + \telse\n> + \t{\n> + \t\t/*\n> + \t\t * ... and a coldeflist is *required* for functions returning a\n> + \t\t * RECORD pseudo-type\n> + \t\t */\n> + \t\tif (functyptype == 'p' && funcrettype == RECORDOID)\n> + \t\t\telog(ERROR, \"A column definition list is required for functions\"\n> + \t\t\t\t\t\t\" returning RECORD\");\n> + \t}\n> + \n> \tif (functyptype == 'c')\n> \t{\n> \t\t/*\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Aug 2002 12:23:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Joe Conway wrote:\n> > Tom Lane wrote:\n> >> Hm. I'd sort of expect the \"z\" to become both the table and column\n> >> alias in this case. What do you think?\n> > \n> > I guess that would make sense. 
I'll make a separate patch just for that \n> > change if that's OK.\n> > \n> \n> Simple change -- patch attached.\n> \n> test=# select * from myfoo1() as z;\n> z\n> ----\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> test=# select * from myfoo1();\n> myfoo1\n> --------\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> \n> test=# select * from myfoo1() as z(a);\n> a\n> ----\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> \n> Joe\n\n> Index: src/backend/parser/parse_relation.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/parser/parse_relation.c,v\n> retrieving revision 1.73\n> diff -c -r1.73 parse_relation.c\n> *** src/backend/parser/parse_relation.c\t5 Aug 2002 02:30:50 -0000\t1.73\n> --- src/backend/parser/parse_relation.c\t5 Aug 2002 05:22:02 -0000\n> ***************\n> *** 807,813 ****\n> \t\t\telog(ERROR, \"Too many column aliases specified for function %s\",\n> \t\t\t\t funcname);\n> \t\tif (numaliases == 0)\n> ! \t\t\teref->colnames = makeList1(makeString(funcname));\n> \t}\n> \telse if (functyptype == 'p' && funcrettype == RECORDOID)\n> \t{\n> --- 807,813 ----\n> \t\t\telog(ERROR, \"Too many column aliases specified for function %s\",\n> \t\t\t\t funcname);\n> \t\tif (numaliases == 0)\n> ! \t\t\teref->colnames = makeList1(makeString(eref->aliasname));\n> \t}\n> \telse if (functyptype == 'p' && funcrettype == RECORDOID)\n> \t{\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Aug 2002 12:25:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Joe Conway wrote:\n> > Simple change -- patch attached.\n> \n> Of course, the simple change has ripple effects! Here's a patch for the \n> rangefunc regression test for the new behavior.\n> \n> Joe\n\n> Index: src/test/regress/expected/rangefuncs.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/rangefuncs.out,v\n> retrieving revision 1.2\n> diff -c -r1.2 rangefuncs.out\n> *** src/test/regress/expected/rangefuncs.out\t16 Jul 2002 05:53:34 -0000\t1.2\n> --- src/test/regress/expected/rangefuncs.out\t5 Aug 2002 05:52:01 -0000\n> ***************\n> *** 48,56 ****\n> -- sql, proretset = f, prorettype = b\n> CREATE FUNCTION getfoo(int) RETURNS int AS 'SELECT $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! getfoo \n> ! --------\n> ! 1\n> (1 row)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> --- 48,56 ----\n> -- sql, proretset = f, prorettype = b\n> CREATE FUNCTION getfoo(int) RETURNS int AS 'SELECT $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! t1 \n> ! ----\n> ! 1\n> (1 row)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> ***************\n> *** 65,74 ****\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS setof int AS 'SELECT fooid FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! getfoo \n> ! --------\n> ! 1\n> ! 
1\n> (2 rows)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> --- 65,74 ----\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS setof int AS 'SELECT fooid FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! t1 \n> ! ----\n> ! 1\n> ! 1\n> (2 rows)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> ***************\n> *** 84,91 ****\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS setof text AS 'SELECT fooname FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! getfoo \n> ! --------\n> Joe\n> Ed\n> (2 rows)\n> --- 84,91 ----\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS setof text AS 'SELECT fooname FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! t1 \n> ! -----\n> Joe\n> Ed\n> (2 rows)\n> ***************\n> *** 139,147 ****\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS int AS 'DECLARE fooint int; BEGIN SELECT fooid into fooint FROM foo WHERE fooid = $1; RETURN fooint; END;' LANGUAGE 'plpgsql';\n> SELECT * FROM getfoo(1) AS t1;\n> ! getfoo \n> ! --------\n> ! 1\n> (1 row)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> --- 139,147 ----\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS int AS 'DECLARE fooint int; BEGIN SELECT fooid into fooint FROM foo WHERE fooid = $1; RETURN fooint; END;' LANGUAGE 'plpgsql';\n> SELECT * FROM getfoo(1) AS t1;\n> ! t1 \n> ! ----\n> ! 1\n> (1 row)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Aug 2002 12:25:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> The attached patch disallows the use of coldeflists for functions that \n> don't return type RECORD. It also catches a core dump condition when a \n> function returning RECORD had an alias list instead of a coldeflist.\n> \n> Now both conditions throw an ERROR.\n> \n> Sample output below:\n> \n> Tom Lane wrote:\n> > regression=# select * from foo() as z;\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > ...\n> > \n> > (hm, what happened to the alias?)\n> \n> Actually nothing wrong with this one. The z is the relation alias, not \n> the column alias. The column alias defaults to the function name for \n> SRFs returning scalar. If you try:\n> \n> test=# select myfoo1.* from myfoo1() as z;\n> ERROR: Relation \"myfoo1\" does not exist\n> \n> which is as expected.\n> \n> > \n> > regression=# select * from foo() as z(a int);\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > 7164\n> > ...\n> > \n> > (again, what happened to the alias? 
Column name should be a)\n> \n> This one now throws an error:\n> test=# select * from myfoo1() as z(a int);\n> ERROR: A column definition list is only allowed for functions returning \n> RECORD\n> \n> \n> > \n> > regression=# select * from foo() as z(a int8);\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > \n> > (definitely not cool)\n> \n> Same here.\n> \n> Other change is like so:\n> test=# create function myfoo2() returns setof record as 'select * from \n> ct limit 10' language sql;\n> \n> test=# select * from myfoo2() as z(a);\n> ERROR: A column definition list is required for functions returning RECORD\n> test=# select * from myfoo2();\n> ERROR: A column definition list is required for functions returning RECORD\n> test=# select * from myfoo2() as (a int, b text, c text, d text, e text);\n> a | b | c | d | e\n> ----+--------+-------+------+------\n> 1 | group1 | test1 | att1 | val1\n> 2 | group1 | test1 | att2 | val2\n> 3 | group1 | test1 | att3 | val3\n> 4 | group1 | test1 | att4 | val4\n> 5 | group1 | test2 | att1 | val5\n> 6 | group1 | test2 | att2 | val6\n> 7 | group1 | test2 | att3 | val7\n> 8 | group1 | test2 | att4 | val8\n> 9 | group2 | test3 | att1 | val1\n> 10 | group2 | test3 | att2 | val2\n> (10 rows)\n> \n> test=# select * from myfoo2() as (a int8, b text, c text, d text, e text);\n> ERROR: Query-specified return tuple and actual function return tuple do \n> not match\n> \n> \n> Please apply if no objections.\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/backend/parser/parse_relation.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/parser/parse_relation.c,v\n> retrieving revision 1.73\n> diff -c -r1.73 parse_relation.c\n> *** src/backend/parser/parse_relation.c\t5 Aug 2002 02:30:50 -0000\t1.73\n> --- src/backend/parser/parse_relation.c\t5 Aug 2002 03:16:42 -0000\n> ***************\n> *** 729,734 ****\n> --- 729,755 ----\n> \t */\n> \tfunctyptype = 
get_typtype(funcrettype);\n> \n> + \tif (coldeflist != NIL)\n> + \t{\n> + \t\t/*\n> + \t\t * we *only* allow a coldeflist for functions returning a\n> + \t\t * RECORD pseudo-type\n> + \t\t */\n> + \t\tif (functyptype != 'p' || (functyptype == 'p' && funcrettype != RECORDOID))\n> + \t\t\telog(ERROR, \"A column definition list is only allowed for\"\n> + \t\t\t\t\t\t\" functions returning RECORD\");\n> + \t}\n> + \telse\n> + \t{\n> + \t\t/*\n> + \t\t * ... and a coldeflist is *required* for functions returning a\n> + \t\t * RECORD pseudo-type\n> + \t\t */\n> + \t\tif (functyptype == 'p' && funcrettype == RECORDOID)\n> + \t\t\telog(ERROR, \"A column definition list is required for functions\"\n> + \t\t\t\t\t\t\" returning RECORD\");\n> + \t}\n> + \n> \tif (functyptype == 'c')\n> \t{\n> \t\t/*\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Aug 2002 12:28:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nJoe Conway wrote:\n> The attached patch disallows the use of coldeflists for functions that \n> don't return type RECORD. 
It also catches a core dump condition when a \n> function returning RECORD had an alias list instead of a coldeflist.\n> \n> Now both conditions throw an ERROR.\n> \n> Sample output below:\n> \n> Tom Lane wrote:\n> > regression=# select * from foo() as z;\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > ...\n> > \n> > (hm, what happened to the alias?)\n> \n> Actually nothing wrong with this one. The z is the relation alias, not \n> the column alias. The column alias defaults to the function name for \n> SRFs returning scalar. If you try:\n> \n> test=# select myfoo1.* from myfoo1() as z;\n> ERROR: Relation \"myfoo1\" does not exist\n> \n> which is as expected.\n> \n> > \n> > regression=# select * from foo() as z(a int);\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > 7164\n> > ...\n> > \n> > (again, what happened to the alias? Column name should be a)\n> \n> This one now throws an error:\n> test=# select * from myfoo1() as z(a int);\n> ERROR: A column definition list is only allowed for functions returning \n> RECORD\n> \n> \n> > \n> > regression=# select * from foo() as z(a int8);\n> > foo\n> > ------\n> > 8800\n> > 1891\n> > 3420\n> > 9850\n> > \n> > (definitely not cool)\n> \n> Same here.\n> \n> Other change is like so:\n> test=# create function myfoo2() returns setof record as 'select * from \n> ct limit 10' language sql;\n> \n> test=# select * from myfoo2() as z(a);\n> ERROR: A column definition list is required for functions returning RECORD\n> test=# select * from myfoo2();\n> ERROR: A column definition list is required for functions returning RECORD\n> test=# select * from myfoo2() as (a int, b text, c text, d text, e text);\n> a | b | c | d | e\n> ----+--------+-------+------+------\n> 1 | group1 | test1 | att1 | val1\n> 2 | group1 | test1 | att2 | val2\n> 3 | group1 | test1 | att3 | val3\n> 4 | group1 | test1 | att4 | val4\n> 5 | group1 | test2 | att1 | val5\n> 6 | group1 | test2 | att2 | val6\n> 7 | group1 | test2 | att3 
| val7\n> 8 | group1 | test2 | att4 | val8\n> 9 | group2 | test3 | att1 | val1\n> 10 | group2 | test3 | att2 | val2\n> (10 rows)\n> \n> test=# select * from myfoo2() as (a int8, b text, c text, d text, e text);\n> ERROR: Query-specified return tuple and actual function return tuple do \n> not match\n> \n> \n> Please apply if no objections.\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/backend/parser/parse_relation.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/parser/parse_relation.c,v\n> retrieving revision 1.73\n> diff -c -r1.73 parse_relation.c\n> *** src/backend/parser/parse_relation.c\t5 Aug 2002 02:30:50 -0000\t1.73\n> --- src/backend/parser/parse_relation.c\t5 Aug 2002 03:16:42 -0000\n> ***************\n> *** 729,734 ****\n> --- 729,755 ----\n> \t */\n> \tfunctyptype = get_typtype(funcrettype);\n> \n> + \tif (coldeflist != NIL)\n> + \t{\n> + \t\t/*\n> + \t\t * we *only* allow a coldeflist for functions returning a\n> + \t\t * RECORD pseudo-type\n> + \t\t */\n> + \t\tif (functyptype != 'p' || (functyptype == 'p' && funcrettype != RECORDOID))\n> + \t\t\telog(ERROR, \"A column definition list is only allowed for\"\n> + \t\t\t\t\t\t\" functions returning RECORD\");\n> + \t}\n> + \telse\n> + \t{\n> + \t\t/*\n> + \t\t * ... and a coldeflist is *required* for functions returning a\n> + \t\t * RECORD pseudo-type\n> + \t\t */\n> + \t\tif (functyptype == 'p' && funcrettype == RECORDOID)\n> + \t\t\telog(ERROR, \"A column definition list is required for functions\"\n> + \t\t\t\t\t\t\" returning RECORD\");\n> + \t}\n> + \n> \tif (functyptype == 'c')\n> \t{\n> \t\t/*\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Aug 2002 01:33:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka SRFs)" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Joe Conway wrote:\n> > Tom Lane wrote:\n> >> Hm. I'd sort of expect the \"z\" to become both the table and column\n> >> alias in this case. What do you think?\n> > \n> > I guess that would make sense. I'll make a separate patch just for that \n> > change if that's OK.\n> > \n> \n> Simple change -- patch attached.\n> \n> test=# select * from myfoo1() as z;\n> z\n> ----\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> test=# select * from myfoo1();\n> myfoo1\n> --------\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> \n> test=# select * from myfoo1() as z(a);\n> a\n> ----\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> \n> Joe\n\n> Index: src/backend/parser/parse_relation.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/parser/parse_relation.c,v\n> retrieving revision 1.73\n> diff -c -r1.73 parse_relation.c\n> *** src/backend/parser/parse_relation.c\t5 Aug 2002 02:30:50 -0000\t1.73\n> --- src/backend/parser/parse_relation.c\t5 Aug 2002 05:22:02 -0000\n> ***************\n> *** 807,813 ****\n> \t\t\telog(ERROR, \"Too many column aliases specified for function %s\",\n> \t\t\t\t funcname);\n> \t\tif (numaliases == 0)\n> ! \t\t\teref->colnames = makeList1(makeString(funcname));\n> \t}\n> \telse if (functyptype == 'p' && funcrettype == RECORDOID)\n> \t{\n> --- 807,813 ----\n> \t\t\telog(ERROR, \"Too many column aliases specified for function %s\",\n> \t\t\t\t funcname);\n> \t\tif (numaliases == 0)\n> ! 
\t\t\teref->colnames = makeList1(makeString(eref->aliasname));\n> \t}\n> \telse if (functyptype == 'p' && funcrettype == RECORDOID)\n> \t{\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Aug 2002 01:34:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nJoe Conway wrote:\n> Joe Conway wrote:\n> > Simple change -- patch attached.\n> \n> Of course, the simple change has ripple effects! Here's a patch for the \n> rangefunc regression test for the new behavior.\n> \n> Joe\n\n> Index: src/test/regress/expected/rangefuncs.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/rangefuncs.out,v\n> retrieving revision 1.2\n> diff -c -r1.2 rangefuncs.out\n> *** src/test/regress/expected/rangefuncs.out\t16 Jul 2002 05:53:34 -0000\t1.2\n> --- src/test/regress/expected/rangefuncs.out\t5 Aug 2002 05:52:01 -0000\n> ***************\n> *** 48,56 ****\n> -- sql, proretset = f, prorettype = b\n> CREATE FUNCTION getfoo(int) RETURNS int AS 'SELECT $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! getfoo \n> ! --------\n> ! 1\n> (1 row)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> --- 48,56 ----\n> -- sql, proretset = f, prorettype = b\n> CREATE FUNCTION getfoo(int) RETURNS int AS 'SELECT $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! t1 \n> ! ----\n> ! 
1\n> (1 row)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> ***************\n> *** 65,74 ****\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS setof int AS 'SELECT fooid FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! getfoo \n> ! --------\n> ! 1\n> ! 1\n> (2 rows)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> --- 65,74 ----\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS setof int AS 'SELECT fooid FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! t1 \n> ! ----\n> ! 1\n> ! 1\n> (2 rows)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> ***************\n> *** 84,91 ****\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS setof text AS 'SELECT fooname FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! getfoo \n> ! --------\n> Joe\n> Ed\n> (2 rows)\n> --- 84,91 ----\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS setof text AS 'SELECT fooname FROM foo WHERE fooid = $1;' LANGUAGE SQL;\n> SELECT * FROM getfoo(1) AS t1;\n> ! t1 \n> ! -----\n> Joe\n> Ed\n> (2 rows)\n> ***************\n> *** 139,147 ****\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS int AS 'DECLARE fooint int; BEGIN SELECT fooid into fooint FROM foo WHERE fooid = $1; RETURN fooint; END;' LANGUAGE 'plpgsql';\n> SELECT * FROM getfoo(1) AS t1;\n> ! getfoo \n> ! --------\n> ! 1\n> (1 row)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n> --- 139,147 ----\n> DROP FUNCTION getfoo(int);\n> CREATE FUNCTION getfoo(int) RETURNS int AS 'DECLARE fooint int; BEGIN SELECT fooid into fooint FROM foo WHERE fooid = $1; RETURN fooint; END;' LANGUAGE 'plpgsql';\n> SELECT * FROM getfoo(1) AS t1;\n> ! t1 \n> ! ----\n> ! 
1\n> (1 row)\n> \n> CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Aug 2002 01:34:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anonymous composite types for Table Functions (aka" } ]
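The validation rule in the parse_relation.c patch above — a column definition list is allowed if, and only if, the function returns the RECORD pseudo-type — boils down to a symmetric two-way check. A minimal standalone sketch of that rule (a hypothetical helper with simplified boolean inputs and invented return codes, not the actual backend code):

```c
#include <stdbool.h>

/* Sketch of the coldeflist rule from the patch above: a column
 * definition list is permitted exactly when the function returns the
 * RECORD pseudo-type. Returns 0 when the combination is legal, or a
 * negative code mirroring the two error messages otherwise.
 * (Hypothetical helper for illustration only.) */
int check_coldeflist(bool has_coldeflist, bool returns_record)
{
    if (has_coldeflist && !returns_record)
        return -1;  /* "only allowed for functions returning RECORD" */
    if (!has_coldeflist && returns_record)
        return -2;  /* "required for functions returning RECORD" */
    return 0;       /* legal: both present, or both absent */
}
```

Note that the patch's own test, `functyptype != 'p' || (functyptype == 'p' && funcrettype != RECORDOID)`, is logically the same as the single condition above once the redundant `functyptype == 'p'` conjunct is folded away.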
[ { "msg_contents": "This behavior doesn't look right:\n\nnconway=# create table foo (a int default 50, b int default 100);\nCREATE TABLE\nnconway=# copy foo from stdin;\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself.\n>> \n>> \\.\nnconway=# select * from foo;\n a | b \n---+---\n 0 | \n(1 row)\n\n(The first line of the COPY input is blank: i.e. just a newline)\n\nThe problem appears to be in both 7.2.1 and current CVS.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 24 Jul 2002 13:59:39 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "bug in COPY" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> This behavior doesn't look right:\n\nIt's not, but I believe the correct point of view is that the input\ndata is defective and should be rejected. See past discussions\nleading up to the TODO item that mentions rejecting COPY input rows\nwith the wrong number of fields (rather than silently filling with\nNULLs as we do now).\n\nA subsidiary point here is that pg_atoi() explicitly takes a zero-length\nstring as valid input of value 0. I think this is quite bogus myself,\nbut I don't know why that behavior was put in or whether we'd be breaking\nanything if we tightened it up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jul 2002 16:23:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY " }, { "msg_contents": "On Wed, Jul 24, 2002 at 04:23:56PM -0400, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > This behavior doesn't look right:\n> \n> It's not, but I believe the correct point of view is that the input\n> data is defective and should be rejected. 
See past discussions\n> leading up to the TODO item that mentions rejecting COPY input rows\n> with the wrong number of fields (rather than silently filling with\n> NULLs as we do now).\n\nYeah, I was thinking that too. Now that we have column lists in\nCOPY, there is no need to keep this functionality around: if the\nuser wants to load data that is missing a column, they can just\nomit the column from the column list and have the column default\ninserted (which is a lot more sensible than inserting NULL).\n\nUnfortunately, I think that removing this properly will require\nrefactoring some of the COPY code. I'll take a look at implementing\nthis...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 24 Jul 2002 18:32:00 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: bug in COPY" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n>> leading up to the TODO item that mentions rejecting COPY input rows\n>> with the wrong number of fields (rather than silently filling with\n>> NULLs as we do now).\n\n> Yeah, I was thinking that too. Now that we have column lists in\n> COPY, there is no need to keep this functionality around: if the\n> user wants to load data that is missing a column, they can just\n> omit the column from the column list and have the column default\n> inserted (which is a lot more sensible than inserting NULL).\n\nRight, that was the thinking exactly.\n\n> Unfortunately, I think that removing this properly will require\n> refactoring some of the COPY code. 
I'll take a look at implementing\n> this...\n\nI thought it could be hacked in pretty easily, but if you want to\nclean up the structure while you're in there, go for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Jul 2002 10:50:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY " }, { "msg_contents": "Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > This behavior doesn't look right:\n> \n> It's not, but I believe the correct point of view is that the input\n> data is defective and should be rejected. See past discussions\n> leading up to the TODO item that mentions rejecting COPY input rows\n> with the wrong number of fields (rather than silently filling with\n> NULLs as we do now).\n> \n> A subsidiary point here is that pg_atoi() explicitly takes a zero-length\n> string as valid input of value 0. I think this is quite bogus myself,\n> but I don't know why that behavior was put in or whether we'd be breaking\n> anything if we tightened it up.\n\nYea, I found the atoi zero-length behavior when I tried to clean up the\nuse of strtol() recently. If you remove that behavior, the regression\ntests fail.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 28 Jul 2002 22:45:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY" }, { "msg_contents": "Tom Lane wrote:\n> A subsidiary point here is that pg_atoi() explicitly takes a zero-length\n> string as valid input of value 0. 
I think this is quite bogus myself,\n> but I don't know why that behavior was put in or whether we'd be breaking\n> anything if we tightened it up.\n\nI have attached a patch the throws an error if pg_atoi() is passed a\nzero-length string, and have included regression diffs showing the\neffects of the patch.\n\nSeems the new code catches a few places that were bad, like defineing {}\nfor an array of 0,0. The copy2.out change is because pg_atoi catches\nthe problem before COPY does.\n\nThe tightening up of pg_atoi seems safe and makes sense to me.\n\nIf no adverse comments, I will apply and fix up the regression results.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/utils/adt/numutils.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/adt/numutils.c,v\nretrieving revision 1.51\ndiff -c -c -r1.51 numutils.c\n*** src/backend/utils/adt/numutils.c\t16 Jul 2002 18:34:16 -0000\t1.51\n--- src/backend/utils/adt/numutils.c\t27 Aug 2002 17:51:45 -0000\n***************\n*** 60,66 ****\n \tif (s == (char *) NULL)\n \t\telog(ERROR, \"pg_atoi: NULL pointer!\");\n \telse if (*s == 0)\n! \t\tl = (long) 0;\n \telse\n \t\tl = strtol(s, &badp, 10);\n \n--- 60,66 ----\n \tif (s == (char *) NULL)\n \t\telog(ERROR, \"pg_atoi: NULL pointer!\");\n \telse if (*s == 0)\n! 
\t\telog(ERROR, \"pg_atoi: zero-length string!\");\n \telse\n \t\tl = strtol(s, &badp, 10);\n \n\n*** ./expected/arrays.out\tThu Nov 29 16:02:41 2001\n--- ./results/arrays.out\tTue Aug 27 14:14:47 2002\n***************\n*** 16,21 ****\n--- 16,22 ----\n --\n INSERT INTO arrtest (a[5], b[2][1][2], c, d, f, g)\n VALUES ('{1,2,3,4,5}', '{{{},{1,2}}}', '{}', '{}', '{}', '{}');\n+ ERROR: pg_atoi: zero-length string!\n UPDATE arrtest SET e[0] = '1.1';\n UPDATE arrtest SET e[1] = '2.2';\n INSERT INTO arrtest (f)\n***************\n*** 29,39 ****\n VALUES ('{}', '{3,4}', '{foo,bar}', '{bar,foo}');\n SELECT * FROM arrtest;\n a | b | c | d | e | f | g \n! -------------+-----------------+-----------+---------------+-----------+-----------------+-------------\n! {1,2,3,4,5} | {{{0,0},{1,2}}} | {} | {} | | {} | {}\n {11,12,23} | {{3,4},{4,5}} | {foobar} | {{elt1,elt2}} | {3.4,6.7} | {\"abc \",abcde} | {abc,abcde}\n {} | {3,4} | {foo,bar} | {bar,foo} | | | \n! (3 rows)\n \n SELECT arrtest.a[1],\n arrtest.b[1][1][1],\n--- 30,39 ----\n VALUES ('{}', '{3,4}', '{foo,bar}', '{bar,foo}');\n SELECT * FROM arrtest;\n a | b | c | d | e | f | g \n! ------------+---------------+-----------+---------------+-----------+-----------------+-------------\n {11,12,23} | {{3,4},{4,5}} | {foobar} | {{elt1,elt2}} | {3.4,6.7} | {\"abc \",abcde} | {abc,abcde}\n {} | {3,4} | {foo,bar} | {bar,foo} | | | \n! (2 rows)\n \n SELECT arrtest.a[1],\n arrtest.b[1][1][1],\n***************\n*** 43,61 ****\n FROM arrtest;\n a | b | c | d | e \n ----+---+--------+------+---\n- 1 | 0 | | | \n 11 | | foobar | elt1 | \n | | foo | | \n! (3 rows)\n \n SELECT a[1], b[1][1][1], c[1], d[1][1], e[0]\n FROM arrtest;\n a | b | c | d | e \n ----+---+--------+------+---\n- 1 | 0 | | | \n 11 | | foobar | elt1 | \n | | foo | | \n! (3 rows)\n \n SELECT a[1:3],\n b[1:1][1:2][1:2],\n--- 43,59 ----\n FROM arrtest;\n a | b | c | d | e \n ----+---+--------+------+---\n 11 | | foobar | elt1 | \n | | foo | | \n! 
(2 rows)\n \n SELECT a[1], b[1][1][1], c[1], d[1][1], e[0]\n FROM arrtest;\n a | b | c | d | e \n ----+---+--------+------+---\n 11 | | foobar | elt1 | \n | | foo | | \n! (2 rows)\n \n SELECT a[1:3],\n b[1:1][1:2][1:2],\n***************\n*** 63,82 ****\n d[1:1][1:2]\n FROM arrtest;\n a | b | c | d \n! ------------+-----------------+-----------+---------------\n! {1,2,3} | {{{0,0},{1,2}}} | | \n {11,12,23} | | {foobar} | {{elt1,elt2}}\n | | {foo,bar} | \n! (3 rows)\n \n SELECT array_dims(a) AS a,array_dims(b) AS b,array_dims(c) AS c\n FROM arrtest;\n a | b | c \n! -------+-----------------+-------\n! [1:5] | [1:1][1:2][1:2] | \n [1:3] | [1:2][1:2] | [1:1]\n | [1:2] | [1:2]\n! (3 rows)\n \n -- returns nothing \n SELECT *\n--- 61,78 ----\n d[1:1][1:2]\n FROM arrtest;\n a | b | c | d \n! ------------+---+-----------+---------------\n {11,12,23} | | {foobar} | {{elt1,elt2}}\n | | {foo,bar} | \n! (2 rows)\n \n SELECT array_dims(a) AS a,array_dims(b) AS b,array_dims(c) AS c\n FROM arrtest;\n a | b | c \n! -------+------------+-------\n [1:3] | [1:2][1:2] | [1:1]\n | [1:2] | [1:2]\n! (2 rows)\n \n -- returns nothing \n SELECT *\n***************\n*** 99,109 ****\n WHERE array_dims(c) is not null;\n SELECT a,b,c FROM arrtest;\n a | b | c \n! ---------------+-----------------------+-------------------\n! {16,25,3,4,5} | {{{113,142},{1,147}}} | {}\n {} | {3,4} | {foo,new_word}\n {16,25,23} | {{3,4},{4,5}} | {foobar,new_word}\n! (3 rows)\n \n SELECT a[1:3],\n b[1:1][1:2][1:2],\n--- 95,104 ----\n WHERE array_dims(c) is not null;\n SELECT a,b,c FROM arrtest;\n a | b | c \n! ------------+---------------+-------------------\n {} | {3,4} | {foo,new_word}\n {16,25,23} | {{3,4},{4,5}} | {foobar,new_word}\n! (2 rows)\n \n SELECT a[1:3],\n b[1:1][1:2][1:2],\n***************\n*** 111,119 ****\n d[1:1][2:2]\n FROM arrtest;\n a | b | c | d \n! ------------+-----------------------+-------------------+----------\n! 
{16,25,3} | {{{113,142},{1,147}}} | | \n | | {foo,new_word} | \n {16,25,23} | | {foobar,new_word} | {{elt2}}\n! (3 rows)\n \n--- 106,113 ----\n d[1:1][2:2]\n FROM arrtest;\n a | b | c | d \n! ------------+---+-------------------+----------\n | | {foo,new_word} | \n {16,25,23} | | {foobar,new_word} | {{elt2}}\n! (2 rows)\n \n\n======================================================================\n\n*** ./expected/copy2.out\tWed Aug 21 22:25:28 2002\n--- ./results/copy2.out\tTue Aug 27 14:15:50 2002\n***************\n*** 34,40 ****\n ERROR: Attribute \"d\" specified more than once\n -- missing data: should fail\n COPY x from stdin;\n! ERROR: copy: line 1, Missing data for column \"b\"\n lost synchronization with server, resetting connection\n COPY x from stdin;\n ERROR: copy: line 1, Missing data for column \"e\"\n--- 34,40 ----\n ERROR: Attribute \"d\" specified more than once\n -- missing data: should fail\n COPY x from stdin;\n! ERROR: copy: line 1, pg_atoi: zero-length string!\n lost synchronization with server, resetting connection\n COPY x from stdin;\n ERROR: copy: line 1, Missing data for column \"e\"\n\n======================================================================", "msg_date": "Tue, 27 Aug 2002 14:46:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \telse if (*s == 0)\n> ! \t\telog(ERROR, \"pg_atoi: zero-length string!\");\n\nThe exclamation point seems inappropriate. Perhaps \"zero-length input\"\nwould be better than \"string\" also.\n\nOtherwise this seems like a reasonable thing to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Aug 2002 15:55:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \telse if (*s == 0)\n> > ! 
\t\telog(ERROR, \"pg_atoi: zero-length string!\");\n> \n> The exclamation point seems inappropriate. Perhaps \"zero-length input\"\n> would be better than \"string\" also.\n\nI copied the other test case:\n\n if (s == (char *) NULL)\n elog(ERROR, \"pg_atoi: NULL pointer!\"); \n\nI removed them both '!'.\n\n> Otherwise this seems like a reasonable thing to do.\n\nOK, will apply now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Aug 2002 15:58:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> The exclamation point seems inappropriate. Perhaps \"zero-length input\"\n>> would be better than \"string\" also.\n\n> I copied the other test case:\n\n> if (s == (char *) NULL)\n> elog(ERROR, \"pg_atoi: NULL pointer!\"); \n\nWell, the NULL-pointer test might equally well be coded as an Assert:\nit's to catch backend coding errors, not cases of incorrect user input.\nSo the exclamation point there didn't bother me.\n\n> I removed them both '!'.\n\nIf you like. But the two conditions are not comparable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Aug 2002 16:21:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> The exclamation point seems inappropriate. 
Perhaps \"zero-length input\"\n> >> would be better than \"string\" also.\n> \n> > I copied the other test case:\n> \n> > if (s == (char *) NULL)\n> > elog(ERROR, \"pg_atoi: NULL pointer!\"); \n> \n> Well, the NULL-pointer test might equally well be coded as an Assert:\n> it's to catch backend coding errors, not cases of incorrect user input.\n> So the exclamation point there didn't bother me.\n\n\nYou really think it should be Assert? I don't mind opening the chance\nto throw an error for bad input, but are you sure that a NULL should\nnever get there?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Aug 2002 16:24:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY" }, { "msg_contents": "Patch applied.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> The exclamation point seems inappropriate. Perhaps \"zero-length input\"\n> >> would be better than \"string\" also.\n> \n> > I copied the other test case:\n> \n> > if (s == (char *) NULL)\n> > elog(ERROR, \"pg_atoi: NULL pointer!\"); \n> \n> Well, the NULL-pointer test might equally well be coded as an Assert:\n> it's to catch backend coding errors, not cases of incorrect user input.\n> So the exclamation point there didn't bother me.\n> \n> > I removed them both '!'.\n> \n> If you like. But the two conditions are not comparable.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/utils/adt/numutils.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/adt/numutils.c,v\nretrieving revision 1.51\ndiff -c -c -r1.51 numutils.c\n*** src/backend/utils/adt/numutils.c\t16 Jul 2002 18:34:16 -0000\t1.51\n--- src/backend/utils/adt/numutils.c\t27 Aug 2002 20:28:41 -0000\n***************\n*** 58,66 ****\n \t */\n \n \tif (s == (char *) NULL)\n! \t\telog(ERROR, \"pg_atoi: NULL pointer!\");\n \telse if (*s == 0)\n! \t\tl = (long) 0;\n \telse\n \t\tl = strtol(s, &badp, 10);\n \n--- 58,66 ----\n \t */\n \n \tif (s == (char *) NULL)\n! \t\telog(ERROR, \"pg_atoi: NULL pointer\");\n \telse if (*s == 0)\n! \t\telog(ERROR, \"pg_atoi: zero-length string\");\n \telse\n \t\tl = strtol(s, &badp, 10);\n \nIndex: src/test/regress/expected/arrays.out\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/test/regress/expected/arrays.out,v\nretrieving revision 1.9\ndiff -c -c -r1.9 arrays.out\n*** src/test/regress/expected/arrays.out\t29 Nov 2001 21:02:41 -0000\t1.9\n--- src/test/regress/expected/arrays.out\t27 Aug 2002 20:28:45 -0000\n***************\n*** 15,21 ****\n -- 'e' is also a large object.\n --\n INSERT INTO arrtest (a[5], b[2][1][2], c, d, f, g)\n! VALUES ('{1,2,3,4,5}', '{{{},{1,2}}}', '{}', '{}', '{}', '{}');\n UPDATE arrtest SET e[0] = '1.1';\n UPDATE arrtest SET e[1] = '2.2';\n INSERT INTO arrtest (f)\n--- 15,21 ----\n -- 'e' is also a large object.\n --\n INSERT INTO arrtest (a[5], b[2][1][2], c, d, f, g)\n! 
VALUES ('{1,2,3,4,5}', '{{{0,0},{1,2}}}', '{}', '{}', '{}', '{}');\n UPDATE arrtest SET e[0] = '1.1';\n UPDATE arrtest SET e[1] = '2.2';\n INSERT INTO arrtest (f)\nIndex: src/test/regress/expected/copy2.out\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/test/regress/expected/copy2.out,v\nretrieving revision 1.6\ndiff -c -c -r1.6 copy2.out\n*** src/test/regress/expected/copy2.out\t22 Aug 2002 00:01:51 -0000\t1.6\n--- src/test/regress/expected/copy2.out\t27 Aug 2002 20:28:45 -0000\n***************\n*** 34,40 ****\n ERROR: Attribute \"d\" specified more than once\n -- missing data: should fail\n COPY x from stdin;\n! ERROR: copy: line 1, Missing data for column \"b\"\n lost synchronization with server, resetting connection\n COPY x from stdin;\n ERROR: copy: line 1, Missing data for column \"e\"\n--- 34,40 ----\n ERROR: Attribute \"d\" specified more than once\n -- missing data: should fail\n COPY x from stdin;\n! ERROR: copy: line 1, pg_atoi: zero-length string\n lost synchronization with server, resetting connection\n COPY x from stdin;\n ERROR: copy: line 1, Missing data for column \"e\"\nIndex: src/test/regress/sql/arrays.sql\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/test/regress/sql/arrays.sql,v\nretrieving revision 1.8\ndiff -c -c -r1.8 arrays.sql\n*** src/test/regress/sql/arrays.sql\t21 May 2001 16:54:46 -0000\t1.8\n--- src/test/regress/sql/arrays.sql\t27 Aug 2002 20:28:45 -0000\n***************\n*** 18,24 ****\n --\n \n INSERT INTO arrtest (a[5], b[2][1][2], c, d, f, g)\n! VALUES ('{1,2,3,4,5}', '{{{},{1,2}}}', '{}', '{}', '{}', '{}');\n \n UPDATE arrtest SET e[0] = '1.1';\n \n--- 18,24 ----\n --\n \n INSERT INTO arrtest (a[5], b[2][1][2], c, d, f, g)\n! 
VALUES ('{1,2,3,4,5}', '{{{0,0},{1,2}}}', '{}', '{}', '{}', '{}');\n \n UPDATE arrtest SET e[0] = '1.1';", "msg_date": "Tue, 27 Aug 2002 16:30:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Well, the NULL-pointer test might equally well be coded as an Assert:\n\n> You really think it should be Assert?\n\nI don't feel a need to change it, no. I was just pointing out that\nthere shouldn't be any way for a user to cause that condition to occur.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Aug 2002 16:33:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in COPY " } ]
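The pg_atoi tightening applied above can be illustrated in isolation. The sketch below is a hypothetical strict wrapper around strtol in the same spirit: NULL and zero-length inputs are rejected instead of silently becoming 0, and trailing garbage and out-of-range values are rejected as well. The name and error convention are invented for illustration; this is not the backend's pg_atoi.

```c
#include <errno.h>
#include <limits.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical strict 32-bit parser: returns 0 and stores the value on
 * success, -1 on any invalid input. Unlike the old pg_atoi behavior
 * discussed above, a zero-length string is an error, not zero. */
int strict_atoi32(const char *s, long *result)
{
    char *badp;
    long  l;

    if (s == NULL)
        return -1;              /* NULL pointer: caller bug */
    if (*s == '\0')
        return -1;              /* zero-length string: reject */

    errno = 0;
    l = strtol(s, &badp, 10);
    if (errno == ERANGE || l < INT_MIN || l > INT_MAX)
        return -1;              /* out of 32-bit range */
    if (*badp != '\0')
        return -1;              /* trailing non-numeric characters */

    *result = l;
    return 0;
}
```

With this convention, the blank COPY line at the start of the thread would fail cleanly at the parse step rather than inserting 0.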
[ { "msg_contents": "I see this using the latest CVS code:\n\n$ make installcheck\n[ all tests pass ]\n$ pg_dump regression > /tmp/regression.dump\n$ psql template1\ntemplate1=# drop database regression; \nDROP DATABASE\ntemplate1=# create database regression;\nCREATE DATABASE\ntemplate1=# \\c regression\nYou are now connected to database regression.\nregression=# \\i /tmp/peter-dump\n\nThe restoration of the dump produces a number of errors, including\nthe following: (I've only included the first few)\n\n395: WARNING: ProcedureCreate: type widget is not yet defined\n411: ERROR: parser: parse error at or near \",\"\n467: ERROR: parser: parse error at or near \",\"\n475: ERROR: parser: parse error at or near \",\"\n483: ERROR: parser: parse error at or near \",\"\n\nThe corresponding lines from the dump file are:\n\nCREATE FUNCTION \"widget_in\"(opaque) RETURNS widget AS\n'/home/nconway/pgsql/src/test/regress/regress.so', 'widget_in' LANGUAGE\n\"c\";\n\nCREATE TYPE \"widget\" ( internallength = 24, input = widget_in, output =\nwidget_out, , alignment = double, storage = plain);\n\nCREATE TYPE \"city_budget\" ( internallength = 16, input = int44in, output\n= int44out, , element = integer, delimiter = ',', alignment = int4,\nstorage = plain);\n\nCREATE TYPE \"int42\" ( internallength = 4, input = int4in, output =\nint4out, , default = '42', alignment = int4, storage = plain,\npassedbyvalue);\n\nCREATE TYPE \"text_w_default\" ( internallength = variable, input =\ntextin, output = textout, , default = 'zippo', alignment = int4, storage\n= plain);\n\nThe problem appears to be the spurious comma in the dumped CREATE TYPE\nstatement, and I think it was introduced by Peter E.'s recent checkin.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 24 Jul 2002 18:52:44 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "regression in CVS HEAD" }, { "msg_contents": "Neil Conway
writes:\n\n> The problem appears to be the spurious comma in the dumped CREATE TYPE\n> statement, and I think it was introduced by Peter E.'s recent checkin.\n\nFixed. Thanks.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 25 Jul 2002 22:56:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: regression in CVS HEAD" } ]
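The stray ", ," in those dumped CREATE TYPE statements is the classic symptom of emitting a separator for a clause that was never actually written. A hypothetical sketch (not pg_dump's real code) of assembling optional comma-separated clauses safely:

```c
#include <stdio.h>

/* Hypothetical helper: append an optional "name = value" clause to a
 * comma-separated list in buf, emitting the separator only when a
 * previous clause exists AND the current value is present. Skipping an
 * absent clause entirely is what prevents output like
 * "..., , alignment = double". Returns the new write offset. */
size_t append_clause(char *buf, size_t off, size_t bufsize,
                     int *nclauses, const char *name, const char *value)
{
    if (value == NULL)
        return off;             /* absent clause: emit nothing at all */
    if (*nclauses > 0)
        off += (size_t) snprintf(buf + off, bufsize - off, ", ");
    off += (size_t) snprintf(buf + off, bufsize - off, "%s = %s",
                             name, value);
    (*nclauses)++;
    return off;
}
```

The fix committed for the bug above amounts to the same discipline: don't print the comma until you know the next clause will really be emitted.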
[ { "msg_contents": "Hi,\n\nWe are in the process of moving to Postgres from Oracle. We were using\n\"savepoint xyz\" and \"rollback to savepoint xyz\" with Oracle. Are these\nsupported by Postgres? If so, what are the equivalent queries in postgres\nfor the same.\n\nThanks\nYuva\nSr. Java Developer\nwww.ebates.com\n", "msg_date": "Wed, 24 Jul 2002 16:24:59 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "savepoint and rollback queries in postgres" }, { "msg_contents": "> We are in the process of moving to Postgres from Oracle. We were using\n> \"savepoint xyz\" and \"rollback to savepoint xyz\" with Oracle. Are these\n> supported by Postgres? If so, what are the equivalent queries in postgres\n> for the same.\n\nNo, they're not supported in any released version. Postgres doesn't have\nsavepoints and hence doesn't have nested transactions. However, there are\nmoves afoot and such functionality may appear in 7.3 or 7.4.\n\nChris\n\n", "msg_date": "Thu, 25 Jul 2002 10:20:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: savepoint and rollback queries in postgres" } ]
[ { "msg_contents": "I get an error with current:\n\ntest=# RESET SESSION AUTHORIZATION;\nERROR: parser: parse error at or near \"AUTHORIZATION\"\n--\nTatsuo Ishii\n", "msg_date": "Thu, 25 Jul 2002 11:18:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "RESET SESSION AUTHORIZATION" }, { "msg_contents": "> I get an error with current:\n> \n> test=# RESET SESSION AUTHORIZATION;\n> ERROR: parser: parse error at or near \"AUTHORIZATION\"\n\nIt tured out that outdated gram.c file caused the problem after doing\ncvs up (I don't know why). Anyway sorry for the false alarm.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 25 Jul 2002 12:29:08 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: RESET SESSION AUTHORIZATION" } ]
[ { "msg_contents": "\n> > Now when creating a function you can do:\n> > CREATE FUNCTION foo(text) RETURNS setof RECORD ...\n> >\n> > And when using it you can do, e.g.:\n> > SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)\n> \n> Why is there the requirement to declare the type at SELECT \n> time at all? Why\n> not just take what you get when you run the function?\n\nYea, that would imho be ultra cool, but I guess the parser/planner must already \nknow the expected column names and types to resolve conflicts, do a reasonable \nplan and use the correct type conversions.\n\nMaybe the AS (...) could be optional, and if left out, the executor would need \nto abort iff duplicate colnames (from a joined table) or non binary compatible \nconversions would be involved. A \"select * from func();\" would then always work,\nbut if you add \"where x=5\" the executor might need to abort.\nLooks like a lot of work though.\n\nAndreas \n", "msg_date": "Thu, 25 Jul 2002 09:02:30 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Proposal: anonymous composite types for Table Functions (aka\n SRFs)" } ]
[ { "msg_contents": "I am trying to create an aggregate function that works on whole tuples,\nbut the system does not find them once defined ;(\n\nhannu=# \\d users\n Table \"users\"\n Column | Type | \nModifiers \n----------+---------+------------------------------------------------------\n fname | text | not null\n lname | text | not null\n username | text | \n userid | integer | not null \n\nhannu=# create or replace function add_table_row(text,users) returns\ntext as\nhannu-# 'state = args[0]\nhannu'# user = args[1][\"fname\"] + \":\" + args[1][\"lname\"]\nhannu'# if state:\nhannu'# return state + \"\\\\n\" + user \nhannu'# else:\nhannu'# return user\nhannu'# '\nhannu-# LANGUAGE 'plpython';\nCREATE\nhannu=# select add_table_row('',users) from users;\n add_table_row \n---------------\n jane:doe\n john:doe\n willem:doe\n rick:smith\n(4 rows)\nhannu=# create aggregate tabulate (\nhannu(# basetype = users,\nhannu(# sfunc = add_table_row,\nhannu(# stype = text,\nhannu(# initcond = ''\nhannu(# );\nCREATE\nhannu=# select tabulate(users) from users;\nERROR: No such attribute or function 'tabulate'\n\n\n\nWhat am I doing wrong ?\n\n--------------\nHannu\n\n", "msg_date": "25 Jul 2002 20:17:56 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "creating aggregates that work on composite types (whole tuples)" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I am trying to create an aggregate function that works on whole tuples,\n> but the system does not find them once defined ;(\n> hannu=# select tabulate(users) from users;\n> ERROR: No such attribute or function 'tabulate'\n\nThis seems to work in CVS tip. I think you're stuck in older releases\nthough. The syntax \"foo(tablename)\" is understood to mean \"either a\ncolumn selection or a function call\" ... 
but aggregates were quite\ndistinct from plain functions up until about a month ago, and they\nweren't considered as an option at that spot in the code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 10:49:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: creating aggregates that work on composite types (whole tuples) " }, { "msg_contents": "On Tue, 2002-07-30 at 16:49, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > I am trying to create an aggregate function that works on whole tuples,\n> > but the system does not find them once defined ;(\n> > hannu=# select tabulate(users) from users;\n> > ERROR: No such attribute or function 'tabulate'\n> \n> This seems to work in CVS tip.\n\nThat's great news.\n\nWhat I really would want is to be able to register and call the same\nfunction for \"any\" input, like count(*) is currently, only with the\nexception that the rows are actually passed to it.\n\nI think that could be made possible sometime in the future with either\nregistering for 'any' and anonymous types created on-the-fly or some\nsort of tuple \"supertype\" that any type of row could be cast into,\neither implicitly or explicitly so that I could register aggregate\ntabulate(tupletype)\n\nI would not mind having to do tabulate(tupletype(users)) but it would be\nnice if it were done automatically.\n\n> I think you're stuck in older releases\n> though. The syntax \"foo(tablename)\" is understood to mean \"either a\n> column selection or a function call\" ... but aggregates were quite\n> distinct from plain functions up until about a month ago, and they\n> weren't considered as an option at that spot in the code.\n\nThanks, I'll check it on CVS tip.\n\n---------------\nHannu\n\n", "msg_date": "30 Jul 2002 18:06:57 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: creating aggregates that work on composite types" } ]
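The sfunc/stype/initcond machinery behind CREATE AGGREGATE in this thread reduces to a left fold of the state-transition function over the rows. The sketch below models that in plain Python; it is not PostgreSQL's executor code, and `run_aggregate` plus the sample rows are invented for illustration, while `add_table_row` mirrors the plpython function from the first message.

```python
def add_table_row(state, row):
    """State-transition function (sfunc), mirroring the plpython one above."""
    user = row["fname"] + ":" + row["lname"]
    return state + "\n" + user if state else user

def run_aggregate(sfunc, initcond, rows):
    """Model of aggregate evaluation: fold sfunc over the rows."""
    state = initcond
    for row in rows:
        state = sfunc(state, row)
    return state

rows = [
    {"fname": "jane", "lname": "doe"},
    {"fname": "john", "lname": "doe"},
]
print(run_aggregate(add_table_row, "", rows))  # "jane:doe" and "john:doe" on two lines
```

The empty-string initcond from the CREATE AGGREGATE above is why the first row produces no leading newline: the sfunc treats a falsy state as "no rows seen yet".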
[ { "msg_contents": "Hi, I'm a newbie to postgresql, and I have 2 questions.\r\n\r\n1. shared memory hash key is string.\r\nwhy don't we use predefined integer value?\r\nI think it is dangerous to use string for key value,\r\nbecause it can be mistyped , which might lead us to debug more and more.\r\n\r\n2. appendStringInfoChar reallocates every bytes whenever needed,\r\nwhy don't reallocate chunk for better performace?\r\nbecause initStringInfo allocates only 256 bytes ,\r\npg_beginmessage prepared 256 bytes of StringInfo for communication buffer.\r\n\r\nIt's hard to analyze pgsql source code,\r\nbecause I'm a newbie, and have no idea about postgres internals,\r\nBut I will try hard!! \r\n\r\nThank you.\r\n\r\n\n\n\n\n\n\n\nHi, I'm a newbie to postgresql, and I have 2 \r\nquestions.\n \n1. shared memory hash key is string.why don't we use \r\npredefined integer value?I think it is dangerous to use string for key \r\nvalue,because it can be mistyped , which might lead us to debug more and \r\nmore.\n \n2. appendStringInfoChar reallocates every bytes whenever \r\nneeded,why don't reallocate chunk for better performace?because \r\ninitStringInfo allocates only 256 bytes ,pg_beginmessage prepared 256 bytes \r\nof StringInfo for communication buffer.\n \nIt's hard to analyze pgsql source code,\nbecause I'm a newbie, and have no idea about postgres \r\ninternals,\nBut I will try hard!! \n \nThank you.", "msg_date": "Fri, 26 Jul 2002 03:43:30 +0900", "msg_from": "\"Kangmo, Kim\" <ilvsusie@hahafos.com>", "msg_from_op": true, "msg_subject": "Two question about performance tuning issue." }, { "msg_contents": "On Fri, Jul 26, 2002 at 03:43:30AM +0900, Kangmo, Kim wrote:\n> 2. 
appendStringInfoChar reallocates every bytes whenever needed,\n> why don't reallocate chunk for better performace?\n\nenlargeStringInfo() takes care of that -- from the comment in the\nfunction:\n\n /*\n * We don't want to allocate just a little more space with each\n * append; for efficiency, double the buffer size each time it\n * overflows. Actually, we might need to more than double it if\n * 'needed' is big...\n */\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 26 Jul 2002 10:20:28 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: Two question about performance tuning issue." } ]
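The doubling policy quoted from enlargeStringInfo() can be modelled in a few lines. This is a toy Python model, not the backend's C implementation — the class and attribute names are invented — but it shows why repeated one-byte appends stay cheap: the buffer only regrows O(log n) times.

```python
class StringInfoModel:
    """Toy model of StringInfo's growth policy -- not the backend's C code."""

    def __init__(self):
        self.maxlen = 256      # initStringInfo starts with a 256-byte buffer
        self.len = 0           # bytes in use (excluding the terminator)
        self.reallocs = 0      # how many times we had to grow

    def enlarge(self, needed):
        if self.len + needed + 1 <= self.maxlen:
            return             # enough room already: no reallocation
        while self.len + needed + 1 > self.maxlen:
            self.maxlen *= 2   # double; loops again if 'needed' is big
        self.reallocs += 1

    def append_char(self):
        self.enlarge(1)
        self.len += 1

buf = StringInfoModel()
for _ in range(1000):
    buf.append_char()
# 256 -> 512 -> 1024: a thousand one-byte appends cost only two regrowths,
# whereas growing by a fixed chunk would regrow (and copy) O(n) times.
```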
[ { "msg_contents": "Earlier I had argued that SET LOCAL should not be used in the context it\nis now, and I had suggested SET TRANSACTION as a replacement. However,\nnow that I look at it in the implementation, this syntax is just too\nbizzare and prone to confuse.\n\nHere are a couple of examples of what is/would be possible.\n\nSET SESSION SESSION AUTHORIZATION\n\n(This is semantically valid, since the parameter is the \"session\nauthorization\" and you want it to last for the session.)\n\nSET TRANSACTION SESSION AUTHORIZATION\n\n(Clearly confusing)\n\nSET SESSION TRANSACTION ISOLATION LEVEL\n\n(Syntactically valid, but nonsensical.)\n\nSET TRANSACTION TRANSACTION ISOLATION LEVEL\n\n(Stupid)\n\nSET TRANSACTION ISOLATION LEVEL\n\n(This seems to imply that the parameter name is \"isolation level\" whereas\nin fact the \"transaction\" belongs to the parameter name.)\n\nSET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL\nSET TRANSACTION SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL\n\n(OK, you get the idea...)\n\nAs an alternative syntax I can suggest\n\nSET name TO value [ ON COMMIT RESET ];\n\nI think this is painfully clear, is similar to other SQL standard\ncommands, and draws on existing terminology (COMMIT/RESET). OK, slightly\nmore typing, I guess.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 25 Jul 2002 22:55:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "SET LOCAL again" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> As an alternative syntax I can suggest\n\n> SET name TO value [ ON COMMIT RESET ];\n\nUgh. 
Why can't we stick with SET LOCAL?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jul 2002 11:09:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SET LOCAL again " }, { "msg_contents": "Tom Lane writes:\n\n> > As an alternative syntax I can suggest\n>\n> > SET name TO value [ ON COMMIT RESET ];\n>\n> Ugh. Why can't we stick with SET LOCAL?\n\nSET LOCAL is already used for something else in the SQL standard. Not\nsure if we'll ever implement that, but it's something to be concerned\nabout.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 30 Jul 2002 18:38:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: SET LOCAL again " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n> As an alternative syntax I can suggest\n>> \n> SET name TO value [ ON COMMIT RESET ];\n>> \n>> Ugh. Why can't we stick with SET LOCAL?\n\n> SET LOCAL is already used for something else in the SQL standard. Not\n> sure if we'll ever implement that, but it's something to be concerned\n> about.\n\nActually, it looks to me like the spec's SET LOCAL has a compatible\ninterpretation: it only affects the current transaction.\n\nMy main gripe with \"ON COMMIT RESET\" is that it's a misleading\ndescription of what will happen --- RESETting a variable is quite\ndifferent from allowing it to revert to the pre-transaction state.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 12:44:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SET LOCAL again " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> > As an alternative syntax I can suggest\n> >> \n> > SET name TO value [ ON COMMIT RESET ];\n> >> \n> >> Ugh. Why can't we stick with SET LOCAL?\n> \n> > SET LOCAL is already used for something else in the SQL standard. 
Not\n> > sure if we'll ever implement that, but it's something to be concerned\n> > about.\n> \n> Actually, it looks to me like the spec's SET LOCAL has a compatible\n> interpretation: it only affects the current transaction.\n> \n> My main gripe with \"ON COMMIT RESET\" is that it's a misleading\n> description of what will happen --- RESETting a variable is quite\n> different from allowing it to revert to the pre-transaction state.\n\nI don't like stuff trailing off at the end, especially three words. \nThat SET command is getting so big, it may fall over. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 12:47:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SET LOCAL again" }, { "msg_contents": "\n\n\n\n\n\nBruce Momjian wrote:\n\nTom Lane wrote:\n \n\nPeter Eisentraut <peter_e@gmx.net> writes:\n \n\nTom Lane writes:\nAs an alternative syntax I can suggest\n \nSET name TO value [ ON COMMIT RESET ];\n \n\nUgh. Why can't we stick with SET LOCAL?\n \n\n\n\nSET LOCAL is already used for something else in the SQL standard. Not\nsure if we'll ever implement that, but it's something to be concerned\nabout.\n \n\nActually, it looks to me like the spec's SET LOCAL has a compatible\ninterpretation: it only affects the current transaction.\n\nMy main gripe with \"ON COMMIT RESET\" is that it's a misleading\ndescription of what will happen --- RESETting a variable is quite\ndifferent from allowing it to revert to the pre-transaction state.\n \n\n\nI don't like stuff trailing off at the end, especially three words. \nThat SET command is getting so big, it may fall over. 
;-)\n\n \n\nPerhaps ON COMMIT REVERT would be more intuitive.\n\n\n", "msg_date": "Tue, 30 Jul 2002 16:48:40 -0500", "msg_from": "Thomas Swan <tswan@idigx.com>", "msg_from_op": false, "msg_subject": "Re: SET LOCAL again" } ]
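The distinction being debated in this thread — a variable reverting to its pre-transaction value at commit versus being RESET to its default — can be made concrete with a toy model. This is purely illustrative Python, not how the backend stores its settings; all class and method names here are invented.

```python
class Settings:
    """Toy model of session vs transaction-local settings (illustrative only)."""

    def __init__(self, defaults):
        self.defaults = dict(defaults)
        self.session = dict(defaults)
        self.txn_saved = {}        # pre-transaction values of SET LOCAL vars

    def set(self, name, value):
        self.session[name] = value

    def set_local(self, name, value):
        # remember the value to revert to at commit (only the first time)
        self.txn_saved.setdefault(name, self.session[name])
        self.session[name] = value

    def reset(self, name):
        self.session[name] = self.defaults[name]

    def commit(self):
        self.session.update(self.txn_saved)   # revert, don't reset
        self.txn_saved.clear()

s = Settings({"datestyle": "ISO"})
s.set("datestyle", "German")       # session-level change
s.set_local("datestyle", "SQL")    # lasts only until end of transaction
s.commit()
# reverts to "German" (the pre-transaction value), not the "ISO" default --
# which is Tom's objection to spelling the behaviour "ON COMMIT RESET"
```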
[ { "msg_contents": " From looking at the set of implicit or not casts, I think there are two\nmajor issues to discuss:\n\n1. Should truncating/rounding casts be implicit? (e.g., float4 -> int4)\n\nI think there's a good argument for \"no\", but for some reason SQL99 says\n\"yes\", at least for the family of numerical types.\n\n2. Should casts from non-character types to text be implicit? (e.g., date\n-> text)\n\nI think this should be \"no\", for the same reason that the other direction\nis already disallowed. It's just sloppy programming.\n\nI also have a few individual cases that look worthy of consideration:\n\nabstime <-> int4: I think these should not be implicit because they\nrepresent different \"kinds\" of data. (These are binary compatible casts,\nso changing them to not implicit probably won't have any effect. I'd have\nto check this.)\n\ndate -> timestamp[tz]: I'm suspicious of this one, but it's hard to\nexplain. The definition to fill in the time component with zeros is\nreasonable, but it's not the same thing as casting integers to floats\nbecause dates really represent a time span of 24 hours and timestamps an\nindivisible point in time. I suggest making this non-implicit, for\nconformance with SQL and for general consistency between the date/time\ntypes.\n\ntime -> interval: I'm not even sure this cast should exist at all.\nProper arithmetic would be IntervalValue = TimeValue - TIME 'midnight'.\nAt least make it non-implicit.\n\ntimestamp -> abstime: This can be implicit AFAICS.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 25 Jul 2002 22:56:02 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Which casts should be implicit" }, { "msg_contents": "> date -> timestamp[tz]: I'm suspicious of this one, but it's hard to\n> explain. 
The definition to fill in the time component with zeros is\nreasonable, but it's not the same thing as casting integers to floats\nbecause dates really represent a time span of 24 hours and timestamps an\nindivisible point in time. I suggest making this non-implicit, for\nconformance with SQL and for general consistency between the date/time\ntypes.\n\nAlthough I'm sure there's _loads_ of people using this conversion,\nincluding me in various random places in the codebase.\n\nChris\n\n", "msg_date": "Fri, 26 Jul 2002 10:38:24 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Which casts should be implicit" }, { "msg_contents": "> > date -> timestamp[tz]: I'm suspicious of this one, but it's hard to\n> > explain. The definition to fill in the time component with zeros is\n> > reasonable, but it's not the same thing as casting integers to floats\n> > because dates really represent a time span of 24 hours and timestamps an\n> > indivisible point in time. I suggest making this non-implicit, for\n> > conformance with SQL and for general consistency between the date/time\n> > types.\n>\n> Although I'm sure there's _loads_ of people using this conversion,\n> including me in various random places in the codebase.\n\nActually, if inserting counts as an explicit conversion, then maybe not...\n\n", "msg_date": "Fri, 26 Jul 2002 13:49:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Which casts should be implicit" } ]
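The case against making truncating/rounding casts implicit (point 1 of the first message in this thread) is that they cannot round-trip: information is silently lost. A quick illustration, using Python's truncating int() as a stand-in for whatever truncation or rounding rule a database chooses — the exact rule differs, but the information loss is the same.

```python
def roundtrips(v):
    """Does a value survive a float -> int -> float round trip?"""
    return float(int(v)) == v

# Truncating/rounding casts lose information:
assert roundtrips(4.0)        # integral values survive
assert not roundtrips(4.7)    # the fractional part is silently discarded

# The reverse direction, int -> float, preserves small integers exactly,
# which is one reason int -> float is the safer cast to leave implicit.
assert float(7) == 7.0
```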
[ { "msg_contents": "I have committed support codes for DROP CONVERSION. I think most of\nworks for CREATE CONVERSION/DROP CONVERSION have been done.\n\nRemaining works are:\n\no Add some conversion functions (mostly cyrillc encodings)\no Add SQL99 CONVERT command\n--\nTatsuo Ishii\n", "msg_date": "Fri, 26 Jul 2002 09:03:04 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "CREATE CONVERSION/DROP CONVERSION implemented" } ]
[ { "msg_contents": "Is there any development in progress to make PGSQL work on embedded Linux with other non-x86 CPUs? I'm specifically targetting an SH3 hardware platform with 128MB of FROM. I would like to know if anybody made any headstart already. Thank You.\n\n\n\n\n\n\n\nIs there any development in progress to make PGSQL \nwork on embedded Linux with other non-x86 CPUs? I'm specifically targetting an \nSH3 hardware platform with 128MB of FROM. I would like to know if anybody made \nany headstart already. Thank \nYou.", "msg_date": "Fri, 26 Jul 2002 15:36:31 +0800", "msg_from": "\"Celestino I. Olalo Jr.\" <jun@digi.com.ph>", "msg_from_op": true, "msg_subject": "postgres on Linux SH3" }, { "msg_contents": "On Fri, 26 Jul 2002, Celestino I. Olalo Jr. wrote:\n\n> Is there any development in progress to make PGSQL work on embedded\n> Linux with other non-x86 CPUs? I'm specifically targetting an SH3\n> hardware platform with 128MB of FROM. I would like to know if anybody\n> made any headstart already. Thank You.\n\nHave you tried yet, to get it to work? Are there any issues/problems that\nyou have encountered?\n\n\n", "msg_date": "Fri, 26 Jul 2002 06:53:07 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: postgres on Linux SH3" }, { "msg_contents": "Actually, I will only have my hands on the hardware platform by next week.\nI'm just hoping that somebody has done it before and may probably give me\nsome tips (instead of starting from scratch). I'm also hoping that there's\nsome technical document or guide on porting Postgres to other CPUs.\n\n----- Original Message -----\nFrom: \"Marc G. Fournier\" <scrappy@hub.org>\nTo: \"Celestino I. Olalo Jr.\" <jun@digi.com.ph>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, July 26, 2002 5:53 PM\nSubject: Re: [HACKERS] postgres on Linux SH3\n\n\n> On Fri, 26 Jul 2002, Celestino I. Olalo Jr. 
wrote:\n>\n> > Is there any development in progress to make PGSQL work on embedded\n> > Linux with other non-x86 CPUs? I'm specifically targetting an SH3\n> > hardware platform with 128MB of FROM. I would like to know if anybody\n> > made any headstart already. Thank You.\n>\n> Have you tried yet, to get it to work? Are there any issues/problems that\n> you have encountered?\n>\n>\n\n", "msg_date": "Fri, 26 Jul 2002 18:30:20 +0800", "msg_from": "\"Celestino I. Olalo Jr.\" <jun@digi.com.ph>", "msg_from_op": true, "msg_subject": "Re: postgres on Linux SH3" }, { "msg_contents": "On Fri, 26 Jul 2002, Celestino I. Olalo Jr. wrote:\n\n> Actually, I will only have my hands on the hardware platform by next\n> week. I'm just hoping that somebody has done it before and may\n> probably give me some tips (instead of starting from scratch). I'm\n> also hoping that there's some technical document or guide on porting\n> Postgres to other CPUs.\n\nIt should probably Just Work. You will have to write some\nsh3-specific spinlock inlines to get good performance,\nthough.\n\nMatthew.\n\n", "msg_date": "Fri, 26 Jul 2002 13:59:00 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: postgres on Linux SH3" } ]
[ { "msg_contents": "\nSomething to maybe add to the TODO list, if someone has the\ntime/inclination to work on it ...\n\nThe problem with the current auth system, as I see it, is that you can't\neasily have seperate user lists and passwords per database ... its shared\nacross the system ...\n\nThe closest you can get is to have a database defined as 'password' in\npg_hba.conf, with an external password file from pg_shadow, which, for the\nmost part, is good ... but it doesn't lend itself well to a 'hands off'\nserver ...\n\nRight now, with v7.2, we have two 'sub-processes' that start up for stats\ncollection ... has anyone thought about adding a 3rd as a password server?\n\nBasically, it would be used to manage the pg_hba.conf file itself *while*\nthe server is/was live ...\n\nFor instance, CREATE DATABASE would need to get extended to have\nsomething like \"WITH AUTH '{trust|password|ident}' FROM '<IP>'\" added to\nit, which would add an appropriate line to pg_hba.conf ...\n\nThe database owner would have the ability to add users if (and only if)\nthe database was setup for 'password', and the password daemon would\nautomatically modify the password file(s) for that database ..\n\nWhat would be even more cool ... to be able to do something like:\n\nCREATE USER <user> FROM <IP> WITH PASSWORD <password>\n\nwhich, if it didn't exist, would create a line in pg_hba.conf of:\n\nhost\t<database>\t<ip>\tpassword\t<database>\n\nand create a <database> password file with that person in it ...\n\n\n\n\n", "msg_date": "Fri, 26 Jul 2002 10:48:53 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Password sub-process ..." }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> Something to maybe add to the TODO list, if someone has the\n> time/inclination to work on it ...\n> \n> The problem with the current auth system, as I see it, is that you can't\n> easily have seperate user lists and passwords per database ... 
its shared\n> across the system ...\n\nI don't see a problem with that in general. Other databases (like\nInformix) do it even worse and you need a UNIX or NIS account where the\npassword is held!\n\nWhat would be good is IMHO to have GRANT|REVOKE CONNECT which defaults\nto REVOKE, so only superusers and the DB owner can connect, but that the\nowner later can change it without the need to edit hba.conf.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Fri, 26 Jul 2002 10:02:58 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Fri, 26 Jul 2002, Jan Wieck wrote:\n\n> What would be good is IMHO to have GRANT|REVOKE CONNECT which defaults\n> to REVOKE, so only superusers and the DB owner can connect, but that the\n> owner later can change it without the need to edit hba.conf.\n\nOh, yes. Me too please. I think something close to this is coming with \nschemes - well at least my take on it indicates that.\n\n\nRod\n-- \n \"Open Source Software - Sometimes you get more than you paid for...\"\n\n", "msg_date": "Fri, 26 Jul 2002 07:33:43 -0700 (PDT)", "msg_from": "\"Roderick A. Anderson\" <raanders@acm.org>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Fri, Jul 26, 2002 at 10:48:53 -0300,\n \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> \n> Something to maybe add to the TODO list, if someone has the\n> time/inclination to work on it ...\n> \n> The problem with the current auth system, as I see it, is that you can't\n> easily have seperate user lists and passwords per database ... 
its shared\n> across the system ...\n\nIf you look at the 7.3 docs you will see that you will be able to restrict\naccess to databases by user or group name. If you use group name and have\none group per database you will be able to administer access pretty easily.\nIf you have a lot of databases you can use group names matching database\nnames and not have to touch your conf file when making new databases.\n", "msg_date": "Fri, 26 Jul 2002 10:28:09 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Fri, 2002-07-26 at 11:28, Bruno Wolff III wrote:\n> On Fri, Jul 26, 2002 at 10:48:53 -0300,\n> \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> > \n> > Something to maybe add to the TODO list, if someone has the\n> > time/inclination to work on it ...\n> > \n> > The problem with the current auth system, as I see it, is that you can't\n> > easily have seperate user lists and passwords per database ... its shared\n> > across the system ...\n> \n> If you look at the 7.3 docs you will see that you will be able to restrict\n> access to databases by user or group name. If you use group name and have\n> one group per database you will be able to administer access pretty easily.\n> If you have a lot of databases you can use group names matching database\n> names and not have to touch your conf file when making new databases.\n\nThis still doesn't allow john on db1 to be a different user than john on\ndb2. To accomplish that (easily) we still need to install different\ninstances for each database. Very minor issue, but it would be nice to\nhave the ability (for those selling PostgreSQL storage services).\n\n", "msg_date": "26 Jul 2002 11:30:06 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." 
}, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> This still doesn't allow john on db1 to be a different user than john on\n> db2. To accomplish that (easily) we still need to install different\n> instances for each database.\n\nSome people think that cross-database user names are a feature, not\na bug. I cannot see any way to change that without creating huge\nbackward-compatibility headaches --- and it's not at all clear to\nme that it's a step forward, anyway.\n\nI think that it might be worth adding a CONNECT privilege at the\ndatabase level; that together with Bruce's recent revisions to\npg_hba.conf ought to be a pretty good improvement.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jul 2002 11:48:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ... " }, { "msg_contents": "On Fri, 26 Jul 2002, Bruno Wolff III wrote:\n\n> On Fri, Jul 26, 2002 at 10:48:53 -0300,\n> \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> >\n> > Something to maybe add to the TODO list, if someone has the\n> > time/inclination to work on it ...\n> >\n> > The problem with the current auth system, as I see it, is that you can't\n> > easily have seperate user lists and passwords per database ... its shared\n> > across the system ...\n>\n> If you look at the 7.3 docs you will see that you will be able to restrict\n> access to databases by user or group name. If you use group name and have\n> one group per database you will be able to administer access pretty easily.\n> If you have a lot of databases you can use group names matching database\n> names and not have to touch your conf file when making new databases.\n\nRight, but, unless I'm missed something, you still won't have the ability\nto have \"two joeblows on the system, with two distinct passwords, having\naccess to two different databases\" ...\n\n", "msg_date": "Fri, 26 Jul 2002 13:45:42 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On 26 Jul 2002, Rod Taylor wrote:\n\n> On Fri, 2002-07-26 at 11:28, Bruno Wolff III wrote:\n> > On Fri, Jul 26, 2002 at 10:48:53 -0300,\n> > \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> > >\n> > > Something to maybe add to the TODO list, if someone has the\n> > > time/inclination to work on it ...\n> > >\n> > > The problem with the current auth system, as I see it, is that you can't\n> > > easily have seperate user lists and passwords per database ... its shared\n> > > across the system ...\n> >\n> > If you look at the 7.3 docs you will see that you will be able to restrict\n> > access to databases by user or group name. If you use group name and have\n> > one group per database you will be able to administer access pretty easily.\n> > If you have a lot of databases you can use group names matching database\n> > names and not have to touch your conf file when making new databases.\n>\n> This still doesn't allow john on db1 to be a different user than john on\n> db2. To accomplish that (easily) we still need to install different\n> instances for each database. Very minor issue, but it would be nice to\n> have the ability (for those selling PostgreSQL storage services).\n\nActually, in an IS dept with several applications running, each with a\nseperate group of users, this would be a plus ... if I have to create N\ninstances, I'm splitting up available memory/shared memory between them,\ninstead of creating one great big pool ...\n\nfor instance, when we upgraded archives.postgresql.org's memory a while\nback, we created a shared memory segment of 1.5Gig of RAM for all the\ndatabases (except fts, she's still under the old v7.1.3 db). 
Which means\nthat if all databases are quiet, and one comes to life to do a nice big\nquery, it has a nice big pool of RAM to work with ...\n\n\n", "msg_date": "Fri, 26 Jul 2002 13:48:53 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Fri, 26 Jul 2002, Tom Lane wrote:\n\n> Rod Taylor <rbt@zort.ca> writes:\n> > This still doesn't allow john on db1 to be a different user than john on\n> > db2. To accomplish that (easily) we still need to install different\n> > instances for each database.\n>\n> Some people think that cross-database user names are a feature, not\n> a bug. I cannot see any way to change that without creating huge\n> backward-compatibility headaches --- and it's not at all clear to\n> me that it's a step forward, anyway.\n>\n> I think that it might be worth adding a CONNECT privilege at the\n> database level; that together with Bruce's recent revisions to\n> pg_hba.conf ought to be a pretty good improvement.\n\nNote that I'm not looking to get rid of any functionality, only suggesting\nthat we should look at improving the ability to do remote administration\n(ie. eliminate the requirement to manually change files) ...\n\nAs an example ... at the University I work at, we've started to use PgSQL\nfor more and more of our internal stuff, and/or let the students start to\nuse it for their projects ... so we have PgSQL running on one server,\nwhile its being access by other ones around campus. I'd like to be able\nto be able to streamline things so that operations could easily create a\nnew database for a student (or faculty) on the server as a simple SQL\n\"CREATE DATABASE/USER\" command, vs risking them making a mistake when they\nmanually edit the pg_hba.conf file ...\n\nAlso, I thnk I might have missed the point of the whole CONNECT privilege\nthing ... 
if I have two ppl named joe on the system, each with different\npasswords, how does the CONNECT know which one is the one that has access\nto that database?\n\n\n", "msg_date": "Fri, 26 Jul 2002 13:55:58 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ... " }, { "msg_contents": "On Fri, Jul 26, 2002 at 13:55:58 -0300,\n \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> \n> As an example ... at the University I work at, we've started to use PgSQL\n> for more and more of our internal stuff, and/or let the students start to\n> use it for their projects ... so we have PgSQL running on one server,\n> while its being access by other ones around campus. I'd like to be able\n> to be able to streamline things so that operations could easily create a\n> new database for a student (or faculty) on the server as a simple SQL\n> \"CREATE DATABASE/USER\" command, vs risking them making a mistake when they\n> manually edit the pg_hba.conf file ...\n\n From what I read in the development docs, in 7.3 you will be able to just\ndo a createuser and createdb to make things work. There will be a \"sameuser\"\nuser specification which will allow access to a database with a matching name.\n\n> Also, I thnk I might have missed the point of the whole CONNECT privilege\n> thing ... if I have two ppl named joe on the system, each with different\n> passwords, how does the CONNECT know which one is the one that has access\n> to that database?\n\nI think for something like a University or IT shop you would want to use\nIDs that are consistant accross all servers. Unfortunately when you are\nproviding hosting service for other companies it may not be feasible to\ndo that.\n", "msg_date": "Fri, 26 Jul 2002 12:21:50 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." 
}, { "msg_contents": "On Fri, 2002-07-26 at 11:48, Tom Lane wrote:\n> Rod Taylor <rbt@zort.ca> writes:\n> > This still doesn't allow john on db1 to be a different user than john on\n> > db2. To accomplish that (easily) we still need to install different\n> > instances for each database.\n> \n> Some people think that cross-database user names are a feature, not\n> a bug. I cannot see any way to change that without creating huge\n> backward-compatibility headaches --- and it's not at all clear to\n> me that it's a step forward, anyway.\n\nI've been considering ways to come up with 2 classes of user. One which\nis global, the other which is local. But it's not nearly enough of an\ninconvenience to warrant it.\n\n", "msg_date": "26 Jul 2002 13:27:47 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Fri, 2002-07-26 at 12:55, Marc G. Fournier wrote:\n> On Fri, 26 Jul 2002, Tom Lane wrote:\n> \n> > Rod Taylor <rbt@zort.ca> writes:\n> > > This still doesn't allow john on db1 to be a different user than john on\n> > > db2. To accomplish that (easily) we still need to install different\n> > > instances for each database.\n> >\n> > Some people think that cross-database user names are a feature, not\n> > a bug. I cannot see any way to change that without creating huge\n> > backward-compatibility headaches --- and it's not at all clear to\n> > me that it's a step forward, anyway.\n> >\n> > I think that it might be worth adding a CONNECT privilege at the\n> > database level; that together with Bruce's recent revisions to\n> > pg_hba.conf ought to be a pretty good improvement.\n\n> Also, I thnk I might have missed the point of the whole CONNECT privilege\n> thing ... if I have two ppl named joe on the system, each with different\n> passwords, how does the CONNECT know which one is the one that has access\n> to that database?\n\nWell.. right now we call one db1_joe and db2_joe. 
I meant adding the\nability to lock some users to specific DBs -- and only exist there. \nAuthentication would use destination DB as well as username.\n\nWhere DB is null, the user is a global user. Usernames would still be\nunique per database.\n\n\n", "msg_date": "26 Jul 2002 13:49:30 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Bruno Wolff III wrote:\n> On Fri, Jul 26, 2002 at 13:55:58 -0300,\n> \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> > \n> > As an example ... at the University I work at, we've started to use PgSQL\n> > for more and more of our internal stuff, and/or let the students start to\n> > use it for their projects ... so we have PgSQL running on one server,\n> > while its being access by other ones around campus. I'd like to be able\n> > to be able to streamline things so that operations could easily create a\n> > new database for a student (or faculty) on the server as a simple SQL\n> > \"CREATE DATABASE/USER\" command, vs risking them making a mistake when they\n> > manually edit the pg_hba.conf file ...\n> \n> >From what I read in the development docs, in 7.3 you will be able to just\n> do a createuser and createdb to make things work. There will be a \"sameuser\"\n> user specification which will allow access to a database with a matching name.\n\nActually, there is a 'samegroup' in 7.3 too, so you can create the db,\ncreate the group, and add whoever you want to the group. Pretty simple.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 23:02:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Marc G. 
Fournier wrote:\n> \n> Something to maybe add to the TODO list, if someone has the\n> time/inclination to work on it ...\n> \n> The problem with the current auth system, as I see it, is that you can't\n> easily have seperate user lists and passwords per database ... its shared\n> across the system ...\n> \n> The closest you can get is to have a database defined as 'password' in\n> pg_hba.conf, with an external password file from pg_shadow, which, for the\n> most part, is good ... but it doesn't lend itself well to a 'hands off'\n> server ...\n\nActually, that is removed in 7.3. It was too weird a syntax and format\nand the original idea of sharing /etc/passwd there didn't work anymore\non most systems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 23:03:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Mon, 29 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> >\n> > Something to maybe add to the TODO list, if someone has the\n> > time/inclination to work on it ...\n> >\n> > The problem with the current auth system, as I see it, is that you can't\n> > easily have seperate user lists and passwords per database ... its shared\n> > across the system ...\n> >\n> > The closest you can get is to have a database defined as 'password' in\n> > pg_hba.conf, with an external password file from pg_shadow, which, for the\n> > most part, is good ... but it doesn't lend itself well to a 'hands off'\n> > server ...\n>\n> Actually, that is removed in 7.3. It was too weird a syntax and format\n> and the original idea of sharing /etc/passwd there didn't work anymore\n> on most systems.\n\nwhoa ... what replaced it? 
weird it might have been, but it worked great\nif you knew about it ...\n\n\n", "msg_date": "Tue, 30 Jul 2002 00:12:01 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Marc G. Fournier wrote:\n> On Mon, 29 Jul 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > >\n> > > Something to maybe add to the TODO list, if someone has the\n> > > time/inclination to work on it ...\n> > >\n> > > The problem with the current auth system, as I see it, is that you can't\n> > > easily have seperate user lists and passwords per database ... its shared\n> > > across the system ...\n> > >\n> > > The closest you can get is to have a database defined as 'password' in\n> > > pg_hba.conf, with an external password file from pg_shadow, which, for the\n> > > most part, is good ... but it doesn't lend itself well to a 'hands off'\n> > > server ...\n> >\n> > Actually, that is removed in 7.3. It was too weird a syntax and format\n> > and the original idea of sharing /etc/passwd there didn't work anymore\n> > on most systems.\n> \n> whoa ... what replaced it? weird it might have been, but it worked great\n> if you knew about it ...\n\nWell, I asked and no one answered. ;-)\n\nActually, it is replaced by encrypted pg_shadow by default in 7.3, and\nthe new USER (users or groups) column in pg_hba.conf that will be in\n7.3 that can restrict based on user/group. This replaces the use of the\nsecondary file for just usernames. You can now specify a filename in\npg_hba.conf listing these. Would you look over the pg_hba.conf in CVS\nand tell me what additional things are needed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 23:16:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Mon, 29 Jul 2002, Bruce Momjian wrote:\n\n> Actually, it is replaced by encrypted pg_shadow by default in 7.3, and\n> the new USER (users or groups) column in pg_hba.conf that will be in 7.3\n> that can restrict based on user/group. This replaces the use of the\n> secondary file for just usernames. You can now specify a filename in\n> pg_hba.conf listing these. Would you look over the pg_hba.conf in CVS\n> and tell me what additional things are needed.\n\nWow, what a change ... some nice stuff in there, mind you, but unless I'm\nmissing something, you've thrown out some *major* functionality that we\nhad before :( And since I missed this, its quite possible that i am\nmissing something :)\n\nFirst and foremost in my mind ... how do you have two users in the system\nwith seperate passwords?\n\nFor instance, I have an application that right now that each authenticated\nuser has a seperate userid/pass in pg_user ... this doesn't deal will with\nrunning multiple instances of this app on the same instance of PgSQL,\nsince as soon as there are two 'bruce' users, only one can have a password\n... I could run two instances of PgSQL, but then you have to split the\nresources between the two, instead of, for instance, having one great big\nshared memory pool attached to one instance to cover both ...\n\nSo, I recode the app (yes, I have an app that was coded like this that I\nhave to fix ... we weren't thinking when we wrote that section) so that\nwhen I add a new user to the application it does two things:\n\n\t1. adds the username to pg_user *if* required\n\t2. 
adds the username/password to a \"password\" file specific to\n\t that instance of the application\n\nSo, unless I've missed something, in v7.3, this won't be possible?\n\nSomehow, I need to be able to have two users Bruce in pg_users, each with\nseparate passwords, with Bruce with pass1 having access to database1 and\nBruce with pass2 having access to database2 ...\n\nNow, to knock out some thoughts here ... would it be possible to add a\nfield to pg_{user,shadow} to state what database that userid/passwd pair\nbelongs to? so, if AUTHTYPE == md5 or password, authentication would be\nbased on all those users that 'belong' to that database? This could add\nthe ability for a database owner to easily add a user for his/her\ndatabase, in that if a user is created within a specific database by a\nnon-superuser account, it automatically assigns that user to that database?\n\nCREATE USER would have an extra, optional parameter of 'FOR <database>'?\n\n\n\n", "msg_date": "Tue, 30 Jul 2002 00:43:52 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Marc G. Fournier wrote:\n> On Mon, 29 Jul 2002, Bruce Momjian wrote:\n> \n> > Actually, it is replaced by encrypted pg_shadow by default in 7.3, and\n> > the new USER (users or groups) column in pg_hba.conf that will be in 7.3\n> > that can restrict based on user/group. This replaces the use of the\n> > secondary file for just usernames. You can now specify a filename in\n> > pg_hba.conf listing these. Would you look over the pg_hba.conf in CVS\n> > and tell me what additional things are needed.\n> \n> Wow, what a change ... some nice stuff in there, mind you, but unless I'm\n> missing something, you've thrown out some *major* functionality that we\n> had before :( And since I missed this, its quite possible that i am\n> missing something :)\n> \n> First and foremost in my mind ... 
how do you have two users in the system\n> with seperate passwords?\n\nNo, it doesn't seem possible now. I didn't know anyone was still using\nthat secondary password feature, and if they were, I thought they were\nusing only the 'username-list' version where no password was supplied,\nnot the username-crypted-password version.\n\nActually, it is hard to argue that having two users in pg_shadow, but\nhaving them as different people with different passwords makes much\nsense, though I can see why you would want to do that.\n\nThe idea of removing it was that it wasn't used much, and that the\nsyntax of an optional password file at the end was pretty weird,\nespecially now that we have a USER column.\n\nNot sure what to do now. We can re-add it but the code that did it is\ngone, and we now cache everything, so the code has to be refactored to\ncache that username/cryptpassword content.\n\nI actually added to code to make administration easier, but in your\ncase, I seem to have made it harder.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 23:59:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> First and foremost in my mind ... how do you have two users in the system\n> with seperate passwords? ...\n> since as soon as there are two 'bruce' users, only one can have a password\n\nUh, we've *never* supported \"two bruce users\" ... users have always been\ninstallation-wide. 
I am not sure what the notion of a database-owning\nuser means if user names are not of wider scope than databases.\n\nNo doubt we could redesign the system so that user names are local to a\ndatabase, and break a lot of existing setups in the process. But what's\nthe value? If you want separate usernames you can set up separate\npostmasters. If we change, and you don't want separate user names\nacross databases, you'll be out of luck.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 00:17:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ... " }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > First and foremost in my mind ... how do you have two users in the system\n> > with seperate passwords? ...\n> > since as soon as there are two 'bruce' users, only one can have a password\n> \n> Uh, we've *never* supported \"two bruce users\" ... users have always been\n> installation-wide. I am not sure what the notion of a database-owning\n> user means if user names are not of wider scope than databases.\n> \n> No doubt we could redesign the system so that user names are local to a\n> database, and break a lot of existing setups in the process. But what's\n> the value? If you want separate usernames you can set up separate\n> postmasters. If we change, and you don't want separate user names\n> across databases, you'll be out of luck.\n\nHe was being tricky by having different passwords for the same user on\neach database, so one user couldn't get into the other database, even\nthough it was the same name. He could actually have a user access\ndatabases 1,2,3 and another user with a different password access\ndatabases 4,5,6 because of the username/password files. Now, he can't\ndo that.\n\nHaving those file function as username lists is already implemented\nbetter in the new code. 
The question is whether using those secondary\npasswords is widespread enough that I need to get that into the code\ntoo. It was pretty confusing for users, so I am hesitant to re-add it,\nbut I hate for Marc to lose functionality he had in the past.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 00:33:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Uh, we've *never* supported \"two bruce users\" ...\n\n> He was being tricky by having different passwords for the same user on\n> each database, so one user couldn't get into the other database, even\n> though it was the same name.\n\nBut the system didn't realize they were two different users. (Try\ndropping just one of them.) And what if they happened to choose the\nsame password? I think this is a fragile kluge not a supported feature.\n\n> The question is whether using those secondary\n> passwords is widespread enough that I need to get that into the code\n> too. It was pretty confusing for users, so I am hesitant to re-add it,\n> but I hate for Marc to lose functionality he had in the past.\n\nI'd like to think of a better answer, not put back that same kluge.\nIdeas anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 00:39:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ... " }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > First and foremost in my mind ... how do you have two users in the system\n> > with seperate passwords? 
...\n> > since as soon as there are two 'bruce' users, only one can have a password\n> \n> Uh, we've *never* supported \"two bruce users\" ... users have always been\n> installation-wide. I am not sure what the notion of a database-owning\n> user means if user names are not of wider scope than databases.\n> \n> No doubt we could redesign the system so that user names are local to a\n> database, and break a lot of existing setups in the process. But what's\n> the value? If you want separate usernames you can set up separate\n> postmasters. If we change, and you don't want separate user names\n> across databases, you'll be out of luck.\n\nOn the topic of whether Marc gets extra consideration for feature\nrequests, here is a funny joke about Jerry Pournelle from Byte Magazine:\n\n\thttp://www.netfunny.com/rhf/jokes/95q1/jpreviews.html\n\nI love the helicopter tech support.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 00:41:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Uh, we've *never* supported \"two bruce users\" ...\n> \n> > He was being tricky by having different passwords for the same user on\n> > each database, so one user couldn't get into the other database, even\n> > though it was the same name.\n> \n> But the system didn't realize they were two different users. (Try\n> dropping just one of them.) And what if they happened to choose the\n> same password? I think this is a fragile kluge not a supported feature.\n> \n> > The question is whether using those secondary\n> > passwords is widespread enough that I need to get that into the code\n> > too. 
It was pretty confusing for users, so I am hesitant to re-add it,\n> > but I hate for Marc to lose functionality he had in the past.\n> \n> I'd like to think of a better answer, not put back that same kluge.\n> Ideas anyone?\n\nAgreed. A clear kludge. I just feel guilty because I removed it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 00:42:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Uh, we've *never* supported \"two bruce users\" ...\n> \n> > He was being tricky by having different passwords for the same user on\n> > each database, so one user couldn't get into the other database, even\n> > though it was the same name.\n> \n> But the system didn't realize they were two different users. (Try\n> dropping just one of them.) And what if they happened to choose the\n> same password? I think this is a fragile kluge not a supported feature.\n> \n> > The question is whether using those secondary\n> > passwords is widespread enough that I need to get that into the code\n> > too. It was pretty confusing for users, so I am hesitant to re-add it,\n> > but I hate for Marc to lose functionality he had in the past.\n> \n> I'd like to think of a better answer, not put back that same kluge.\n> Ideas anyone?\n\nI just thought a little more. 
Basically, I can't imagine any better\nanswer because they _should_ be the same user, and any trickery that\nallows the same user to have two different passwords for two different\ndatabase will appear to be bad design.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 00:43:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Tue, 30 Jul 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > First and foremost in my mind ... how do you have two users in the system\n> > with seperate passwords? ...\n> > since as soon as there are two 'bruce' users, only one can have a password\n>\n> Uh, we've *never* supported \"two bruce users\" ... users have always been\n> installation-wide. I am not sure what the notion of a database-owning\n> user means if user names are not of wider scope than databases.\n\nSorry, you mis-understand here ... pg_user/shadow only has one bruce user\nin it ... but the way it was up until now, with the password file in\npg_hba.conf, I could assign bruce with a different password for database1\nvs database2 ... effectively, have 'two bruce users' ...\n\n\n", "msg_date": "Tue, 30 Jul 2002 02:01:54 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ... " }, { "msg_contents": "On Tue, 30 Jul 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > First and foremost in my mind ... how do you have two users in the system\n> > > with seperate passwords? ...\n> > > since as soon as there are two 'bruce' users, only one can have a password\n> >\n> > Uh, we've *never* supported \"two bruce users\" ... 
users have always been\n> > installation-wide. I am not sure what the notion of a database-owning\n> > user means if user names are not of wider scope than databases.\n> >\n> > No doubt we could redesign the system so that user names are local to a\n> > database, and break a lot of existing setups in the process. But what's\n> > the value? If you want separate usernames you can set up separate\n> > postmasters. If we change, and you don't want separate user names\n> > across databases, you'll be out of luck.\n>\n> He was being tricky by having different passwords for the same user on\n> each database, so one user couldn't get into the other database, even\n> though it was the same name. He could actually have a user access\n> databases 1,2,3 and another user with a different password access\n> databases 4,5,6 because of the username/password files. Now, he can't\n> do that.\n>\n> Having those file function as username lists is already implemented\n> better in the new code. The question is whether using those secondary\n> passwords is widespread enough that I need to get that into the code\n> too. It was pretty confusing for users, so I am hesitant to re-add it,\n> but I hate for Marc to lose functionality he had in the past.\n\nYou seem to have done a nice job with the + and @ for 'maps' ... how about\nthird on that states that the map file has a username:password pair in it?\n\nI do like how the pg_hba.conf has changed, just don't like the lose of\nfunctionality :(\n\n", "msg_date": "Tue, 30 Jul 2002 02:03:28 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ..." 
}, { "msg_contents": "On Tue, 30 Jul 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Tom Lane wrote:\n> > >> Uh, we've *never* supported \"two bruce users\" ...\n> >\n> > > He was being tricky by having different passwords for the same user on\n> > > each database, so one user couldn't get into the other database, even\n> > > though it was the same name.\n> >\n> > But the system didn't realize they were two different users. (Try\n> > dropping just one of them.) And what if they happened to choose the\n> > same password? I think this is a fragile kluge not a supported feature.\n> >\n> > > The question is whether using those secondary\n> > > passwords is widespread enough that I need to get that into the code\n> > > too. It was pretty confusing for users, so I am hesitant to re-add it,\n> > > but I hate for Marc to lose functionality he had in the past.\n> >\n> > I'd like to think of a better answer, not put back that same kluge.\n> > Ideas anyone?\n>\n> Agreed. A clear kludge. I just feel guilty because I removed it.\n\ndon't feel guilty ... it *wasn't* the nicest implementation of a feature,\nbut it was definitely useful ...\n\n", "msg_date": "Tue, 30 Jul 2002 02:04:35 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Marc G. Fournier wrote:\n> You seem to have done a nice job with the + and @ for 'maps' ... how about\n> third on that states that the map file has a username:password pair in it?\n> \n> I do like how the pg_hba.conf has changed, just don't like the lose of\n> functionality :(\n\nOK, but the only logic for using it is your duplicate users. 
There\nwould be no other reason someone would use such a feature, right?\n\nI assume it would be MD5?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 01:07:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Tue, 30 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > You seem to have done a nice job with the + and @ for 'maps' ... how about\n> > third on that states that the map file has a username:password pair in it?\n> >\n> > I do like how the pg_hba.conf has changed, just don't like the lose of\n> > functionality :(\n>\n> OK, but the only logic for using it is your duplicate users. There\n> would be no other reason someone would use such a feature, right?\n\nHrmmm ... let's make this simpler ... there was a thread going around\nasking why MySQL vs PgSQL, and one of the answers had to do with ISPs ...\nfrom a 'shared host' point of view, what is done for v7.3 makes it very\ndifficult for an ISP to 'save resources' by running one instance, without\nthem starting to look like hotmail:\n\nbruce\nbruce001\nbruce002\nbruce003\n\nI'm lucky, I don't do virtual hosting, so I can use host/ip based\nrestrictions on our databases, with a select few requiring a password ...\nbut most out there do virtual hosting, which means that all the domains\nconnecting to the database look like they are coming from the same IP ...\n\nso, I can easily do something like:\n\nhost database bruce IP1\nhost database bruce IP2\n\nand know that client on IP1 can't look at client on IP2s database, even\nwith the same user ... 
but in a VH environment, you have:\n\nhost database bruce IP1\nhost database bruce IP1\n\nin the old system, I could make both password based, so that altho both\nbruce's were looking to come from the same IP, only the one with the right\npassword could connect, so Client on IP1's bruce wouldn't be able to look\nin Client on IP2's database, since he wouldn't have the required password\nto connect ...\n\n> I assume it would be MD5?\n\nI've been using DES, but MD5 would work too ...\n\n\n", "msg_date": "Tue, 30 Jul 2002 02:40:20 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Tue, 2002-07-30 at 10:40, Marc G. Fournier wrote:\n> On Tue, 30 Jul 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > > You seem to have done a nice job with the + and @ for 'maps' ... how about\n> > > third on that states that the map file has a username:password pair in it?\n> > >\n> > > I do like how the pg_hba.conf has changed, just don't like the lose of\n> > > functionality :(\n> >\n> > OK, but the only logic for using it is your duplicate users. There\n> > would be no other reason someone would use such a feature, right?\n> \n> Hrmmm ... let's make this simpler ... 
there was a thread going around\n> asking why MySQL vs PgSQL, and one of the answers had to do with ISPs ...\n> from a 'shared host' point of view, what is done for v7.3 makes it very\n> difficult for an ISP to 'save resources' by running one instance, without\n> them starting to look like hotmail:\n> \n> bruce\n> bruce001\n> bruce002\n> bruce003\n> \n> I'm lucky, I don't do virtual hosting, so I can use host/ip based\n> restrictions on our databases, with a select few requiring a password ...\n> but most out there do virtual hosting, which means that all the domains\n> connecting to the database look like they are coming from the same IP ...\n> \n> so, I can easily do something like:\n> \n> host database bruce IP1\n> host database bruce IP2\n> \n> and know that client on IP1 can't look at client on IP2s database, even\n> with the same user ... but in a VH environment, you have:\n> \n> host database bruce IP1\n> host database bruce IP1\n\nWhy can't you just name the user user@database ?\n\nIt should not be /too/ hard to explain to user bruce that his username\nat database accounts is bruce@accounts ?\n\n> in the old system, I could make both password based, so that altho both\n> bruce's were looking to come from the same IP, only the one with the right\n> password could connect, so Client on IP1's bruce wouldn't be able to look\n> in Client on IP2's database, since he wouldn't have the required password\n> to connect ...\n\nBut still, what happens if both bruces want to set their password to\n\"brucessecretpassword\" ?\n\n----------------\nHannu\n\n", "msg_date": "30 Jul 2002 12:49:52 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Marc G. Fournier wrote:\n> so, I can easily do something like:\n> \n> host database bruce IP1\n> host database bruce IP2\n> \n> and know that client on IP1 can't look at client on IP2s database, even\n> with the same user ... 
but in a VH environment, you have:\n> \n> host database bruce IP1\n> host database bruce IP1\n> \n> in the old system, I could make both password based, so that altho both\n> bruce's were looking to come from the same IP, only the one with the right\n> password could connect, so Client on IP1's bruce wouldn't be able to look\n> in Client on IP2's database, since he wouldn't have the required password\n> to connect ...\n> \n> > I assume it would be MD5?\n> \n> I've been using DES, but MD5 would work too ...\n\nOK, I have one idea. Right now the file format for usernames can be:\n\n\tuser, user, \"user\"\nor\n\tuser user \"user\"\nor\n\tuser\n\tuser\n\t\"user\"\n\nso we don't really have columns in the file. What we could do is to\nallow the username to be specified as \"user:pass\" and the \"pass\" could\nbe in plaintext or md5. You could actually specify the \"pass\" in\npg_hba.conf or in a secondary file. The code currently makes no\ndistinction between them.\n\nThis does make the code a little more complex, but it is documenting\nthis that be cause the most confusion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 11:55:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I have one idea. Right now the file format for usernames can be:\n\nBut this is just reimplementing the original functionality, which was\nquite broken IMHO. The setup Marc is describing doesn't really have\nusers per-database, it's only faking it. 
And what if he wants to use\nsome non-password-based auth method, like IDENT?\n\nI am wondering if we could have a configure-time or install-time\noption to make pg_shadow (and pg_group I guess) be database-local\ninstead of installation-wide. I am not sure about the implications\nof this --- in particular, is the notion of a database owner still\nmeaningful? How could the postmaster cope with it (I'd guess we'd\nneed multiple flat files, one per DB, for the postmaster to read)?\n\nIf we're going to do work to support this concept, then let's really\nsupport it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 12:24:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ... " }, { "msg_contents": "Hi,\n\n> I am wondering if we could have a configure-time or install-time\n> option to make pg_shadow (and pg_group I guess) be database-local\n> instead of installation-wide. I am not sure about the implications\n> of this --- in particular, is the notion of a database owner still\n> meaningful? How could the postmaster cope with it (I'd guess we'd\n> need multiple flat files, one per DB, for the postmaster to read)?\n\nI realy like the idea, but how would you handle the postgres (super)user in\nthis scenario? One global postgres user, or a separate one for each db? In\nthe last case, the DB owner would be the DB-specific postgres user. A global\nsuperuser would still be needed for backups and other maintainance tasks...\n\nSander\n\n\n", "msg_date": "Tue, 30 Jul 2002 20:18:49 +0200", "msg_from": "\"Sander Steffann\" <sander@steffann.nl>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ... " }, { "msg_contents": "I tried to understand what causes\ntoo many pgsql idle processes. 
Can\npostmaster automatically aged and\ncleaning up those unused idle process?\n\nIs there a catalog to track those\npsql processes - what their functions, who\nissues, etc.?\n\nthanks.\n\njohnl \n", "msg_date": "Wed, 31 Jul 2002 11:12:30 -0500", "msg_from": "\"John Liu\" <johnl@synthesys.com>", "msg_from_op": false, "msg_subject": "many idle processes" }, { "msg_contents": "\"John Liu\" <johnl@synthesys.com> writes:\n\n> I tried to understand what causes\n> too many pgsql idle processes. Can\n> postmaster automatically aged and\n> cleaning up those unused idle process?\n\nThose processes are attached to open client connections. If you don't\nlike them, change your client to close connections when it's not using\nthem, but your app will be slower since creating a new connection\n(and backend process) takes some time.\n\n> Is there a catalog to track those\n> psql processes - what their functions, who\n> issues, etc.?\n\nThere is one backend process per open client connection, plus the\npostmaster, which handles creating new connections.\n\n-Doug\n", "msg_date": "31 Jul 2002 12:24:46 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: many idle processes" }, { "msg_contents": "> Is there a catalog to track those\n> psql processes - what their functions, who\n> issues, etc.?\n> \n> thanks.\n> \n> johnl \n\nIf you have it enabled in your postgresql.conf, just go:\n\nselect * from pg_stat_activity;\n\nChris\n\n", "msg_date": "Thu, 1 Aug 2002 09:41:15 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: many idle processes" } ]
[ { "msg_contents": "I'll be on vacation for the next two weeks. Back to work on August 12.\n\nI might read some eMail every now and then. But don't expect much, I\nhave one of my sons visiting from Germany.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Fri, 26 Jul 2002 10:11:47 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Going on vacation" } ]
[ { "msg_contents": "Hi,\n\nWe have a query \"select count(count(*)) from test group by\ntrunc(test_date)\". This works fine with Oracle but when moving to postgres I\nchanged it to \"select count(count(*)) from test group by date_trunc('day',\ntest_date)\" but I get the following error\n\nERROR: Aggregate function calls may not be nested\n\nCan some one help me...\n\nThanks\nYuva\n", "msg_date": "Fri, 26 Jul 2002 13:03:40 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "regd count(count(*)) in group by" }, { "msg_contents": "Try this:\n\nSELECT count(*)\n FROM (\n\tSELECT count(*)\n\tFROM test\n\tGROUP BY date_trunc('day', test_date)\n\t) as qry;\n\n\nOn Fri, 2002-07-26 at 16:03, Yuva Chandolu wrote:\n> Hi,\n> \n> We have a query \"select count(count(*)) from test group by\n> trunc(test_date)\". This works fine with Oracle but when moving to postgres I\n> changed it to \"select count(count(*)) from test group by date_trunc('day',\n> test_date)\" but I get the following error\n> \n> ERROR: Aggregate function calls may not be nested\n> \n> Can some one help me...\n> \n> Thanks\n> Yuva\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n", "msg_date": "26 Jul 2002 16:50:31 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: regd count(count(*)) in group by" }, { "msg_contents": "On 26 Jul 2002, Rod Taylor wrote:\n\n> Try this:\n>\n> SELECT count(*)\n> FROM (\n> \tSELECT count(*)\n> \tFROM test\n> \tGROUP BY date_trunc('day', test_date)\n> \t) as qry;\n\nOr this:\n\n\tSELECT COUNT(*)\n\tFROM (\n\t\tSELECT DISTINCT(date_trunc('day', test_date))\n\t\tFROM test\n\t);\n\nMatthew.\n\n", "msg_date": "Fri, 26 Jul 2002 23:09:08 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: regd 
count(count(*)) in group by" } ]
[ { "msg_contents": "On SGI multiprocessor machines, I suspect that a spinlock\nimplementation of LWLockAcquire would give better performance than\nusing IPC semaphores. Is there any specific reason that a spinlock\ncould not be used in this context?\n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n", "msg_date": "Fri, 26 Jul 2002 21:46:30 -0400 (EDT)", "msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>", "msg_from_op": true, "msg_subject": "Question about LWLockAcquire's use of semaphores instead of spinlocks" }, { "msg_contents": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com> writes:\n> On SGI multiprocessor machines, I suspect that a spinlock\n> implementation of LWLockAcquire would give better performance than\n> using IPC semaphores. Is there any specific reason that a spinlock\n> could not be used in this context?\n\nAre you confusing LWLockAcquire with TAS spinlocks?\n\nIf you're saying that we don't have an implementation of TAS for\nSGI hardware, then feel free to contribute one. If you are wanting to\nreplace LWLocks with spinlocks, then you are sadly mistaken, IMHO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Jul 2002 02:26:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead of\n\tspinlocks" }, { "msg_contents": "Tom Lane writes:\n> \n> \n> \"Robert E. Bruccoleri\" <bruc@stone.congenomics.com> writes:\n> > On SGI multiprocessor machines, I suspect that a spinlock\n> > implementation of LWLockAcquire would give better performance than\n> > using IPC semaphores. 
Is there any specific reason that a spinlock\n> > could not be used in this context?\n> \n> Are you confusing LWLockAcquire with TAS spinlocks?\n\nNo.\n\n> If you're saying that we don't have an implementation of TAS for\n> SGI hardware, then feel free to contribute one. If you are wanting to\n> replace LWLocks with spinlocks, then you are sadly mistaken, IMHO.\n\nThis touches on my question. Why am I mistaken? I don't understand.\n\nBTW, about 5 years ago, I rewrote the TAS spinlocks for the\nSGI platform to make it work correctly. The current implementation\nis fine.\n\n+-----------------------------+------------------------------------+ \n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n", "msg_date": "Sat, 27 Jul 2002 23:45:07 -0400 (EDT)", "msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>", "msg_from_op": true, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead of\n\tspinlocks" }, { "msg_contents": "Hi Bob:\nWe're have been working with an sproc version of postgres and it has improve\nperformance over a NUMA3 origin 3000 due to IRIX implements round_robin by\ndefault on memory placement instead of first touch as it did on fork. We're\nbeen wondering about replacing IPC shmem with a shared arena to help\nperformance improve on IRIX. I don't know if people here in postgres are\ninterested on specifical ports but it could help you improve your\nperformance.\nRegards\n----- Original Message -----\nFrom: \"Robert E.
Bruccoleri\" <bruc@stone.congenomics.com> writes:\n> > > On SGI multiprocessor machines, I suspect that a spinlock\n> > > implementation of LWLockAcquire would give better performance than\n> > > using IPC semaphores. Is there any specific reason that a spinlock\n> > > could not be used in this context?\n> >\n> > Are you confusing LWLockAcquire with TAS spinlocks?\n>\n> No.\n>\n> > If you're saying that we don't have an implementation of TAS for\n> > SGI hardware, then feel free to contribute one. If you are wanting to\n> > replace LWLocks with spinlocks, then you are sadly mistaken, IMHO.\n>\n> This touches on my question. Why am I mistaken? I don't understand.\n>\n> BTW, about 5 years ago, I rewrote the TAS spinlocks for the\n> SGI platform to make it work correctly. The current implementation\n> is fine.\n>\n> +-----------------------------+------------------------------------+\n> | Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n> | P.O. Box 314 | URL: http://www.congen.com/~bruc |\n> | Pennington, NJ 08534 | |\n> +-----------------------------+------------------------------------+\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n\n", "msg_date": "Sun, 28 Jul 2002 12:22:53 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead of\n\tspinlocks" }, { "msg_contents": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com> writes:\n> Tom Lane writes:\n>> If you're saying that we don't have an implementation of TAS for\n>> SGI hardware, then feel free to contribute one. If you are wanting to\n>> replace LWLocks with spinlocks, then you are sadly mistaken, IMHO.\n\n> This touches on my question. Why am I mistaken? I don't understand.\n\nBecause we just got done replacing spinlocks with LWLocks ;-). I don't\nbelieve that reverting that change will improve matters. 
It will\ncertainly hurt on SMP machines, and I believe that it would at best\nbe a breakeven proposition on uniprocessors. See the discussions last\nfall that led up to development of the LWLock mechanism.\n\nThe problem with TAS spinlocks is that they are suitable only for\nimplementing locks that will be held for *very short* periods, ie,\nactual contention is rare. Over time we had allowed that mechanism to\nbe abused for locking fairly large and complex shared-memory data\nstructures (eg, the lock manager, the buffer manager). The next step\nup, a lock-manager lock, is very expensive and certainly can't be used\nby the lock manager itself anyway. LWLocks are an intermediate\nmechanism that is marginally more expensive than a spinlock but behaves\nmuch more gracefully in the presence of contention. LWLocks also allow\nus to distinguish shared and exclusive lock modes, thus further reducing\ncontention in some cases.\n\nBTW, now that I reread the title of your message, I wonder if you\nhaven't just misunderstood what's happening in lwlock.c. There is no\nIPC semaphore call in the fast (no-contention) path of control. A\nsemaphore call occurs only when we are forced to wait, ie, yield the\nprocessor. Substituting a spinlock for that cannot improve matters;\nit would essentially result in wasting the remainder of our timeslice\nin a busy-loop, rather than yielding the CPU at once to some other\nprocess that can get some useful work done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jul 2002 12:58:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead of\n\tspinlocks" }, { "msg_contents": "Dear Tom,\n\tThank you for the explanation. I did not understand what was\ngoing on in lwlock.c.\n\tMy systems are all SGI Origins having between 8 and 32\nprocessors, and I've been running PostgreSQL on them for about 5\nyears. 
These machines do provide a number of good mechanisms for high\nperformance shared memory parallelism that I don't think are found\nelsewhere. I wish that I had the time to understand and tune\nPostgreSQL to run really well on them.\n\tI have a question for you and other developers with regard to\nmy SGI needs. If I made a functional Origin 2000 system available to\nyou with hardware support, would the group be willing to tailor the\nSGI port for better performance?\n\n\t\t\t\t\tSincerely,\n\t\t\t\t\tBob\n> \n> \n> \"Robert E. Bruccoleri\" <bruc@stone.congenomics.com> writes:\n> > Tom Lane writes:\n> >> If you're saying that we don't have an implementation of TAS for\n> >> SGI hardware, then feel free to contribute one. If you are wanting to\n> >> replace LWLocks with spinlocks, then you are sadly mistaken, IMHO.\n> \n> > This touches on my question. Why am I mistaken? I don't understand.\n> \n> Because we just got done replacing spinlocks with LWLocks ;-). I don't\n> believe that reverting that change will improve matters. It will\n> certainly hurt on SMP machines, and I believe that it would at best\n> be a breakeven proposition on uniprocessors. See the discussions last\n> fall that led up to development of the LWLock mechanism.\n> \n> The problem with TAS spinlocks is that they are suitable only for\n> implementing locks that will be held for *very short* periods, ie,\n> actual contention is rare. Over time we had allowed that mechanism to\n> be abused for locking fairly large and complex shared-memory data\n> structures (eg, the lock manager, the buffer manager). The next step\n> up, a lock-manager lock, is very expensive and certainly can't be used\n> by the lock manager itself anyway. LWLocks are an intermediate\n> mechanism that is marginally more expensive than a spinlock but behaves\n> much more gracefully in the presence of contention. 
LWLocks also allow\n> us to distinguish shared and exclusive lock modes, thus further reducing\n> contention in some cases.\n> \n> BTW, now that I reread the title of your message, I wonder if you\n> haven't just misunderstood what's happening in lwlock.c. There is no\n> IPC semaphore call in the fast (no-contention) path of control. A\n> semaphore call occurs only when we are forced to wait, ie, yield the\n> processor. Substituting a spinlock for that cannot improve matters;\n> it would essentially result in wasting the remainder of our timeslice\n> in a busy-loop, rather than yielding the CPU at once to some other\n> process that can get some useful work done.\n> \n> \t\t\tregards, tom lane\n> \n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n", "msg_date": "Sun, 28 Jul 2002 20:46:20 -0400 (EDT)", "msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>", "msg_from_op": true, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead of\n\tspinlocks" }, { "msg_contents": "Dear Luis,\n\tI would be very interested. Replacing the IPC shared memory\nwith an arena make a lot of sense. --Bob\n\n> \n> Hi Bob:\n> We're have been working with an sproc version of postgres and it has improve\n> performance over a NUMA3 origin 3000 due to IRIX implements round_robin by\n> default on memory placement instead of first touch as it did on fork. We're\n> been wondering about replacing IPC shmem with a shared arena to help\n> performance improve on IRIX. I don't know if people here in postgres are\n> interested on specifical ports but it could help you improve your\n> performance.\n> Regards\n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. 
Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n", "msg_date": "Sun, 28 Jul 2002 20:48:01 -0400 (EDT)", "msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>", "msg_from_op": true, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead of\n\tspinlocks" }, { "msg_contents": "\n----- Original Message -----\nFrom: \"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nCc: <bruc@acm.org>; <tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>\nSent: Monday, July 29, 2002 2:48 AM\nSubject: Re: [HACKERS] Question about LWLockAcquire's use of semaphores\ninstead of spinlocks\n\n\n> Dear Luis,\n> I would be very interested. Replacing the IPC shared memory\n> with an arena make a lot of sense. --Bob\n>\nOn old PowerChallenge postgres works really fine, but in new NUMA\narchitectures postgres works so badly, as we have known, forked backends\ndon't allow IRIX to manage memory as it would be desired. Leaving First\nTouch placement algorithm means that almost every useful data is placed on\nthe first node the process is run. Trying to use more than one node with\nthis schema results in a false sharing, secondary cache hits ratio drops\nbelow 85% due to latency on a second node is about 6 times bigger than in\nthe first node even worse if you have more than 4 nodes. All of this causes\nthat you're almost only working with a node (4 cpus in origin 3000).\nImplementing Round-Robin placement algorithms causes that memory pages are\nplaced each one in one node, this causes that all nodes have the same chance\nto work with some pages locally and some pages remotely. 
The more the number\nof nodes, the more advantage you can take with round-robin.\nYou can enable round-robin recompiling postgres, setting before the\nenviroment variable _DSM_ROUND_ROBIN=TRUE\nit works fine with fork(), and it is not necessary using sprocs.\nChanging IPC shared memory for a shared arena could improve performance\nbecause it's the native shared segment on IRIX. it's something we're willing\nto do, but by now it is only a project.\nHope it helps\n\n\n\n", "msg_date": "Mon, 29 Jul 2002 11:50:42 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead of\n\tspinlocks" }, { "msg_contents": "Robert E. Bruccoleri wrote:\n> Dear Tom,\n> \tThank you for the explanation. I did not understand what was\n> going on in lwlock.c.\n\nYes, as Tom said, using the pre-7.2 code on SMP machines, if one backend\nhad a spinlock, the other backend would TAS loop trying to get the lock\nuntil its timeslice ended or the other backend released the lock. Now,\nwe TAS, then sleep on a semaphore and get woken up when the first\nbackend releases the lock. We worked hard on that logic, I can tell you\nthat and it was a huge discussion topic on the Fall of 2001.\n\n> \tMy systems are all SGI Origins having between 8 and 32\n> processors, and I've been running PostgreSQL on them for about 5\n> years. These machines do provide a number of good mechanisms for high\n> performance shared memory parallelism that I don't think are found\n> elsewhere. I wish that I had the time to understand and tune\n> PostgreSQL to run really well on them.\n> \tI have a question for you and other developers with regard to\n> my SGI needs. 
If I made a functional Origin 2000 system available to\n> you with hardware support, would the group be willing to tailor the\n> SGI port for better performance?\n\nWe would have to understand how the SGI code is better than our existing\ncode on SMP machines.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 23:38:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead" }, { "msg_contents": "> We would have to understand how the SGI code is better than our existing\n> code on SMP machines.\n\nthere is a big problem with postgres on SGI NUMA architectures, on UMA\nsystems postgres works fine, but NUMA Origins need a native shared memory\nmanagement. It scales fine over old challenges, but scales very poorly on\nNUMA architectures, giving fine speed-up only within a single node. 
For more\nthan one node throughput drops greatly, implementing Round-robin memory\nplacement algorithms it gets a bit better, changing from forks to native\nsprocs(medium-weighted processes) makes it work better, but not good enough,\nif you want postgres to run fine on this machines I think (it's not tested\nyet) it would be neccesary to implement native shared arenas instead of IPC\nshared memory in order to let IRIX make a fine load-balance.\n\nI take advantage of this message to say that there is a cuple of things that\nwe have to insert on FAQ-IRIX about using 32 bits or 64 bits objects,\nbecause it is a known issue that using 32 bit objects on IRIX do not allow\nto use more than 1,2 Gb of shared memory because system management is unable\nto find a single segment of this size.\n\nRegards\n\n\n", "msg_date": "Tue, 30 Jul 2002 12:17:52 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead" }, { "msg_contents": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> if you want postgres to run fine on this machines I think (it's not tested\n> yet) it would be neccesary to implement native shared arenas instead of IPC\n> shared memory in order to let IRIX make a fine load-balance.\n\nIn CVS tip, the direct dependencies on SysV shared-memory calls have\nbeen split into a separate file, src/backend/port/sysv_shmem.c. It\nwould not be difficult to make a crude port to some other shared-memory\nAPI, if you want to do some performance testing.\n\nA not-so-crude port would perhaps be more difficult. One thing that we\ndepend on is being able to detect whether old backends are still running\nin a particular database cluster (this could happen if the postmaster\nitself crashes, leaving orphaned backends behind). 
Currently this is\ndone by recording the shmem key in the postmaster's lockfile, and then\nduring restart looking to see if any process is still attached to that\nshmem segment. So we are relying on the fact that SysV shmem segments\n(a) are not anonymous, and (b) accept a syscall to find out whether any\nother processes are attached to them. If the shared-memory API you want\nto use doesn't support similar capabilities, then there's a problem.\nYou might be able to think of a different way to provide the same kind\nof interlock, though.\n\n> I take advantage of this message to say that there is a cuple of things that\n> we have to insert on FAQ-IRIX about using 32 bits or 64 bits objects,\n\nSend a patch ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 09:52:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead " }, { "msg_contents": "----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: <bruc@acm.org>\nCc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>\n>\n> We would have to understand how the SGI code is better than our existing\n> code on SMP machines.\n>\n\nI've been searching for data from SGI's Origin presentation to illustrate\nwhat am I saying, this graph only covers Memory bandwith, but take present\nthat as distance between nodes increase, memory access latency is also\nincreased:", "msg_date": "Tue, 30 Jul 2002 17:23:53 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Question about LWLockAcquire's use of semaphores instead" } ]
[ { "msg_contents": "\n1234567890\nHi!\nI write function on plpgsql and have some troubles:\n\nI have table \"owe\" : uid int4, date int4, cost float4;\nI want to select into \"owe_old\" : DECLARE owe_old float4;\n\ncur_date := date_part(''year'', now()) || ''-'' || date_part(''month'', now()) || ''-01'';\n\twe have cur_date = '2002-05-01' for example..\n\nAnd I want to :\n\tSELECT INTO owe_old sum(cost) FROM owe WHERE date > int4(ABSTIME ''''cur_date''''');\n\nAnd I have error:\nNOTICE: line 24 at select into variables\nERROR: parser: parse error at or near \"$1\"\n\nHow I can substitute \"cur_date\" into SQL statement..?\nOr how I can get sum(cost) in different way..?\n\nThank you!\n\n\n-- \n--\nBest regards, KVN.\n PHP4You (<http://php4you.kiev.ua/>)\n PEAR [ru] (<http://pear.php.net/manual/ru/>)\n mailto:kvn@php.net\n", "msg_date": "Sat, 27 Jul 2002 15:38:50 +0000 (UTC)", "msg_from": "\"Vitaliy N. Kravchenko\" <kvn@phbme.ntu-kpi.kiev.ua>", "msg_from_op": true, "msg_subject": "PL/PGSQL q?" } ]
[ { "msg_contents": "The field pg_language.lanispl seems to have several different meanings:\n\n1. This is a user-defined language.\n\n2. This language may be dropped.\n\n3. You may define a trigger using a function defined in this language (or\n in C or in internal).\n\n4. Functions defined in this language may be called. (see fmgr.c)\n\n5. This language needs to be dumped.\n\n(1) and (2) are now taken care of by the new dependency system. (3)\nseems to aim at disallowing trigger functions in SQL. Perhaps this should\nbe made explicit instead of taking this backdoor approach. I don't\nunderstand what (4) is intending to do. (5) is not really needed if we\ntake pg_dump's current approach of associating a language with the\nnamespace of the underlying function.\n\nDoes anyone have any knowledge about this attribute? Can it be removed?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 27 Jul 2002 20:00:33 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "What exactly does lanispl mean?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The field pg_language.lanispl seems to have several different meanings:\n> 1. This is a user-defined language.\n> 2. This language may be dropped.\n> 3. You may define a trigger using a function defined in this language (or\n> in C or in internal).\n> 4. Functions defined in this language may be called. (see fmgr.c)\n> 5. This language needs to be dumped.\n\n> (1) and (2) are now taken care of by the new dependency system. (3)\n> seems to aim at disallowing trigger functions in SQL. 
Perhaps this should\n> be made explicit instead of taking this backdoor approach.\n\nSeems unnecessary, since (IIRC) we already have checks that SQL\nfunctions can't return opaque while triggers must do so.\n\n> I don't understand what (4) is intending to do.\n\nfmgr.c uses lanispl to indicate that the function should be called\nvia the language's handler function, instead of directly.\n\nI suppose we could remove lanispl if we made the convention that a\nnon-PL language must have InvalidOid in lanplcallfoid; then the\ntest whether to use a handler is \"is lanplcallfoid not zero\" rather\nthan looking at a different column. This does seem cleaner: use\na handler if there is one.\n\nI agree that the other uses of the flag are bogus and need to be\nrethought.\n\n> (5) is not really needed if we\n> take pg_dump's current approach of associating a language with the\n> namespace of the underlying function.\n\nWell, do you like that? It was only a quick hack to get pg_dump\nrunning with schemas; I'm not convinced we really want it to act\nthat way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Jul 2002 19:02:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What exactly does lanispl mean? " }, { "msg_contents": "Tom Lane writes:\n\n> > (5) is not really needed if we\n> > take pg_dump's current approach of associating a language with the\n> > namespace of the underlying function.\n>\n> Well, do you like that? It was only a quick hack to get pg_dump\n> running with schemas; I'm not convinced we really want it to act\n> that way.\n\nWe should probably put the languages into a schema.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 30 Jul 2002 18:38:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: What exactly does lanispl mean? " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Well, do you like that? 
It was only a quick hack to get pg_dump\n>> running with schemas; I'm not convinced we really want it to act\n>> that way.\n\n> We should probably put the languages into a schema.\n\nWell, that would be the cleanest answer but it's surely overkill\ngiven the small number of languages (and the fact that superuser\nprivilege is required to install one). I don't feel like doing\nthat much work just to make pg_dump's life a little easier...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 12:39:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What exactly does lanispl mean? " } ]
[ { "msg_contents": "The attached patch implements START TRANSACTION, per SQL99. The\nfunctionality of the command is basically identical to that of\nBEGIN; it just accepts a few extra options (only one of which\nPostgreSQL currently implements), and is standards-compliant.\nThe patch includes a simple regression test and documentation.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Sat, 27 Jul 2002 16:05:20 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "START TRANSACTION" }, { "msg_contents": "On Sat, Jul 27, 2002 at 04:05:20PM -0400, Neil Conway wrote:\n> The attached patch implements START TRANSACTION, per SQL99.\n\nOh, forgot to mention two things: I also removed the grammar's\n\"support\" for chained transactions, since it was basically non-\nexistent (I don't see the advantage of producing an \"chained\ntransactions not support\" error rather than a generic one).\nI also renamed the 'opt_level' production to 'iso_level', since\nit's not \"optional\".\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sat, 27 Jul 2002 16:19:08 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: START TRANSACTION" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNeil Conway wrote:\n> The attached patch implements START TRANSACTION, per SQL99.
The\n> functionality of the command is basically identical to that of\n> BEGIN; it just accepts a few extra options (only one of which\n> PostgreSQL currently implements), and is standards-compliant.\n> The patch includes a simple regression test and documentation.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 15:44:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: START TRANSACTION" }, { "msg_contents": "Neil Conway writes:\n\n> The attached patch implements START TRANSACTION, per SQL99. The\n> functionality of the command is basically identical to that of\n> BEGIN; it just accepts a few extra options (only one of which\n> PostgreSQL currently implements), and is standards-compliant.\n> The patch includes a simple regression test and documentation.\n\nVery nice patch, but I don't think we need the regression test. It's a\nbit too simple.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 30 Jul 2002 23:43:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: START TRANSACTION" }, { "msg_contents": "Peter Eisentraut wrote:\n> Neil Conway writes:\n> \n> > The attached patch implements START TRANSACTION, per SQL99.
The\n> > functionality of the command is basically identical to that of\n> > BEGIN; it just accepts a few extra options (only one of which\n> > PostgreSQL currently implements), and is standards-compliant.\n> > The patch includes a simple regression test and documentation.\n> \n> Very nice patch, but I don't think we need the regression test. It's a\n> bit too simple.\n\nRoger.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 18:32:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: START TRANSACTION" }, { "msg_contents": "Peter Eisentraut dijo: \n\n> Neil Conway writes:\n> \n> > The attached patch implements START TRANSACTION, per SQL99. The\n> > functionality of the command is basically identical to that of\n> > BEGIN; it just accepts a few extra options (only one of which\n> > PostgreSQL currently implements), and is standards-compliant.\n> > The patch includes a simple regression test and documentation.\n> \n> Very nice patch, but I don't think we need the regression test. It's a\n> bit too simple.\n\nThat makes me wonder: should I produce some regression tests for\nCLUSTER?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Investigación es lo que hago cuando no sé lo que estoy haciendo\"\n(Wernher von Braun)\n\n", "msg_date": "Sun, 4 Aug 2002 00:07:49 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] START TRANSACTION" }, { "msg_contents": "\n[ Regression test removed, per Peter.]\n\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nNeil Conway wrote:\n> The attached patch implements START TRANSACTION, per SQL99.
The\n> functionality of the command is basically identical to that of\n> BEGIN; it just accepts a few extra options (only one of which\n> PostgreSQL currently implements), and is standards-compliant.\n> The patch includes a simple regression test and documentation.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Aug 2002 00:31:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: START TRANSACTION" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> That makes me wonder: should I produce some regression tests for\n> CLUSTER?\n\nIt'd be a good thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Aug 2002 00:37:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] START TRANSACTION " }, { "msg_contents": "Tom Lane dijo: \n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > That makes me wonder: should I produce some regression tests for\n> > CLUSTER?\n> \n> It'd be a good thing.\n\nI'm attaching cluster.sql and cluster.out to be added to the regression\ntests.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.\n That's because in Europe they call me by name, and in the US by value!\"", "msg_date": "Mon, 5 Aug 2002 02:54:12 -0400 (CLT)", "msg_from": "Alvaro Herrera 
<alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "CLUSTER regression test" } ]
[ { "msg_contents": "I'd like to add the ability to use a sub-select in a CHECK constraint.\nCan someone elaborate on what changes would be needed to support\nthis? From a (very) brief look at execMain.c, ExecEvalExpr() seems\nto support subplans already, so I wouldn't *guess* it would be too\ninvolved, but I'd appreciate a more informed assessment...\n\nThanks in advance,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sat, 27 Jul 2002 17:03:00 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "sub-selects in CHECK" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> I'd like to add the ability to use a sub-select in a CHECK constraint.\n> Can someone elaborate on what changes would be needed to support\n> this?\n\nDefine what you think should happen when the other rows referenced\nby the subselect change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Jul 2002 19:07:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sub-selects in CHECK " }, { "msg_contents": "On Sat, Jul 27, 2002 at 07:07:13PM -0400, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > I'd like to add the ability to use a sub-select in a CHECK constraint.\n> > Can someone elaborate on what changes would be needed to support\n> > this?\n> \n> Define what you think should happen when the other rows referenced\n> by the subselect change.\n\nGood point -- but given that SQL99 specifically mentions that this\nfunctionality should be available (Feature 671, \"Subqueries in\nCHECK constraints\"), there must be some reasonable behavior\nadopted by another DBMS...\n\nIn any case, there are already plenty of ways to create non-sensical\nconstraints. For example:\n\nCHECK ( foo < random() )\n\nor even:\n\nCREATE FUNCTION check_func() returns int as 'select ...'
language 'sql';\n\nALTER TABLE foo ADD CONSTRAINT check_x CHECK (x > check_func() );\n\n(which is effectively a sub-select with a different syntax)\n\nSo the restrictions \"no sub-selects or aggregates in a CHECK constraint\"\nis quite insufficient, if we actually want to prevent an application\ndeveloper from creating dubious constraints.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sat, 27 Jul 2002 19:36:27 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: sub-selects in CHECK" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> Good point -- but given that SQL99 specifically mentions that this\n> functionality should be available (Feature 671, \"Subqueries in\n> CHECK constraints\"), there must be some reasonable behavior\n> adopted by another DBMS...\n\nIt's effectively equivalent to a database-wide assertion, which is\nanother SQL feature that we don't support.\n\n> In any case, there are already plenty of ways to create non-sensical\n> constraints.\n\nCertainly, but this one isn't really ill-defined, it's just very\ndifficult to support in any acceptably-efficient manner.\n\nIf you want to cheat horribly, ie have the condition checked only when a\nsingle-row constraint would be checked, then you can stick the subselect\ninside a function call. I don't think we are really adding any\nfunctionality unless we can do better than that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Jul 2002 20:48:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sub-selects in CHECK " } ]
[ { "msg_contents": "It seems JDBC in current does not compile with jdk1.3.0. Is this a\nintended change or am I missing something?\n\n\t :\n\t :\n [javac] /usr/local/src/pgsql/current/pgsql/src/interfaces/jdbc/org/postgresql/jdbc2/Jdbc2ResultSet.java:13: getStatement() in org.postgresql.jdbc2.AbstractJdbc2ResultSet cannot implement getStatement() in java.sql.ResultSet; attempting to use incompatible return type\n [javac] found : org.postgresql.jdbc2.Statement\n [javac] required: java.sql.Statement\n [javac] public class Jdbc2ResultSet extends org.postgresql.jdbc2.AbstractJdbc2ResultSet implements java.sql.ResultSet\n [javac] ^\n [javac] Note: Some input files use or override a deprecated API.\n [javac] Note: Recompile with -deprecation for details.\n [javac] 11 errors\n--\nTatsuo Ishii\n", "msg_date": "Mon, 29 Jul 2002 11:12:24 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "JDBC does not compile" }, { "msg_contents": "Make sure you have a clean copy of the source, Barry has done an\nextensive re-organization to remove the duplicate code between jdbc1,\nand jdbc2, to prepare for jdbc3. \n\nIt compiles clean on my machine using jdk1.3.1\n\nDave\nOn Sun, 2002-07-28 at 22:12, Tatsuo Ishii wrote:\n> It seems JDBC in current does not compile with jdk1.3.0.
Is this a\n> intended change or am I missing something?\n> \n>:\n>:\n> [javac] /usr/local/src/pgsql/current/pgsql/src/interfaces/jdbc/org/postgresql/jdbc2/Jdbc2ResultSet.java:13: getStatement() in org.postgresql.jdbc2.AbstractJdbc2ResultSet cannot implement getStatement() in java.sql.ResultSet; attempting to use incompatible return type\n> [javac] found : org.postgresql.jdbc2.Statement\n> [javac] required: java.sql.Statement\n> [javac] public class Jdbc2ResultSet extends org.postgresql.jdbc2.AbstractJdbc2ResultSet implements java.sql.ResultSet\n> [javac] ^\n> [javac] Note: Some input files use or override a deprecated API.\n> [javac] Note: Recompile with -deprecation for details.\n> [javac] 11 errors\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n\n", "msg_date": "29 Jul 2002 05:28:26 -0400", "msg_from": "Dave Cramer <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: JDBC does not compile" }, { "msg_contents": "> Make sure you have a clean copy of the source, Barry has done an\n> extensive re-organization to remove the duplicate code between jdbc1,\n> and jdbc2, to prepare for jdbc3. \n> \n> It compiles clean on my machine using jdk1.3.1\n\nIt turns out that my class path did bad thing. After unset the class\npath, I succeeded in compiling. Thanks.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 29 Jul 2002 22:25:37 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: JDBC does not compile" } ]
[ { "msg_contents": "Hi,\n\nI've just managed to recover from a fun Postgres lock up experience!\n\nBasically Postgres seemed to have hung, with a stack of idle postmaster\nprocesses. There was an idle VACUUM process and an idle CHECKPOINT process\nrunning as well.\n\nI finally managed to get it restarted (after a reboot), then I ran vacuumdb\nagain to test it, ctrl-c'd it while it was running, and then ps ax was\nshowing an idle VACUUM process again!\n\nI restarted the computer again and then vacuumdb ran to completion.\n\nSome of my problems were related to the fact that we were using daemontools\nto monitor the postgres process. I've taken that off now as I think it was\nmore trouble than it was worth.\n\nThe vacuum analyze that caused the original crash I think was in pg_amop. I\nhave a copy of the data dir from then if anyone's interested, and I can dig\ninto the logfiles a bit more to try to give more info...\n\nChris\n\n", "msg_date": "Mon, 29 Jul 2002 10:36:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Postgres Lock Up Fun" } ]
[ { "msg_contents": "Hi all\nAs I understand every time there is a request to postgres a new backend is made, and when the request is finished, even if the connection is already active the backend dies. I wonder if is there any parameter that allow backends to remain beyond a transaction. Creating a new backend every time a transaction is made means forking the code and reallocating sort_memory. Although it is not a high resource usage, on short transactions as OLTPs it is a relevant work time, I think it would be interesting that a predefined number of backends were allowed to remain active beyond the transaction.\nThanks and Regards\n\n\n\n\n\n\n\nHi all\nAs I understand every time there is a request to \npostgres a new backend is made, and when the request is finished, even if the \nconnection is already active the backend dies. I wonder if is there any \nparameter that allow backends to remain beyond a transaction. Creating a new \nbackend every time a transaction is made means forking the code and reallocating \nsort_memory. Although it is not a high resource usage, on short transactions as \nOLTPs it is a relevant work time, I think it would be interesting that a \npredefined number of backends were allowed to remain active beyond the \ntransaction.\nThanks and Regards", "msg_date": "Mon, 29 Jul 2002 11:32:46 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "question on backends" }, { "msg_contents": "Just use persistent connections.\n\nChris\n ----- Original Message ----- \n From: Luis Alberto Amigo Navarro \n To: pgsql-hackers@postgresql.org \n Sent: Monday, July 29, 2002 5:32 PM\n Subject: [HACKERS] question on backends\n\n\n Hi all\n As I understand every time there is a request to postgres a new backend is made, and when the request is finished, even if the connection is already active the backend dies. I wonder if is there any parameter that allow backends to remain beyond a transaction.
Creating a new backend every time a transaction is made means forking the code and reallocating sort_memory. Although it is not a high resource usage, on short transactions as OLTPs it is a relevant work time, I think it would be interesting that a predefined number of backends were allowed to remain active beyond the transaction.\n Thanks and Regards\n\n\n\n\n\n\n\nJust use persistent connections.\n \nChris\n\n----- Original Message ----- \nFrom:\nLuis Alberto \n Amigo Navarro \nTo: pgsql-hackers@postgresql.org\n\nSent: Monday, July 29, 2002 5:32 PM\nSubject: [HACKERS] question on \n backends\n\nHi all\nAs I understand every time there is a request to \n postgres a new backend is made, and when the request is finished, even if the \n connection is already active the backend dies. I wonder if is there any \n parameter that allow backends to remain beyond a transaction. Creating a new \n backend every time a transaction is made means forking the code and \n reallocating sort_memory. Although it is not a high resource usage, on short \n transactions as OLTPs it is a relevant work time, I think it would be \n interesting that a predefined number of backends were allowed to remain active \n beyond the transaction.\nThanks and \nRegards", "msg_date": "Mon, 29 Jul 2002 18:36:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: question on backends" }, { "msg_contents": "How?\n ----- Original Message ----- \n From: Christopher Kings-Lynne \n To: Luis Alberto Amigo Navarro ; pgsql-hackers@postgresql.org \n Sent: Monday, July 29, 2002 12:36 PM\n Subject: Re: [HACKERS] question on backends\n\n\n Just use persistent connections.\n\n Chris\n ----- Original Message ----- \n From: Luis Alberto Amigo Navarro \n To: pgsql-hackers@postgresql.org \n Sent: Monday, July 29, 2002 5:32 PM\n Subject: [HACKERS] question on backends\n\n\n Hi all\n As I understand every time there is a request to postgres a new backend
is made, and when the request is finished, even if the connection is already active the backend dies. I wonder if is there any parameter that allow backends to remain beyond a transaction. Creating a new backend every time a transaction is made means forking the code and reallocating sort_memory. Although it is not a high resource usage, on short transactions as OLTPs it is a relevant work time, I think it would be interesting that a predefined number of backends were allowed to remain active beyond the transaction.\n Thanks and Regards\n\n\n\n\n\n\n\nHow?\n\n----- Original Message ----- \nFrom:\nChristopher Kings-Lynne \nTo: Luis Alberto Amigo Navarro ; pgsql-hackers@postgresql.org\n\nSent: Monday, July 29, 2002 12:36 \nPM\nSubject: Re: [HACKERS] question on \n backends\n\nJust use persistent \nconnections.\n \nChris\n\n----- Original Message ----- \nFrom:\nLuis \n Alberto Amigo Navarro \nTo: pgsql-hackers@postgresql.org\n\nSent: Monday, July 29, 2002 5:32 \n PM\nSubject: [HACKERS] question on \n backends\n\nHi all\nAs I understand every time there is a request \n to postgres a new backend is made, and when the request is finished, even if \n the connection is already active the backend dies. I wonder if is there any \n parameter that allow backends to remain beyond a transaction. Creating a new \n backend every time a transaction is made means forking the code and \n reallocating sort_memory. Although it is not a high resource usage, on short \n transactions as OLTPs it is a relevant work time, I think it would be \n interesting that a predefined number of backends were allowed to remain \n active beyond the transaction.\nThanks and \nRegards", "msg_date": "Mon, 29 Jul 2002 13:00:46 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: question on backends" }, { "msg_contents": "libpq has a function pconnect as opposed to connect that will do it.
PHP and most other interfaces will let you use persistent connections.\n\nChris\n ----- Original Message ----- \n From: Luis Alberto Amigo Navarro \n To: Christopher Kings-Lynne ; pgsql-hackers@postgresql.org \n Sent: Monday, July 29, 2002 7:00 PM\n Subject: Re: [HACKERS] question on backends\n\n\n How?\n ----- Original Message ----- \n From: Christopher Kings-Lynne \n To: Luis Alberto Amigo Navarro ; pgsql-hackers@postgresql.org \n Sent: Monday, July 29, 2002 12:36 PM\n Subject: Re: [HACKERS] question on backends\n\n\n Just use persistent connections.\n\n Chris\n ----- Original Message ----- \n From: Luis Alberto Amigo Navarro \n To: pgsql-hackers@postgresql.org \n Sent: Monday, July 29, 2002 5:32 PM\n Subject: [HACKERS] question on backends\n\n\n Hi all\n As I understand every time there is a request to postgres a new backend is made, and when the request is finished, even if the connection is already active the backend dies. I wonder if is there any parameter that allow backends to remain beyond a transaction. Creating a new backend every time a transaction is made means forking the code and reallocating sort_memory. Although it is not a high resource usage, on short transactions as OLTPs it is a relevant work time, I think it would be interesting that a predefined number of backends were allowed to remain active beyond the transaction.\n Thanks and Regards\n\n\n\n\n\n\n\nlibpq has a function pconnect as opposed to connect \nthat will do it.
PHP and most other interfaces will let you use persistent \nconnections.\n \nChris\n\n----- Original Message ----- \nFrom:\nLuis Alberto \n Amigo Navarro \nTo: Christopher Kings-Lynne ; pgsql-hackers@postgresql.org\n\nSent: Monday, July 29, 2002 7:00 PM\nSubject: Re: [HACKERS] question on \n backends\n\nHow?\n\n----- Original Message ----- \nFrom:\nChristopher Kings-Lynne \nTo: Luis Alberto Amigo Navarro ; pgsql-hackers@postgresql.org\n\nSent: Monday, July 29, 2002 12:36 \n PM\nSubject: Re: [HACKERS] question on \n backends\n\nJust use persistent \n connections.\n \nChris\n\n----- Original Message ----- \nFrom:\nLuis \n Alberto Amigo Navarro \nTo: pgsql-hackers@postgresql.org\n\nSent: Monday, July 29, 2002 5:32 \n PM\nSubject: [HACKERS] question on \n backends\n\nHi all\nAs I understand every time there is a request \n to postgres a new backend is made, and when the request is finished, even \n if the connection is already active the backend dies. I wonder if is there \n any parameter that allow backends to remain beyond a transaction. Creating \n a new backend every time a transaction is made means forking the code and \n reallocating sort_memory. Although it is not a high resource usage, on \n short transactions as OLTPs it is a relevant work time, I think it would \n be interesting that a predefined number of backends were allowed to remain \n active beyond the transaction.\nThanks and \n Regards", "msg_date": "Mon, 29 Jul 2002 23:28:54 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: question on backends" }, { "msg_contents": "On Mon, Jul 29, 2002 at 11:28:54PM +0800, Christopher Kings-Lynne wrote:\n> libpq has a function pconnect as opposed to connect that will do it.\n\nlibpq has neither function, AFAIK.\n\nAs for persistent backends, it's on the TODO list, but I'm not aware\nthat anyone has put any work into implementing it.
At the moment,\na backend connects to a single database for its entire lifecycle -- so\nyou'd either need one pool of persistent backends for each database\nin use, or you'd need to allow a backend to change the database it is\nconnected to.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 29 Jul 2002 12:11:19 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: question on backends" }, { "msg_contents": "if i put debug_level=1 i get for one connect and several inserts on backend\ndie after each insert\n----- Original Message -----\nFrom: \"Hannu Krosing\" <hannu@tm.ee>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nCc: <pgsql-hackers@postgresql.org>\nSent: Monday, July 29, 2002 8:42 PM\nSubject: Re: [HACKERS] question on backends\n\n\n> On Mon, 2002-07-29 at 11:32, Luis Alberto Amigo Navarro wrote:\n> > Hi all\n> > As I understand every time there is a request to postgres a new backend\n> > is made, and when the request is finished, even if the connection is\n> > already active the backend dies.\n>\n> I think you have misunderstood it. A new backend is forked only when a\n> new connection is made, not for every transaction.\n>\n> There may be some frontends that do make a new connection for each http\n> request or such, but most of them allow for persistent connections,\n> either as an option or by default.\n>\n> > I wonder if is there any parameter\n> > that allow backends to remain beyond a transaction.
Creating a new\n> > backend every time a transaction is made means forking the code and\n> > reallocating sort_memory.\n>\n> ------------------\n> Hannu\n>\n>\n\n\n", "msg_date": "Mon, 29 Jul 2002 20:21:27 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: question on backends" }, { "msg_contents": "On Mon, 2002-07-29 at 11:32, Luis Alberto Amigo Navarro wrote:\n> Hi all\n> As I understand every time there is a request to postgres a new backend\n> is made, and when the request is finished, even if the connection is\n> already active the backend dies.\n\nI think you have misunderstood it. A new backend is forked only when a\nnew connection is made, not for every transaction.\n\nThere may be some frontends that do make a new connection for each http\nrequest or such, but most of them allow for persistent connections,\neither as an option or by default.\n\n> I wonder if is there any parameter\n> that allow backends to remain beyond a transaction. 
Creating a new\n> backend every time a transaction is made means forking the code and\n> reallocating sort_memory.\n\n------------------\nHannu\n\n", "msg_date": "29 Jul 2002 20:42:30 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: question on backends" }, { "msg_contents": "libpq\nPQsetdb(........\n----- Original Message -----\nFrom: \"Hannu Krosing\" <hannu@tm.ee>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nCc: <pgsql-hackers@postgresql.org>\nSent: Monday, July 29, 2002 9:40 PM\nSubject: Re: [HACKERS] question on backends\n\n\n> On Mon, 2002-07-29 at 20:21, Luis Alberto Amigo Navarro wrote:\n> > if i put debug_level=1 i get for one connect and several inserts on\nbackend\n> > die after each insert\n>\n> What client do you use ?\n>\n> ----------\n> Hannu\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n\n", "msg_date": "Mon, 29 Jul 2002 20:50:00 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: question on backends" }, { "msg_contents": "On Mon, 2002-07-29 at 20:21, Luis Alberto Amigo Navarro wrote:\n> if i put debug_level=1 i get for one connect and several inserts on backend\n> die after each insert\n\nWhat client do you use ?\n\n----------\nHannu\n\n", "msg_date": "29 Jul 2002 21:40:35 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: question on backends" }, { "msg_contents": "On Mon, 2002-07-29 at 20:50, Luis Alberto Amigo Navarro wrote:\n> libpq\n> PQsetdb(........\n> ----- Original Message -----\n> From: \"Hannu Krosing\" <hannu@tm.ee>\n> To: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Monday, July 29, 2002 9:40 PM\n> Subject: Re: [HACKERS] question on backends\n> \n> \n> > On Mon, 2002-07-29 at 20:21, Luis Alberto Amigo Navarro wrote:\n> > > if i put
debug_level=1 i get for one connect and several inserts on\n> backend\n> > > die after each insert\n>\n\nIt should not happen.\n\nI've run several websites using both php and python (which use libpq to\nconnect to backend), and they have been up for months connected to the\n_same_ backend, doing inserts, updates, deletes and selects.\n\nI guess there is some error in you code that manifests itself as dying\nbackends.\n\nYou can test this by trying to do several inserts in one transaction.\nand see if the data is still there after the backend closes after the\nfirst insert.\n\n--------------\nHannu\n\n", "msg_date": "29 Jul 2002 22:07:55 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: question on backends" }, { "msg_contents": "Ah yes - that was me making an unfortunate exptrapolation without thinking\nit through.\n\nOf course, PHP implements persistent connections for you, etc., etc., not\nthe postgres client library.\n\nChris\n\n> -----Original Message-----\n> From: Neil Conway [mailto:nconway@klamath.dyndns.org]\n> Sent: Tuesday, 30 July 2002 12:11 AM\n> To: Christopher Kings-Lynne\n> Cc: Luis Alberto Amigo Navarro; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] question on backends\n>\n>\n> On Mon, Jul 29, 2002 at 11:28:54PM +0800, Christopher Kings-Lynne wrote:\n> > libpq has a function pconnect as opposed to connect that will do it.\n>\n> libpq has neither function, AFAIK.\n>\n> As for persistent backends, it's on the TODO list, but I'm not aware\n> that anyone has put any work into implementing it.
At the moment,\n> a backend connects to a single database for its entire lifecycle -- so\n> you'd either need one pool of persistent backends for each database\n> in use, or you'd need to allow a backend to change the database it is\n> connected to.\n>\n> Cheers,\n>\n> Neil\n>\n> --\n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n>\n\n", "msg_date": "Tue, 30 Jul 2002 09:28:31 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: question on backends" } ]
[ { "msg_contents": "Given the GPL isn't wanted in the contrib directory, it may be wise to\ntake care of src/interfaces/odbc/license.txt\n\n\n\n", "msg_date": "29 Jul 2002 09:38:52 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "GPL License" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Rod Taylor [mailto:rbt@zort.ca] \n> Sent: 29 July 2002 14:39\n> To: PostgreSQL-development\n> Subject: [HACKERS] GPL License\n> \n> \n> Given the GPL isn't wanted in the contrib directory, it may \n> be wise to take care of src/interfaces/odbc/license.txt\n\nTake care of it how? It can't be just removed as everything under\nsrc/interfaces/odbc *is* LGPL.\n\nRegards, Dave.\n", "msg_date": "Mon, 29 Jul 2002 14:40:35 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: GPL License" }, { "msg_contents": "> > -----Original Message-----\n> > From: Rod Taylor [mailto:rbt@zort.ca] \n> > Sent: 29 July 2002 14:39\n> > To: PostgreSQL-development\n> > Subject: [HACKERS] GPL License\n> > \n> > \n> > Given the GPL isn't wanted in the contrib directory, it may \n> > be wise to take care of src/interfaces/odbc/license.txt\n> \n> Take care of it how? It can't be just removed as everything under\n> src/interfaces/odbc *is* LGPL.\n\nGenerally you have to track down all contributors to the code and ask\nthem 1) to assign the copyright to the group, and 2) if its ok to change\nthe license on their bit.\n\nDoing 1 helps doing 2 later.\n\n\n\n", "msg_date": "29 Jul 2002 09:43:42 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: GPL License" }, { "msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n>> From: Rod Taylor [mailto:rbt@zort.ca] \n>> Given the GPL isn't wanted in the contrib directory, it may \n>> be wise to take care of src/interfaces/odbc/license.txt\n\n> Take care of it how? It can't be just removed as everything under\n> src/interfaces/odbc *is* LGPL.\n\nYou're failing to distinguish GPL from LGPL. 
The consensus as I recall\nit was that we wanted to remove GPL-license stuff from our standard\ndistribution, but LGPL is okay (it's not so fundamentally incompatible\nwith the project's own BSD license).\n\nI would rather see ODBC under BSD, sure, but I can live with LGPL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Jul 2002 10:38:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GPL License " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Rod Taylor [mailto:rbt@zort.ca] \n> Sent: 29 July 2002 14:44\n> To: Dave Page\n> Cc: PostgreSQL-development\n> Subject: RE: [HACKERS] GPL License\n> \n> \n> > > -----Original Message-----\n> > > From: Rod Taylor [mailto:rbt@zort.ca]\n> > > Sent: 29 July 2002 14:39\n> > > To: PostgreSQL-development\n> > > Subject: [HACKERS] GPL License\n> > > \n> > > \n> > > Given the GPL isn't wanted in the contrib directory, it may\n> > > be wise to take care of src/interfaces/odbc/license.txt\n> > \n> > Take care of it how? It can't be just removed as everything under \n> > src/interfaces/odbc *is* LGPL.\n> \n> Generally you have to track down all contributors to the code \n> and ask them 1) to assign the copyright to the group, and 2) \n> if its ok to change the license on their bit.\n> \n> Doing 1 helps doing 2 later.\n\nYeah, I tried to do that with about 10 developers on pgAdmin I when I\nwanted to reuse some of the code on pgAdmin II (which has its own\nlicence). I eventually had to give up.\n\npsqlODBC has been LGPL much longer than it has been in the main CVS.\nOriginally it was written by Christian Czezatke and Dan McGuirk (1996),\nfollowing them, Insight Distribution Systems (1996 - 1998 - Byron\nNikolaidis + others iirc), then numerous PostgreSQL developers. Tracking\ndown all the developers since Christian & Dan would be a nightmare I\nsuspect - especially as we don't (to my knowledge) have CVS logs prior\nto Apr 13 15:01:38 1998.\n\nI would like to see it under BSD licence though...\n\nRegards, Dave.\n", "msg_date": "Mon, 29 Jul 2002 14:58:12 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: GPL License" } ]
[ { "msg_contents": "Hi,\n\nI need small help in outer joins in postgresql. We have three tables created\nusing the following scripts\n\nCREATE TABLE \"yuva_test1\" (\n \"yt1_id\" numeric(16, 0), \n \"yt1_name\" varchar(16) NOT NULL, \n \"yt1_descr\" varchar(32)\n);\n\nCREATE TABLE \"yuva_test2\" (\n \"yt2_id\" numeric(16, 0), \n \"yt2_name\" varchar(16) NOT NULL, \n \"yt2_descr\" varchar(32)\n);\n\nCREATE TABLE \"yuva_test3\" (\n \"yt3_id\" numeric(16, 0), \n \"yt3_name\" varchar(16) NOT NULL, \n \"yt3_descr\" varchar(32)\n);\n\nWhen I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr,\nyt3_name, yt3_descr from yuva_test1, yuva_test2, yuva_test3 where yt1_id =\nyt2_id(+) and yt1_id = yt3_id(+)\", it works fine with Oracle(created same\ntables and data on Oracle database) and gives the results as expected.\n\nBut I don't know what is the equivalent query in postgres... Can some one\nhelp me.\n\nThanks\nYuva\n", "msg_date": "Mon, 29 Jul 2002 13:07:43 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "outer join help..." } ]
[ { "msg_contents": "The problems noticed by Peter E several weeks ago have not yet been\nresolved:\n\n http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=Pine.LNX.4.44.0207082049460.1247-100000%40localhost.localdomain\n\n(namely, that libpqxx and the new SSL code have not been fully\nintegrated into the tree)\n\nIn talking to the author of libpqxx, he *still* hasn't been given a CVS\naccount -- Marc, can this be done?\n\nOnce that has happened, it shouldn't be too difficult to get libpqxx\ninto shape for 7.3 -- I'll commit to doing the work if no one else\nwants to...\n\nRegarding SSL, I'd be inclined to remove it from CVS in a week or two,\nunless the author volunteers to fix some of the problems Peter commented\non, or someone else steps up to the plate to fix it.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 29 Jul 2002 16:30:41 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "(still) unintegrated stuff in CVS" } ]
[ { "msg_contents": "Hi,\n\nI tried yuva_test1 left outer join yuva_test2 and yuva_test1 left outer join\nyuva_test3 in the same query in Oracle. I tried the following query in\npostgres and it worked...\n\nselect yt1_name, yt1_descr, yt2_name, yt2_descr, yt3_name, yt3_descr from\n(yuva_test1 left outer join yuva_test2 on yt1_id = yt2_id) as A left outer\njoin yuva_test3 on yt1_id = yt3_id\n\nI have used the table alias technique and I got the same results as with Oracle.\n\nCould you please tell me if the above query is correct or not, because\nsometimes wrong queries may give correct results with test data and they fail\nwhen we try with live data.\n\nThanks\nYuva\n\n-----Original Message-----\nFrom: Andrew Sullivan [mailto:andrew@libertyrms.info]\nSent: Monday, July 29, 2002 1:27 PM\nTo: Yuva Chandolu\nSubject: Re: [HACKERS] outer join help...\n\n\nOn Mon, Jul 29, 2002 at 01:07:43PM -0700, Yuva Chandolu wrote:\n> Hi,\n> \n> I need small help in outer joins in postgresql. We have three tables\ncreated\n> using the following scripts\n> \n> CREATE TABLE \"yuva_test1\" (\n> \"yt1_id\" numeric(16, 0), \n> \"yt1_name\" varchar(16) NOT NULL, \n> \"yt1_descr\" varchar(32)\n> );\n> \n> CREATE TABLE \"yuva_test2\" (\n> \"yt2_id\" numeric(16, 0), \n> \"yt2_name\" varchar(16) NOT NULL, \n> \"yt2_descr\" varchar(32)\n> );\n> \n> CREATE TABLE \"yuva_test3\" (\n> \"yt3_id\" numeric(16, 0), \n> \"yt3_name\" varchar(16) NOT NULL, \n> \"yt3_descr\" varchar(32)\n> );\n> \n> When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr,\n> yt3_name, yt3_descr from yuva_test1, yuva_test2, yuva_test3 where yt1_id =\n> yt2_id(+) and yt1_id = yt3_id(+)\", it works fine with Oracle(created same\n> tables and data on Oracle database) and gives the results as expected.\n\nselect yt1_name, yt1_descr, yt2_name, yt2_descr, yt3_name, yt3_descr\nfrom yuva_test1 [left? right? 
I don't know the Oracle syntax] outer\njoin yuva_test2 on yt1_id=yt2_id [left|right] outer join yuva_test3\non yt1_id = yt3_id \n\nis what you want, I think.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n", "msg_date": "Mon, 29 Jul 2002 13:35:03 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "Re: outer join help..." }, { "msg_contents": "Looks fine, you may want to rephrase it as:\n\nselect yt1_name, yt1_descr, yt2_name, yt2_descr, yt3_name, yt3_descr \nfrom yuva_test1 left outer join yuva_test2 on yt1_id = yt2_id\n left outer join yuva_test3 on yt1_id = yt3_id\n\nto make it more legible. The alias is overkill in this case since you \ndon't have any duplicate tables.\n\nYuva Chandolu wrote:\n> Hi,\n> \n> I tried yuva_test1 left outer join yuva_test2 and yuva_test1 left outer join\n> yuva_test3 in the same query in Oracle. I tried the following query in\n> postgres and it worked...\n> \n> select yt1_name, yt1_descr, yt2_name, yt2_descr, yt3_name, yt3_descr from\n> (yuva_test1 left outer join yuva_test2 on yt1_id = yt2_id) as A left outer\n> join yuva_test3 on yt1_id = yt3_id\n> \n> I have used table alias technique and I got the same results as with Oracle.\n> \n> Could you please tell me if the above query is correct or not, because some\n> times wrong queries may give correct results with test data and they fail\n> when we try with live data.\n> \n> Thanks\n> Yuva\n> \n> -----Original Message-----\n> From: Andrew Sullivan [mailto:andrew@libertyrms.info]\n> Sent: Monday, July 29, 2002 1:27 PM\n> To: Yuva Chandolu\n> Subject: Re: [HACKERS] outer join help...\n> \n> \n> On Mon, Jul 29, 2002 at 01:07:43PM -0700, Yuva Chandolu wrote:\n> \n>>Hi,\n>>\n>>I need small help in outer joins in postgresql. 
We have three tables\n> \n> created\n> \n>>using the following scripts\n>>\n>>CREATE TABLE \"yuva_test1\" (\n>> \"yt1_id\" numeric(16, 0), \n>> \"yt1_name\" varchar(16) NOT NULL, \n>> \"yt1_descr\" varchar(32)\n>>);\n>>\n>>CREATE TABLE \"yuva_test2\" (\n>> \"yt2_id\" numeric(16, 0), \n>> \"yt2_name\" varchar(16) NOT NULL, \n>> \"yt2_descr\" varchar(32)\n>>);\n>>\n>>CREATE TABLE \"yuva_test3\" (\n>> \"yt3_id\" numeric(16, 0), \n>> \"yt3_name\" varchar(16) NOT NULL, \n>> \"yt3_descr\" varchar(32)\n>>);\n>>\n>>When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr,\n>>yt3_name, yt3_descr from yuva_test1, yuva_test2, yuva_test3 where yt1_id =\n>>yt2_id(+) and yt1_id = yt3_id(+)\", it works fine with Oracle(created same\n>>tables and data on Oracle database) and gives the results as expected.\n> \n> \n> select yt1_name, yt1_descr, yt2_name, yt2_descr, yt3_name, yt3_descr\n> from yuva_test1 [left? right? I don't know the Oracle syntax] outer\n> join yuva_test2 on yt1_id=yt2_id [left|right] outer join yuva_test3\n> on yt1_id = yt3_id \n> \n> is what you want, I think.\n> \n> A\n> \n\n\n", "msg_date": "Mon, 29 Jul 2002 17:31:17 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": false, "msg_subject": "Re: outer join help..." } ]
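The rewrite discussed in this thread — Oracle's `(+)` outer-join markers expressed as SQL-standard chained LEFT OUTER JOINs — can be exercised end to end. The sketch below is illustrative only: it uses Python's stdlib sqlite3 as a convenient SQL engine (with simplified column types and made-up data), not Oracle or PostgreSQL, but the join shape is the same standard SQL the thread arrives at.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE yuva_test1 (yt1_id INTEGER, yt1_name TEXT NOT NULL, yt1_descr TEXT);
CREATE TABLE yuva_test2 (yt2_id INTEGER, yt2_name TEXT NOT NULL, yt2_descr TEXT);
CREATE TABLE yuva_test3 (yt3_id INTEGER, yt3_name TEXT NOT NULL, yt3_descr TEXT);
INSERT INTO yuva_test1 VALUES (1, 'a', 'base row 1'), (2, 'b', 'base row 2');
INSERT INTO yuva_test2 VALUES (1, 'a2', 'match in t2');  -- only id 1 matches
INSERT INTO yuva_test3 VALUES (2, 'b3', 'match in t3');  -- only id 2 matches
""")

# Standard-SQL equivalent of Oracle's "yt1_id = yt2_id(+) and yt1_id = yt3_id(+)":
# both joins preserve every yuva_test1 row, filling non-matches with NULLs.
rows = cur.execute("""
SELECT yt1_name, yt2_name, yt3_name
FROM yuva_test1
LEFT OUTER JOIN yuva_test2 ON yt1_id = yt2_id
LEFT OUTER JOIN yuva_test3 ON yt1_id = yt3_id
ORDER BY yt1_id
""").fetchall()
print(rows)  # [('a', 'a2', None), ('b', None, 'b3')]
```

Every base-table row survives both joins, with NULLs where the optional tables have no match — exactly the behaviour the `(+)` markers requested.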
[ { "msg_contents": "\nHave any organizations run TPC benchmarks against PostgreSQL other than\nthe old and much maligned GreatBridge benchmark? It is unfortunate that\nthere are not any moderately standard benchmarks available regarding\nPostgreSQL. I do not need to say \"PgSQL is better than X\", but it would\nbe nice to at least say \"it has been tested at load level X and works\nfine\". Failing that, all we have is anecdotes and conjecture. :/\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n", "msg_date": "Mon, 29 Jul 2002 13:36:14 -0700", "msg_from": "Paul Ramsey <pramsey@refractions.net>", "msg_from_op": true, "msg_subject": "TPC-* Benchmarks" }, { "msg_contents": "Hi Paul,\n\nYou might want to take a look at the Open Source Database Benchmark\nproject:\n\nhttp://www.sf.net/projects/osdb\n\nThis is an implementation (by Andy Riebs from Compaq) of the AS3AP\ndatabase benchmark, and works with PostgreSQL, MySQL, and one other\ndatabase too from memory. You're probably best off to get the latest\nversion from CVS as there have been a few good enhancements since the\nlatest code snapshot.\n\nIf you need any further help, please let me know. There is a PostgreSQL\nAdvocacy and Marketing site which will come online over time\n(http://advocacy.postgresql.org), and if you have suggestions regarding\nthe publishing of benchmarks, this is probably the place they will be\nimplemented.\n\nHope this helps.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nPaul Ramsey wrote:\n> \n> Have any organizations run TPC benchmarks against PostgreSQL other than\n> the old and much maligned GreatBridge benchmark? It is unfortunate that\n> there are not any moderately standard benchmarks available regarding\n> PostgreSQL. I do not need to say \"PgSQL is better than X\", but it would\n> be nice to at least say \"it has been tested at load level X and works\n> fine\". 
Failing that, all we have is anecdotes and conjecture. :/\n> \n> --\n> __\n> /\n> | Paul Ramsey\n> | Refractions Research\n> | Email: pramsey@refractions.net\n> | Phone: (250) 885-0632\n> \\_\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 30 Jul 2002 07:14:48 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: TPC-* Benchmarks" }, { "msg_contents": "Try here:\n\nhttp://osdb.sourceforge.net/\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Paul Ramsey\n> Sent: Tuesday, 30 July 2002 4:36 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] TPC-* Benchmarks\n> \n> \n> \n> Have any organizations run TPC benchmarks against PostgreSQL other than\n> the old and much maligned GreatBridge benchmark? It is unfortunate that\n> there are not any moderately standard benchmarks available regarding\n> PostgreSQL. I do not need to say \"PgSQL is better than X\", but it would\n> be nice to at least say \"it has been tested at load level X and works\n> fine\". Failing that, all we have is anecdotes and conjecture. :/\n> \n> -- \n> __\n> /\n> | Paul Ramsey\n> | Refractions Research\n> | Email: pramsey@refractions.net\n> | Phone: (250) 885-0632\n> \\_\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Tue, 30 Jul 2002 09:47:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: TPC-* Benchmarks" } ]
[ { "msg_contents": "\nI had occasion (and a perfectly good reason) to install 7.1.3 on \na fresh server [1]. Installation succeeded as normal, data failed\nto load because some relation names go beyond\n31 characters. <smack forehead>, alter NAMEDATALEN,\nrecompile. initdb fails immediately after the message\n\"creating template1 database in...\". There followed much\nwrinkling of brows, gnashing of directories, recompilation afresh,\net cetera, during the course of which I noticed the following output from\nconfigure:\n\n configure: warning:\n*** Without Bison you will not be able to build PostgreSQL from CVS or \n*** change any of the parser definition files. You can obtain Bison from\n*** a GNU mirror site. (If you are using the official distribution of \n*** PostgreSQL then you do not need to worry about this because the Bison\n*** output is pre-generated.) To use a different yacc program (possible, \n*** but not recommended), set the environment variable YACC before running\n*** 'configure'. \n\nThis being because I had omitted to install Bison. As I wasn't building from\nCVS, and mucking around with the parser definition files was not high on\nmy list of priorities, it was only much later I hit on the idea of installing\nBison, then rebuilding with higher NAMEDATALEN. And as if by magic\ninitdb succeeded. (Note: initdb with no Bison but unchanged NAMEDATALEN\nalso succeeded).\n\nFor reference, 7.2.1 exhibits exactly the same behaviour [2]. 
I have poked\naround but have no idea what I'm looking for, so I'm not sure if this\nis intended behaviour.\n\nQuestions:\n- Does src/include/postgres_ext.h count as a parser definition file?\n- If not shouldn't the above warning include something like \"And\n don't even think about changing NAMEDATALEN\"?\n\nApologies if I've missed something obvious.\n\n\nIan Barwick\nbarwick@gmx.net\n\n\n[1] FreeBSD 4.6, installing from source\n[2] Sample output from initdb:\n\nian > /home/ian/devel/postgres/pg721a/bin/initdb -D /tmp/pg721\nThe files belonging to this database system will be owned by user \"ian\".\nThis user must also own the server process.\n\ncreating directory /tmp/pg721... ok\ncreating directory /tmp/pg721/base... ok\ncreating directory /tmp/pg721/global... ok\ncreating directory /tmp/pg721/pg_xlog... ok\ncreating directory /tmp/pg721/pg_clog... ok\ncreating template1 database in /tmp/pg721/base/1... \ninitdb failed.\nRemoving /tmp/pg721.\n\n\n", "msg_date": "Tue, 30 Jul 2002 00:11:47 +0200", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": true, "msg_subject": "No bison and NAMEDATALEN > 31: initdb failure?" }, { "msg_contents": "Ian Barwick <barwick@gmx.net> writes:\n> - Does src/include/postgres_ext.h count as a parser definition file?\n\nNo, it doesn't. Your experience sounds like you may have neglected to\ndo a full rebuild after altering NAMEDATALEN. (By default, we don't\ncompute object-file dependencies, so it's up to you to do \"make clean\"\nafter changing fundamental parameters.) It's unlikely that installing\nBison per se affected anything --- unless possibly you had corrupted\ncopies of gram.c etc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Jul 2002 18:23:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: No bison and NAMEDATALEN > 31: initdb failure? 
" }, { "msg_contents": "On Tuesday 30 July 2002 00:23, Tom Lane wrote:\n> Ian Barwick <barwick@gmx.net> writes:\n> > - Does src/include/postgres_ext.h count as a parser definition file?\n>\n> No, it doesn't. Your experience sounds like you may have neglected to\n> do a full rebuild after altering NAMEDATALEN. (By default, we don't\n> compute object-file dependencies, so it's up to you to do \"make clean\"\n> after changing fundamental parameters.) \n\nExactly what I thought. I actually deleted the source tree and unpacked\nat a different location, several times. It was only after installing\nBison that initdb suddenly worked...\n\n> It's unlikely that installing\n> Bison per se affected anything --- unless possibly you had corrupted\n> copies of gram.c etc.\n\n... tried the same with a fresh source download on a Linux machine\nand got similar results.\n\nClearly something odd is happening, but I assume it's a local problem. As\nit's not the most important issue right now, will see if I can make any sense \n(and report back should I find anything of interest).\n\nThanks,\n\n\nIan Barwick\nbarwick@gmx.net\n\nPS The subject should have read \"(...) NAMEDATALEN > 32 (...)\", i.e. the\ndefault value.\n\n", "msg_date": "Wed, 31 Jul 2002 23:34:40 +0200", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": true, "msg_subject": "Re: No bison and NAMEDATALEN > 31: initdb failure?" } ]
[ { "msg_contents": "The query without alias is working fine. Thanks for the performance hint, we\nwere actually using this query on very big tables and definitely we would\nhave slipped into performance problems with aliases. Now we are safe. Thanks\na lot Marc.\n\n-Yuva\nSr. Java Developer\nwww.ebates.com\n\n\n-----Original Message-----\nFrom: Marc Lavergne [mailto:mlavergne-pub@richlava.com]\nSent: Monday, July 29, 2002 2:31 PM\nTo: Yuva Chandolu\nCc: 'pgsql-hackers@postgresql.org'\nSubject: Re: [HACKERS] outer join help...\n\n\nLooks fine, you may want to rephrase it as:\n\nselect yt1_name, yt1_descr, yt2_name, yt2_descr, yt3_name, yt3_descr \nfrom yuva_test1 left outer join yuva_test2 on yt1_id = yt2_id\n left outer join yuva_test3 on yt1_id = yt3_id\n\nto make it more legible. The alias is overkill in this case since you \ndon't have any duplicate tables.\n\nYuva Chandolu wrote:\n> Hi,\n> \n> I tried yuva_test1 left outer join yuva_test2 and yuva_test1 left outer\njoin\n> yuva_test3 in the same query in Oracle. I tried the following query in\n> postgres and it worked...\n> \n> select yt1_name, yt1_descr, yt2_name, yt2_descr, yt3_name, yt3_descr from\n> (yuva_test1 left outer join yuva_test2 on yt1_id = yt2_id) as A left outer\n> join yuva_test3 on yt1_id = yt3_id\n> \n> I have used table alias technique and I got the same results as with\nOracle.\n> \n> Could you please tell me if the above query is correct or not, because\nsome\n> times wrong queries may give correct results with test data and they fail\n> when we try with live data.\n> \n> Thanks\n> Yuva\n> \n> -----Original Message-----\n> From: Andrew Sullivan [mailto:andrew@libertyrms.info]\n> Sent: Monday, July 29, 2002 1:27 PM\n> To: Yuva Chandolu\n> Subject: Re: [HACKERS] outer join help...\n> \n> \n> On Mon, Jul 29, 2002 at 01:07:43PM -0700, Yuva Chandolu wrote:\n> \n>>Hi,\n>>\n>>I need small help in outer joins in postgresql. 
We have three tables\n> \n> created\n> \n>>using the following scripts\n>>\n>>CREATE TABLE \"yuva_test1\" (\n>> \"yt1_id\" numeric(16, 0), \n>> \"yt1_name\" varchar(16) NOT NULL, \n>> \"yt1_descr\" varchar(32)\n>>);\n>>\n>>CREATE TABLE \"yuva_test2\" (\n>> \"yt2_id\" numeric(16, 0), \n>> \"yt2_name\" varchar(16) NOT NULL, \n>> \"yt2_descr\" varchar(32)\n>>);\n>>\n>>CREATE TABLE \"yuva_test3\" (\n>> \"yt3_id\" numeric(16, 0), \n>> \"yt3_name\" varchar(16) NOT NULL, \n>> \"yt3_descr\" varchar(32)\n>>);\n>>\n>>When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr,\n>>yt3_name, yt3_descr from yuva_test1, yuva_test2, yuva_test3 where yt1_id =\n>>yt2_id(+) and yt1_id = yt3_id(+)\", it works fine with Oracle(created same\n>>tables and data on Oracle database) and gives the results as expected.\n> \n> \n> select yt1_name, yt1_descr, yt2_name, yt2_descr, yt3_name, yt3_descr\n> from yuva_test1 [left? right? I don't know the Oracle syntax] outer\n> join yuva_test2 on yt1_id=yt2_id [left|right] outer join yuva_test3\n> on yt1_id = yt3_id \n> \n> is what you want, I think.\n> \n> A\n> \n\n", "msg_date": "Mon, 29 Jul 2002 15:21:20 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "Re: outer join help..." } ]
[ { "msg_contents": "\nCurt Sampson wrote:\n> I'm still waiting to find out just what advantage table inheritance\n> offers. I've asked a couple of times here, and nobody has even started\n> to come up with anything.\n\n\nTable inheritance offers data model extensibility. New (derived) tables\ncan be added to the system, and will work with existing code that\noperates on the base tables, without having to hack up all the code.\n\nInherited indexes etc. would be nice, but it's the inability to have\nreferential integrity against a base table that picks up child table\nrows that makes the current implementation useless.\n\nI would rather see it fixed than junked, and better yet extended. It\nwould be incredibly useful in real-world projects with complex data\nmodels like OpenACS.\n", "msg_date": "29 Jul 2002 18:27:40 -0600", "msg_from": "Stephen Deasey <stephen@bollocks.net>", "msg_from_op": true, "msg_subject": "Re: Why is MySQL more chosen over PostgreSQL" }, { "msg_contents": "On 29 Jul 2002 18:27:40 MDT, the world broke into rejoicing as\nStephen Deasey <stephen@bollocks.net> said:\n> Curt Sampson wrote:\n>> I'm still waiting to find out just what advantage table inheritance\n>> offers. I've asked a couple of times here, and nobody has even\n>> started to come up with anything.\n\n> Table inheritance offers data model extensibility. New (derived) tables\n> can be added to the system, and will work with existing code that\n> operates on the base tables, without having to hack up all the code.\n\nBut it kind of begs the question of why you're creating the new table in\nthe first place.\n\nThe new table certainly _won't_ work with existing code, at least from\nthe perspective that the existing code doesn't _refer_ to that table.\n\nThe same is not true for views; if you create a new view on a table, the\nexisting code that refers to the table in either \"raw\" form or in other\nviews that already exist will certainly continue to work.\n\n> Inherited indexes etc. 
would be nice, but it's the inability to have\n> referential integrity against a base table that picks up child table\n> rows that makes the current implementation useless.\n\nViews will certainly inherit indices, and continue to maintain\nreferential integrity against the \"base\" table.\n\n> I would rather see it fixed than junked, and better yet extended. It\n> would be incredibly useful in real-world projects with complex data\n> models like OpenACS.\n\nHave they found views to be an unacceptable alternative?\n\nI just don't see that there's _all_ that much about table inheritance\nthat is really fundamentally wonderful.\n\nThe \"incredibly useful\" thing, it seems to me, would be to provide tools\nthat make it less necessary to create additional tables. To be sure,\nviews do that.\n\nI'm not quite sure what INHERITS buys us that we don't get from\n SELECT * into new_table from old_table\n\nINHERITS may automagically draw in some constraints that have been\nspecifically tied to old_table that wouldn't be drawn by \"SELECT * INTO\nNEW_TABLE\"; the latter _will_ inherit anything that is tied to the\ntypes.\n\nI'd _much_ rather have five views on one table than five tables.\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/spreadsheets.html\n\"Everything should be made as simple as possible, but not simpler.\"\n-- Albert Einstein\n", "msg_date": "Fri, 02 Aug 2002 13:39:44 -0400", "msg_from": "cbbrowne@cbbrowne.com", "msg_from_op": false, "msg_subject": "Table inheritance versus views" }, { "msg_contents": "On Fri, 2002-08-02 at 22:39, cbbrowne@cbbrowne.com wrote:\n> On 29 Jul 2002 18:27:40 MDT, the world broke into rejoicing as\n> Stephen Deasey <stephen@bollocks.net> said:\n> > Curt Sampson wrote:\n> >> I'm still waiting to find out just what advantage table inheritance\n> >> offers. 
I've asked a couple of times here, and nobody has even\n> >> started to come up with anything.\n> \n> > Table inheritance offers data model extensibility. New (derived) tables\n> > can be added to the system, and will work with existing code that\n> > operates on the base tables, without having to hack up all the code.\n> \n> But it kind of begs the question of why you're creating the new table in\n> the first place.\n> \n> The new table certainly _won't_ work with existing code, at least from\n> the perspective that the existing code doesn't _refer_ to that table.\n\nThe beauty of OO is that it does not need to:\n\nhannu=# create table animal (name text, legcount int);\nCREATE\nhannu=# insert into animal values('pig',4);\nINSERT 34183 1\nhannu=# select * from animal;\n name | legcount \n------+----------\n pig | 4\n(1 row)\n\nhannu=# create table bird (wingcount int) inherits (animal);\nCREATE\nhannu=# insert into bird values('hen',2,2);\nINSERT 34189 1\nhannu=# select * from animal;\n name | legcount \n------+----------\n pig | 4\n hen | 2\n(2 rows)\n\n------------------\nHannu\n\n", "msg_date": "03 Aug 2002 18:10:44 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Table inheritance versus views" }, { "msg_contents": "\n> On Fri, 2002-08-02 at 22:39, cbbrowne@cbbrowne.com wrote:\n> \n>>On 29 Jul 2002 18:27:40 MDT, the world broke into rejoicing as\n>>Stephen Deasey <stephen@bollocks.net> said:\n>>\n>>>Curt Sampson wrote:\n>>>\n>>>>I'm still waiting to find out just what advantage table inheritance\n>>>>offers. I've asked a couple of times here, and nobody has even\n>>>>started to come up with anything.\n>>>\n>>>Table inheritance offers data model extensibility. 
New (derived) tables\n>>>can be added to the system, and will work with existing code that\n>>>operates on the base tables, without having to hack up all the code.\n>>\n>>But it kind of begs the question of why you're creating the new table in\n>>the first place.\n>>\n>>The new table certainly _won't_ work with existing code, at least from\n>>the perspective that the existing code doesn't _refer_ to that table.\n\nSince OpenACS has been brought up in this thread, I thought I'd join the \nlist for a day or two and offer my perspective as the project manager.\n\n1. Yes, we use views in our quasi-object oriented data model. They're \nautomatically generated when content types are built by the content \nrepository, for instance.\n\n2. Yes, you can model anything you can model with PG's OO extensions \nusing views. If you haven't implemented some way to generate the view \nautomatically then a bit more work is required compared to using PG's OO \nextensions.\n\n3. The view approach requires joins on all the subtype tables. If I \ndeclare type 'foo' then the view that returns all of foo's columns joins \non all the subtype tables, while in the PG OO case all of foo's columns \nare stored in foo meaning I can get them all back with a simple query on \nthe table. The PG OO approach can be considerably more efficient than \nthe view approach, and this is important to some folks, no matter how \nmany appeals to authority are made to various bibles on relational \ntheory written by Date and Darwen.\n\n4. The killer that makes the current implementation unusable for us is \nthe fact that there's no form of indexing that spans all the tables \ninherited from a base type. This means there's no cheap enforcement of \nuniqueness constraints across a set of object types, among other things. \n Being able to inherit indexes and constraints would greatly increase \nthe utility of PG's OO extensions.\n\n5. 
If PG's OO extensions included inheritance of indexes and \nconstraints, there's no doubt we'd use them in the OpenACS project, \nbecause when researching PG we compared datamodels written in this style \nvs. modelling the object relationships manually with automatically \ngenerated views. We found the datamodel written using PG's OO \nextensions not only potentially more efficient, but more readable as well.\n\nAs far as whether or not there's a significant maintenance cost \nassociated with keeping the existing OO stuff in PG, Tom Lane's voice is \nauthoritative while, when it comes to PG internals, Curt Sampson doesn't \nknow squat.\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Sat, 03 Aug 2002 09:47:54 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Table inheritance versus views" }, { "msg_contents": "On 29 Jul 2002, Stephen Deasey wrote:\n\n> Table inheritance offers data model extensibility. New (derived) tables\n> can be added to the system, and will work with existing code that\n> operates on the base tables, without having to hack up all the code.\n\nAnd why does this not work with the standard relational mechanism?\n(Where a \"derived table\" would be just another table with a foreign\nkey pointing back to the base table.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Sun, 4 Aug 2002 15:41:21 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Why is MySQL more chosen over PostgreSQL" }, { "msg_contents": "On 3 Aug 2002, Hannu Krosing wrote:\n\n> hannu=# create table animal (name text, legcount int);\n> CREATE\n> hannu=# insert into animal values('pig',4);\n> INSERT 34183 1\n> hannu=# select * from animal;\n> name | legcount\n> ------+----------\n> pig | 4\n> (1 row)\n>\n> hannu=# create table bird (wingcount int) inherits (animal);\n> CREATE\n> hannu=# insert into bird values('hen',2,2);\n> INSERT 34189 1\n> hannu=# select * from animal;\n> name | legcount\n> ------+----------\n> pig | 4\n> hen | 2\n> (2 rows)\n\nYou can do this just as well with views. In posgres, it's harder\nonly because you're forced to create rules for updating views. But\nit's possible to have the system automatically do the right thing.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Sun, 4 Aug 2002 15:50:08 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Table inheritance versus views" } ]
[ { "msg_contents": "> As for why PostgreSQL is less popular than MySQL, I think it is all\n> momentum from 1996 when MySQL worked and we sometimes crashed. Looking\n> forward, I don't know many people who choose MySQL _if_ they consider\n> both PostgreSQL and MySQL, so the discussions people have over MySQL vs.\n> PostgreSQL are valuable because they get people to consider MySQL\n> alternatives, and once they do, they usually choose PostgreSQL.\n>\n> As for momentum, we still have a smaller userbase than MySQL, but we are\n> increasing our userbase at a fast rate, perhaps faster than MySQL at\n> this point.\n\nI think the fact that the PHP guys _pride_ themselves on having built-in\nMySQL support is another huge reason. They look at it as an example of what\ncan be achieved with integration. The FreeBSD PHP port, as another example,\nhas 'MySQL support' ticked by default. Not quite so much work is put into\nPHP's PostgreSQL support as MySQL's, so it's often buggy (tell me about it).\n\nAlso, the utter lack of knowledge about relational theory and SQL is a\nfactor in both newbies and self-taught developers. For instance, in the\nlast few days I have answered questions like these on PHP Builder:\n\n\"I use SELECT * FROM table WHERE a = 3. How do I get all rows? Can I put a\n= ALL or something?\"\n\n\"Why don't my javascript variables work in my SQL statements?\"\n\n\"I have two tables and a referencing ID, and I keep getting rows in my child\ntable that don't match a row in the parent table, what is a query that I can\nrun regularly to remove these problem rows?\"\n\n...and so on...\n\nWhy would someone asking the above questions use anything other than the\n'default' PHP database?\n\nChris\n\n", "msg_date": "Tue, 30 Jul 2002 09:38:00 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?" } ]
[ { "msg_contents": "Because of a runaway process on postgresql.org, CVS is working\nintermittently, if at all. I have contacted Marc.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 21:42:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "CVS broken" } ]
[ { "msg_contents": "Can anyone fix this?\n\n$ cvs up\ncan't create temporary directory /tmp/cvs-serv40296\nNo space left on device\n--\nTatsuo Ishii\n", "msg_date": "Tue, 30 Jul 2002 10:51:08 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "CVS server problem" }, { "msg_contents": "Seems the CVS server is not working correctly. I just deleted my CVS\ntree and did a fresh checkout of the pgsql module. Everything seemingly\nwent well. After the check out completed, I did:\n\n[gcope@mouse pgsql]$ ./configure --with-tcl --with-java --with-python\n--with-perl \nchecking build system type... i686-pc-linux-gnu\nchecking host system type... i686-pc-linux-gnu\nchecking which template to use... linux\nchecking whether to build with 64-bit integer date/time support... no\nchecking whether to build with recode support... no\nchecking whether NLS is wanted... no\nchecking for default port number... 5432\nchecking for default soft limit on number of connections... 32\nchecking for gcc... gcc\nchecking for C compiler default output... a.out\nchecking whether the C compiler works... yes\nchecking whether we are cross compiling... no\nchecking for suffix of executables... \nchecking for suffix of object files... o\nchecking whether we are using the GNU C compiler... yes\nchecking whether gcc accepts -g... yes\n./configure: ./src/template/linux: No such file or directory\n\nSo, I did, \"./configure\" which yields the same result. So, thinking\nmaybe I just had a poorly timed checkout, I did an update. Doing so,\nlooks like this:\n[gcope@mouse pgsql]$ cvs -z3 update -dP\n? 
config.log\ncvs server: Updating .\ncvs server: Updating ChangeLogs\ncvs server: Updating MIGRATION\ncvs server: Updating config\n.\n.\n.\nsrc/backend/utils/mb/conversion_procs/utf8_and_iso8859_1\ncvs server: Updating\nsrc/backend/utils/mb/conversion_procs/utf8_and_johab\ncvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_sjis\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_sjis: No such file or directory\ncvs server: skipping directory\nsrc/backend/utils/mb/conversion_procs/utf8_and_sjis\ncvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_tcvn\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_tcvn: No such file or directory\ncvs server: skipping directory\nsrc/backend/utils/mb/conversion_procs/utf8_and_tcvn\ncvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_uhc\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_uhc: No such file or directory\ncvs server: skipping directory\nsrc/backend/utils/mb/conversion_procs/utf8_and_uhc\ncvs server: Updating src/backend/utils/misc\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/misc: No such file or\ndirectory\ncvs server: skipping directory src/backend/utils/misc\ncvs server: Updating src/backend/utils/mmgr\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/mmgr: No such file or\ndirectory\ncvs server: skipping directory src/backend/utils/mmgr\ncvs server: Updating src/backend/utils/sort\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/sort: No such file or\ndirectory\ncvs server: skipping directory src/backend/utils/sort\ncvs server: Updating src/backend/utils/time\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/time: No such file or\ndirectory\ncvs server: skipping directory 
src/backend/utils/time\ncvs server: Updating src/bin\ncvs server: cannot open directory /projects/cvsroot/pgsql/src/bin: No\nsuch file or directory\ncvs server: skipping directory src/bin\n\n\nSo, I'm fairly sure something is awry.\n\nGreg\n\n\nOn Mon, 2002-07-29 at 20:51, Tatsuo Ishii wrote:\n> Can anyone fix this?\n> \n> $ cvs up\n> can't create temporary directory /tmp/cvs-serv40296\n> No space left on device\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org", "msg_date": "01 Aug 2002 17:26:58 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: CVS server problem!" }, { "msg_contents": "\nShould be fixed now ... I found a rsync.core file, so it looks like the\nchanges may have been more extensive than rsync could handle ... just ran\nit manually (or, rather, am running as I type this), so by the time you\nreceive, a checkout should grab the right structures ...\n\nLet me know if it works now for you ...\n\n\nOn 1 Aug 2002, Greg Copeland wrote:\n\n> Seems the CVS server is not working correctly. I just deleted my CVS\n> tree and did a fresh checkout of the pgsql module. Everything seemingly\n> went well. After the check out completed, I did:\n>\n> [gcope@mouse pgsql]$ ./configure --with-tcl --with-java --with-python\n> --with-perl\n> checking build system type... i686-pc-linux-gnu\n> checking host system type... i686-pc-linux-gnu\n> checking which template to use... linux\n> checking whether to build with 64-bit integer date/time support... no\n> checking whether to build with recode support... no\n> checking whether NLS is wanted... no\n> checking for default port number... 5432\n> checking for default soft limit on number of connections... 32\n> checking for gcc... gcc\n> checking for C compiler default output... a.out\n> checking whether the C compiler works... 
yes\n> checking whether we are cross compiling... no\n> checking for suffix of executables...\n> checking for suffix of object files... o\n> checking whether we are using the GNU C compiler... yes\n> checking whether gcc accepts -g... yes\n> ./configure: ./src/template/linux: No such file or directory\n>\n> So, I did, \"./configure\" which yields the same result. So, thinking\n> maybe I just had a poorly timed checkout, I did an update. Doing so,\n> looks like this:\n> [gcope@mouse pgsql]$ cvs -z3 update -dP\n> ? config.log\n> cvs server: Updating .\n> cvs server: Updating ChangeLogs\n> cvs server: Updating MIGRATION\n> cvs server: Updating config\n> .\n> .\n> .\n> src/backend/utils/mb/conversion_procs/utf8_and_iso8859_1\n> cvs server: Updating\n> src/backend/utils/mb/conversion_procs/utf8_and_johab\n> cvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_sjis\n> cvs server: cannot open directory\n> /projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_sjis: No such file or directory\n> cvs server: skipping directory\n> src/backend/utils/mb/conversion_procs/utf8_and_sjis\n> cvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_tcvn\n> cvs server: cannot open directory\n> /projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_tcvn: No such file or directory\n> cvs server: skipping directory\n> src/backend/utils/mb/conversion_procs/utf8_and_tcvn\n> cvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_uhc\n> cvs server: cannot open directory\n> /projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_uhc: No such file or directory\n> cvs server: skipping directory\n> src/backend/utils/mb/conversion_procs/utf8_and_uhc\n> cvs server: Updating src/backend/utils/misc\n> cvs server: cannot open directory\n> /projects/cvsroot/pgsql/src/backend/utils/misc: No such file or\n> directory\n> cvs server: skipping directory src/backend/utils/misc\n> cvs server: Updating 
src/backend/utils/mmgr\n> cvs server: cannot open directory\n> /projects/cvsroot/pgsql/src/backend/utils/mmgr: No such file or\n> directory\n> cvs server: skipping directory src/backend/utils/mmgr\n> cvs server: Updating src/backend/utils/sort\n> cvs server: cannot open directory\n> /projects/cvsroot/pgsql/src/backend/utils/sort: No such file or\n> directory\n> cvs server: skipping directory src/backend/utils/sort\n> cvs server: Updating src/backend/utils/time\n> cvs server: cannot open directory\n> /projects/cvsroot/pgsql/src/backend/utils/time: No such file or\n> directory\n> cvs server: skipping directory src/backend/utils/time\n> cvs server: Updating src/bin\n> cvs server: cannot open directory /projects/cvsroot/pgsql/src/bin: No\n> such file or directory\n> cvs server: skipping directory src/bin\n>\n>\n> So, I'm fairly sure something is awry.\n>\n> Greg\n>\n>\n> On Mon, 2002-07-29 at 20:51, Tatsuo Ishii wrote:\n> > Can anyone fix this?\n> >\n> > $ cvs up\n> > can't create temporary directory /tmp/cvs-serv40296\n> > No space left on device\n> > --\n> > Tatsuo Ishii\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n>\n\n", "msg_date": "Thu, 1 Aug 2002 20:31:36 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS server problem!" }, { "msg_contents": "Yes, it's compiling now...thanks.\n\nGreg\n\nOn Thu, 2002-08-01 at 18:31, Marc G. Fournier wrote:\n> \n> Should be fixed now ... I found a rsync.core file, so it looks like the\n> changes may have been more extensive then rsync could handle ... 
just ran\n> it manually (or, rather, am running as I type this), so by the time you\n> receive, a checkout should grab the right structures ...\n> \n> Let me know if it works now for you ...\n> \n> \n> On 1 Aug 2002, Greg Copeland wrote:\n> \n> > Seems the CVS server is not working correctly. I just deleted my CVS\n> > tree and did a fresh checkout of the pgsql module. Everything seemingly\n> > went well. After the check out completed, I did:\n> >\n> > [gcope@mouse pgsql]$ ./configure --with-tcl --with-java --with-python\n> > --with-perl\n> > checking build system type... i686-pc-linux-gnu\n> > checking host system type... i686-pc-linux-gnu\n> > checking which template to use... linux\n> > checking whether to build with 64-bit integer date/time support... no\n> > checking whether to build with recode support... no\n> > checking whether NLS is wanted... no\n> > checking for default port number... 5432\n> > checking for default soft limit on number of connections... 32\n> > checking for gcc... gcc\n> > checking for C compiler default output... a.out\n> > checking whether the C compiler works... yes\n> > checking whether we are cross compiling... no\n> > checking for suffix of executables...\n> > checking for suffix of object files... o\n> > checking whether we are using the GNU C compiler... yes\n> > checking whether gcc accepts -g... yes\n> > ./configure: ./src/template/linux: No such file or directory\n> >\n> > So, I did, \"./configure\" which yields the same result. So, thinking\n> > maybe I just had a poorly timed checkout, I did an update. Doing so,\n> > looks like this:\n> > [gcope@mouse pgsql]$ cvs -z3 update -dP\n> > ? 
config.log\n> > cvs server: Updating .\n> > cvs server: Updating ChangeLogs\n> > cvs server: Updating MIGRATION\n> > cvs server: Updating config\n> > .\n> > .\n> > .\n> > src/backend/utils/mb/conversion_procs/utf8_and_iso8859_1\n> > cvs server: Updating\n> > src/backend/utils/mb/conversion_procs/utf8_and_johab\n> > cvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_sjis\n> > cvs server: cannot open directory\n> > /projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_sjis: No such file or directory\n> > cvs server: skipping directory\n> > src/backend/utils/mb/conversion_procs/utf8_and_sjis\n> > cvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_tcvn\n> > cvs server: cannot open directory\n> > /projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_tcvn: No such file or directory\n> > cvs server: skipping directory\n> > src/backend/utils/mb/conversion_procs/utf8_and_tcvn\n> > cvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_uhc\n> > cvs server: cannot open directory\n> > /projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_uhc: No such file or directory\n> > cvs server: skipping directory\n> > src/backend/utils/mb/conversion_procs/utf8_and_uhc\n> > cvs server: Updating src/backend/utils/misc\n> > cvs server: cannot open directory\n> > /projects/cvsroot/pgsql/src/backend/utils/misc: No such file or\n> > directory\n> > cvs server: skipping directory src/backend/utils/misc\n> > cvs server: Updating src/backend/utils/mmgr\n> > cvs server: cannot open directory\n> > /projects/cvsroot/pgsql/src/backend/utils/mmgr: No such file or\n> > directory\n> > cvs server: skipping directory src/backend/utils/mmgr\n> > cvs server: Updating src/backend/utils/sort\n> > cvs server: cannot open directory\n> > /projects/cvsroot/pgsql/src/backend/utils/sort: No such file or\n> > directory\n> > cvs server: skipping directory src/backend/utils/sort\n> > cvs server: Updating 
src/backend/utils/time\n> > cvs server: cannot open directory\n> > /projects/cvsroot/pgsql/src/backend/utils/time: No such file or\n> > directory\n> > cvs server: skipping directory src/backend/utils/time\n> > cvs server: Updating src/bin\n> > cvs server: cannot open directory /projects/cvsroot/pgsql/src/bin: No\n> > such file or directory\n> > cvs server: skipping directory src/bin\n> >\n> >\n> > So, I'm fairly sure something is awry.\n> >\n> > Greg\n> >\n> >\n> > On Mon, 2002-07-29 at 20:51, Tatsuo Ishii wrote:\n> > > Can anyone fix this?\n> > >\n> > > $ cvs up\n> > > can't create temporary directory /tmp/cvs-serv40296\n> > > No space left on device\n> > > --\n> > > Tatsuo Ishii\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster", "msg_date": "01 Aug 2002 19:43:31 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: CVS server problem!" } ]
[ { "msg_contents": "I've developed patches to be able to specify the location of the WAL\ndirectory, with the default location being where it is now. The patches\ndefine a new environment variable PGXLOG (a la PGDATA) and postmaster,\npostgres, initdb and pg_ctl have been taught to recognize a new command\nline switch \"-X\" a la \"-D\".\n\nComments or suggestions?\n\nI'm intending to head towards finer control of locations of tables and\nindices next by implementing some notion of named storage area, perhaps\nincluding the \"tablespace\" nomenclature though it would not be the same\nthing as in Oracle since it would not be fixed size but more akin to the\n\"secondary locations\" that we support now for entire databases.\n\nThere has been some discussion of this already but I'm not recalling\nthat someone has picked this up yet. Comments or suggestions?\n\n - Thomas\n", "msg_date": "Mon, 29 Jul 2002 18:54:22 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "WAL file location" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I've developed patches to be able to specify the location of the WAL\n> directory, with the default location being where it is now. The patches\n> define a new environment variable PGXLOG (a la PGDATA) and postmaster,\n> postgres, initdb and pg_ctl have been taught to recognize a new command\n> line switch \"-X\" a la \"-D\".\n\nUh ... if I randomly modify PGXLOG and restart the postmaster, what\nhappens? Unless you've added code to *move* pg_xlog, this doesn't seem\nlike a good idea.\n\nMore generally, I do not like depending on postmaster environment\nvariables --- our experience with environment variables for database\nlocations has been uniformly bad, and so ISTM that extending that\nmechanism into pg_xlog is exactly the wrong direction to head.\n\nThe current mechanism for moving pg_xlog around is to create a symlink\nfrom $PGDATA/pg_xlog to someplace else. 
I'd be all in favor of creating\nsome code to help automate moving pg_xlog that way, but I don't think\nintroducing an environment variable will improve matters.\n\n> I'm intending to head towards finer control of locations of tables and\n> indices next by implementing some notion of named storage area, perhaps\n> including the \"tablespace\" nomenclature though it would not be the same\n> thing as in Oracle since it would not be fixed size but more akin to the\n> \"secondary locations\" that we support now for entire databases.\n\nThe existing secondary-location mechanism is horrible. Please do not\nemulate it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 00:08:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "On Mon, 29 Jul 2002, Thomas Lockhart wrote:\n\n> I've developed patches to be able to specify the location of the WAL\n> directory, with the default location being where it is now. The patches\n> define a new environment variable PGXLOG (a la PGDATA) and postmaster,\n> postgres, initdb and pg_ctl have been taught to recognize a new command\n> line switch \"-X\" a la \"-D\".\n\nWhat's the advantage of this over just using a symlink?\n\n> I'm intending to head towards finer control of locations of tables and\n> indices next by implementing some notion of named storage area, perhaps\n> including the \"tablespace\" nomenclature....\n\nThis I would really love to have.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Tue, 30 Jul 2002 13:13:36 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tue, 30 Jul 2002, Tom Lane wrote:\n\n> More generally, I do not like depending on postmaster environment\n> variables --- our experience with environment variables for database\n> locations has been uniformly bad....\n> The existing secondary-location mechanism is horrible. Please do not\n> emulate it...\n\nRight. I get really, really worried about security issues when I see\nsomething like \"just specify an environment variable.\" Who knows what\nthe heck else is in the environment.\n\nI'd really like to see that removed and replaced with a configuration file\nor something similar.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Tue, 30 Jul 2002 13:14:54 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "Tom Lane wrote:\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > I've developed patches to be able to specify the location of the WAL\n> > directory, with the default location being where it is now. The patches\n> > define a new environment variable PGXLOG (a la PGDATA) and postmaster,\n> > postgres, initdb and pg_ctl have been taught to recognize a new command\n> > line switch \"-X\" a la \"-D\".\n> \n> Uh ... if I randomly modify PGXLOG and restart the postmaster, what\n> happens? 
Unless you've added code to *move* pg_xlog, this doesn't seem\n> like a good idea.\n> \n> More generally, I do not like depending on postmaster environment\n> variables --- our experience with environment variables for database\n> locations has been uniformly bad, and so ISTM that extending that\n> mechanism into pg_xlog is exactly the wrong direction to head.\n> \n> The current mechanism for moving pg_xlog around is to create a symlink\n> from $PGDATA/pg_xlog to someplace else. I'd be all in favor of creating\n> some code to help automate moving pg_xlog that way, but I don't think\n> introducing an environment variable will improve matters.\n\n100% agree.\n\n> > I'm intending to head towards finer control of locations of tables and\n> > indices next by implementing some notion of named storage area, perhaps\n> > including the \"tablespace\" nomenclature though it would not be the same\n> > thing as in Oracle since it would not be fixed size but more akin to the\n> > \"secondary locations\" that we support now for entire databases.\n> \n> The existing secondary-location mechanism is horrible. Please do not\n> emulate it...\n\n200% agree. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 00:56:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "> > I've developed patches to be able to specify the location of the WAL\n> > directory, with the default location being where it is now. 
The patches\n> > define a new environment variable PGXLOG (a la PGDATA) and postmaster,\n> > postgres, initdb and pg_ctl have been taught to recognize a new command\n> > line switch \"-X\" a la \"-D\".\n> What's the advantage of this over just using a symlink?\n\nIt is supported by the installation environment, and does not require\nthe explicit three steps of\n\n1) creating a new directory area\n2) moving files to the new area\n3) creating a symlink to point to the new area\n\nThe default behavior for the patch is exactly what happens now, with the\nlocation plopped into $PGDATA/pg_xlog/\n\n> > I'm intending to head towards finer control of locations of tables and\n> > indices next by implementing some notion of named storage area, perhaps\n> > including the \"tablespace\" nomenclature....\n> This I would really love to have.\n\nYup. These are all pieces of overall resource management for PostgreSQL.\n\n - Thomas\n", "msg_date": "Mon, 29 Jul 2002 22:56:49 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "> > I've developed patches to be able to specify the location of the WAL\n> > directory, with the default location being where it is now. The patches\n> > define a new environment variable PGXLOG (a la PGDATA) and postmaster,\n> > postgres, initdb and pg_ctl have been taught to recognize a new command\n> > line switch \"-X\" a la \"-D\".\n> Uh ... if I randomly modify PGXLOG and restart the postmaster, what\n> happens? Unless you've added code to *move* pg_xlog, this doesn't seem\n> like a good idea.\n\nPerhaps you don't remember the current (and future) behavior of\nPostgreSQL. If it does not find the WAL files it declines to start up.\nThe behavior is very similar to that for the data area. If it does not\nexist, the postmaster declines to start.\n\nAs noted above, the default behavior remains the same as now. 
So what is\nthe objection precisely?\n\n> More generally, I do not like depending on postmaster environment\n> variables --- our experience with environment variables for database\n> locations has been uniformly bad, and so ISTM that extending that\n> mechanism into pg_xlog is exactly the wrong direction to head.\n\nUnsupported allegation. My experience with environment variables for\ndatabase locations has been uniformly good (also an unsupported\nallegation, but what the heck). And you will note that, as with PGDATA,\nthe environment variable is not required. So you can easily stay away\nfrom that which you do not like, for whatever reason.\n\nFrom my experience with other DBMSes, storage management was always a\nweak point. You have suggested in the past that hardcoding paths into\nthe database itself is the Right Way, but in my large Ingres production\nsystems that was precisely the weak point in their support; you were\nhamstrung when planning and expanding storage by choices you made way\nback when the system was first installed.\n\nLinking external definitions to internal logical areas enhances\nflexibility, it is not required and does not compromise the system.\nRefusing to allow them at all just limits options with no offsetting\nbenefit.\n\n> The current mechanism for moving pg_xlog around is to create a symlink\n> from $PGDATA/pg_xlog to someplace else. 
I'd be all in favor of creating\n> some code to help automate moving pg_xlog that way, but I don't think\n> introducing an environment variable will improve matters.\n\nYour opinion is noted.\n\n> > I'm intending to head towards finer control of locations of tables and\n> > indices next by implementing some notion of named storage area, perhaps\n> > including the \"tablespace\" nomenclature though it would not be the same\n> > thing as in Oracle since it would not be fixed size but more akin to the\n> > \"secondary locations\" that we support now for entire databases.\n> The existing secondary-location mechanism is horrible. Please do not\n> emulate it...\n\nI've not quite understood why you have a strong opinion on something you\ndon't care about and don't care to contribute to.\n\n - Thomas\n", "msg_date": "Mon, 29 Jul 2002 23:09:28 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Mon, 29 Jul 2002, Thomas Lockhart wrote:\n\n> It is supported by the installation environment, and does not require\n> the explicit three steps of\n>\n> 1) creating a new directory area\n> 2) moving files to the new area\n> 3) creating a symlink to point to the new area\n\nSo basically it gives you the ability to initdb and have your log files\nelsewhere without having to shutdown, move the log, link, and restart.\nIs there anything else it adds?\n\nBTW, you mention in another message that environment variables work\nwell for you. Well, they are a security problem waiting to happen,\nIMHO. Do you have any objections to having a file containing a list\nof the various data directories? Maybe we could put the log directory\nin it, too, and have PGDATA point to that file, so we'd need only one\nenvironment variable? 
(And then we'd have a more obviously accessible\nlist of where everything is, as well.)\n\nI tend to side with you on not putting these paths in the database\nitself; it can make restores rather hairy.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Tue, 30 Jul 2002 20:10:02 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tue, Jul 30, 2002 at 08:10:02PM +0900, Curt Sampson wrote:\n\n> BTW, you mention in another message that environment variables work\n> well for you. Well, they are a security problem waiting to happen,\n> IMHO. Do you have any objections to having a file containing a list\n> of the various data directories? Maybe we could put the log directory\n> in it, too, and have PGDATA point to that file, so we'd need only one\n> environment variable? (And then we'd have a more obviously accessable\n> list of where everything is, as well.)\n\nI guess I'm dumb, but I'm not seeing how these environment variables\nare a big security risk. It's true, however, that putting such\nsettings in the config file or something might be better, if only\nbecause that limits the number of places where various config things\nhappen.\n\nIn any case, it'd be a _very good_ thing to have a tablespace-like\nfacility. Its lack is a real drawback of PostgreSQL for anyone\nlooking to manage a large installation. 
RAID is your friend, of\ncourse, but one can get a real boost to both performance and\nflexibility by adding this sort of feature, and anything that moves\nPostgreSQL closer to such a goal is, in my view, nothing but a good\nthing.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 30 Jul 2002 12:45:50 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Tue, Jul 30, 2002 at 08:10:02PM +0900, Curt Sampson wrote:\n> \n> > BTW, you mention in another message that environment variables work\n> > well for you. Well, they are a security problem waiting to happen,\n> > IMHO. Do you have any objections to having a file containing a list\n> > of the various data directories? Maybe we could put the log directory\n> > in it, too, and have PGDATA point to that file, so we'd need only one\n> > environment variable? (And then we'd have a more obviously accessable\n> > list of where everything is, as well.)\n> \n> I guess I'm dumb, but I'm not seeing how these environment variables\n> are a big security risk. It's true, however, that putting such\n> settings in the config file or something might be better, if only\n> because that limits the number of places where various config things\n> happen.\n> \n> In any case, it'd be a _very good_ thing to have a tablespace-like\n> facility. Its lack is a real drawback of PostgreSQL for anyone\n> looking to manage a large installation. RAID is your friend, of\n> course, but one can get a real boost to both performance and\n> flexibility by adding this sort of feature, and anything that moves\n> PostgreSQL closer to such a goal is, in my view, nothing but a good\n> thing.\n\nWhy not put the WAL location in postgresql.conf. Seems like the logical\nlocation for it. 
You could define tablespaces there in the future too\nif you prefer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 12:49:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tuesday 30 July 2002 07:10 am, Curt Sampson wrote:\n> BTW, you mention in another message that environment variables work\n> well for you. Well, they are a security problem waiting to happen,\n> IMHO. Do you have any objections to having a file containing a list\n> of the various data directories? Maybe we could put the log directory\n> in it, too, and have PGDATA point to that file, so we'd need only one\n> environment variable? (And then we'd have a more obviously accessible\n> list of where everything is, as well.)\n\n$PGDATA/postgresql.conf just needs extending in this direction. There is a \npatch to do most of this already -- just not the WAL stuff. Due to the heat \nit generated the last time, and the fact that we were in beta at the time, \nthe author of that patch left the list.\n\nNow, let me make the statement that the environment in this case is not likely \nto be a security issue any worse than having the stuff in postgresql.conf, as \nany attacker that can poison the postmaster environment can probably poison \npostgresql.conf. Such poisoning isn't an issue here, as postmaster is just \ngoing to gripe about the WAL files being missing, or it's going to create new \nones. Since postmaster doesn't run as root, it can't be used to overwrite \nsystem files, the typical target for environment poisoning.\n\nYou might want to see about reading the archives -- even though I know they \ntend to be broken whenever you want to search them. 
The idea you mention has \nnot only been brought up, but has been thoroughly discussed at length, and a \npatch exists for the majority of the locations in question, just not WAL. I \nhave some of the discussion locally archived, but not the original patch. \nSearch on 'Explicit config patch'. Also see 'Thoughts on the location of \nconfiguration files' and 'Explicit configuration file'. \n\nExplaining what you mean by the potential security implications would be nice. \n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 30 Jul 2002 12:57:55 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> I guess I'm dumb, but I'm not seeing how these environment variables\n> are a big security risk.\n\nThe trouble with relying on environment variables for paths (especially\npaths to places that we might scribble on) is that the postmaster has\nno idea which strings in its environment were actually intended for that\nuse, and which were not.\n\nAs an example, the postmaster very likely has $HOME in its environment.\nThis means that anyone with createdb privilege can try to create a\ndatabase in the postgres user's home directory. It's relatively\nharmless (since what will actually get mkdir'd is some name like\n/home/postgres/base/173918, which likely can't overwrite anything\ninteresting) but it's still not a good idea.\n\n$PWD would be another likely attack point, and possibly one could do\nsomething with $PATH, not to mention any custom environment variables\nthat might happen to exist in the local environment.\n\nIf we add more environment-variable-dependent mechanisms to allow more\ndifferent things to be done, we increase substantially the odds of\ncreating an exploitable security hole.\n\n> In any case, it'd be a _very good_ thing to have a tablespace-like\n> facility.\n\nAbsolutely. 
But let's not drive it off environment variables.\nA config file is far safer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 14:05:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "On Tue, Jul 30, 2002 at 02:05:57PM -0400, Tom Lane wrote:\n> \n> If we add more environment-variable-dependent mechanisms to allow more\n> different things to be done, we increase substantially the odds of\n> creating an exploitable security hole.\n\nOk, true enough, but I'm not sure that a config file or any other\nsuch mechanism is any safer. As Lamar Owen said, anyone who can\npoison the postgres user's environment can likely do evil things to\npostgresql.conf as well. Still, environment variables _are_ a\nnotorious weak point for crackers.\n\nAs I said, I don't much care how it is implemented, but I think\n_that_ it is implemented is important, at least for our (Liberty's)\nuses. If the only way it's going to be done is to accept a potential\nsecurity risk, maybe the answer is to allow the security risk, but\nset by default to off.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 30 Jul 2002 14:19:46 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> On Tue, Jul 30, 2002 at 02:05:57PM -0400, Tom Lane wrote:\n>> If we add more environment-variable-dependent mechanisms to allow more\n>> different things to be done, we increase substantially the odds of\n>> creating an exploitable security hole.\n\n> Ok, true enough, but I'm not sure that a config file or any other\n> such mechanism is any safer. 
As Lamar Owen said, anyone who can\n> poison the postgres user's environment can likely do evil things to\n> postgresql.conf as well.\n\nWho said anything about poisoning the environment? My point was that\nthere will be strings in the environment that were put there perfectly\nlegitimately, but could still serve as an attack vehicle.\n\nThe weakness of the existing database-locations-are-environment-variables\nfeature is really that the attacker gets to choose which environment\nvariable gets used, and so he can use a variable intended to serve\npurpose A for some other purpose B. If A and B are sufficiently\ndifferent then you got trouble --- and since we are talking about a\npurpose B that involves writing on something, there's definitely a risk.\n\nA mechanism based only on a fixed environment variable name doesn't\ncreate the sort of threat I'm contemplating. For example, if the\npostmaster always and only looked at $PGXLOG to find the xlog then\nyou'd not have this type of risk. But Thomas said he was basing the\nfeature on database locations, and in the absence of seeing the code\nI don't know if he's creating a security hole or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 14:34:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "\nIn my logic, we have PGDATA environment variable for the server only so\nthe server can find the /data directory. After that, everything should\nbe in /data. 
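The distinction Tom draws above -- one fixed, well-known variable name versus letting the caller pick which variable gets consulted -- can be sketched like this (illustrative Python, not actual server code; the function names are made up):

```python
import os

def fixed_lookup():
    # Safe style: the server only ever consults one well-known name,
    # so nothing else in its environment can be repurposed.
    return os.environ.get('PGXLOG', '/default/pg_xlog')

def location_lookup(name_from_client):
    # Risky style (a la CREATE DATABASE ... WITH LOCATION = 'FOO'):
    # the client names the variable, so HOME, PWD, PATH, or any other
    # string in the postmaster's environment becomes a candidate path.
    return os.environ.get(name_from_client)
```

With the second style, location_lookup('HOME') hands the caller a writable path that was never meant to be a database location.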
I see no reason to make it an environment variable.\n\nIn fact, a file in /data should be able to track the xlog directory a\nlot better than an environment variable will.\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > On Tue, Jul 30, 2002 at 02:05:57PM -0400, Tom Lane wrote:\n> >> If we add more environment-variable-dependent mechanisms to allow more\n> >> different things to be done, we increase substantially the odds of\n> >> creating an exploitable security hole.\n> \n> > Ok, true enough, but I'm not sure that a config file or any other\n> > such mechanism is any safer. As Lamar Owen said, anyone who can\n> > poison the postgres user's environment can likely do evil things to\n> > postgresql.conf as well.\n> \n> Who said anything about poisoning the environment? My point was that\n> there will be strings in the environment that were put there perfectly\n> legitimately, but could still serve as an attack vehicle.\n> \n> The weakness of the existing database-locations-are-environment-variables\n> feature is really that the attacker gets to choose which environment\n> variable gets used, and so he can use a variable intended to serve\n> purpose A for some other purpose B. If A and B are sufficiently\n> different then you got trouble --- and since we are talking about a\n> purpose B that involves writing on something, there's definitely a risk.\n> \n> A mechanism based only on a fixed environment variable name doesn't\n> create the sort of threat I'm contemplating. For example, if the\n> postmaster always and only looked at $PGXLOG to find the xlog then\n> you'd not have this type of risk. 
But Thomas said he was basing the\n> feature on database locations, and in the absence of seeing the code\n> I don't know if he's creating a security hole or not.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 14:37:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tuesday 30 July 2002 02:34 pm, Tom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > On Tue, Jul 30, 2002 at 02:05:57PM -0400, Tom Lane wrote:\n> >> If we add more environment-variable-dependent mechanisms to allow more\n> >> different things to be done, we increase substantially the odds of\n> >> creating an exploitable security hole.\n\n> > Ok, true enough, but I'm not sure that a config file or any other\n> > such mechanism is any safer. As Lamar Owen said, anyone who can\n> > poison the postgres user's environment can likely do evil things to\n> > postgresql.conf as well.\n\n> Who said anything about poisoning the environment? My point was that\n> there will be strings in the environment that were put there perfectly\n> legitimately, but could still serve as an attack vehicle.\n\nI said it. In any case, using strings that are in the environment requires an \nuntrusted PL, or a C function. Regardless, if such a hole was exploited the \nonly security risk to the system at large is that posed by the postgres user, \nwhich, IMHO, shouldn't even have write access to its own executables. 
And if \nsomeone can exploit the environment in that way, with a server-side function, \nthen that same person will be able to execute arbitrary code as the \npostmaster run user anyway, without any environment variables being accessed. \nUnless environment access is allowed in trusted functions....\n\nAlthough that is one reason the HOME for the RPMset is /var/lib/pgsql, a place \npostgres has free rein anyway.\n\n> The weakness of the existing database-locations-are-environment-variables\n> feature is really that the attacker gets to choose which environment\n> variable gets used, and so he can use a variable intended to serve\n> purpose A for some other purpose B. If A and B are sufficiently\n> different then you got trouble --- and since we are talking about a\n> purpose B that involves writing on something, there's definitely a risk.\n\nTo data already owned by postgres only. I wouldn't mind seeing an example of \nsuch an exploit, for educational purposes.\n\nHaving said all that, I still believe that this is something tailor-made for \npostgresql.conf.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 30 Jul 2002 15:50:25 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Having said all that, I still believe that this is something tailor-made for \n> postgresql.conf.\n\nWell, exactly. Regardless of how serious you may think the security\nargument is, it still remains that a config-file entry seems the ideal\nway to do it. I can't see any good argument in favor of relying on\nenvironment variables instead. They don't bring any new functionality\nto the party; and we have an awful lot of work invested in putting all\nsorts of functionality into the GUC module. 
I think that doing\nconfiguration-like stuff outside the GUC framework is now something that\nwe should resist --- or at least have a darn good reason for it when we\ndo it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 16:17:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Having said all that, I still believe that this is something tailor-made for \n> > postgresql.conf.\n> \n> Well, exactly. Regardless of how serious you may think the security\n> argument is, it still remains that a config-file entry seems the ideal\n> way to do it. I can't see any good argument in favor of relying on\n> environment variables instead. They don't bring any new functionality\n> to the party; and we have an awful lot of work invested in putting all\n> sorts of functionality into the GUC module. I think that doing\n> configuration-like stuff outside the GUC framework is now something that\n> we should resist --- or at least have a darn good reason for it when we\n> do it.\n\nThomas, are you going to extend this to locations for any table/index? \nSeems whatever we do for WAL should fix in that scheme.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 16:20:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Thomas Lockhart writes:\n\n> I've developed patches to be able to specify the location of the WAL\n> directory, with the default location being where it is now. 
The patches\n> define a new environment variable PGXLOG (a la PGDATA) and postmaster,\n> postgres, initdb and pg_ctl have been taught to recognize a new command\n> line switch \"-X\" a la \"-D\".\n\nI'm not in favor of keeping this sort of information in environment\nvariables or in a command-line option. It should be kept in permanent\nstorage in the data area. (In other words, it should be stored in a\nfile.) Consider, tomorrow someone wants to move the commit log or wants\nto keep duplicate copies of either log. We should have some extensible\nstructure in place before we start moving things around.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 30 Jul 2002 23:41:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "> The trouble with relying on environment variables for paths (especially\n> paths to places that we might scribble on) is that the postmaster has\n> no idea which strings in its environment were actually intended for that\n> use, and which were not.\n\nTrue, in the simplest implementation (Peter E. has suggested extensions\nto address this complaint). But not relevant to security under likely\nscenarios. See below.\n\n> As an example, the postmaster very likely has $HOME in its environment.\n> This means that anyone with createdb privilege can try to create a\n> database in the postgres user's home directory. It's relatively\n> harmless (since what will actually get mkdir'd is some name like\n> /home/postgres/base/173918, which likely can't overwrite anything\n> interesting) but it's still not a good idea.\n\nActually, this can not happen! 
You will see that the combination of\ninitlocation, environment variables, and other backend requirements will\nforce this kind of exploit to fail, since PostgreSQL requires a\nstructure of directories (like $PGDATA/data/base) but the environment\nvariable is only allowed to prepend the \"data/base/oid\" part of the\npath.\n\nSo the directory *must* be set up properly beforehand, and must have the\ncorrect structure, greatly reducing if not eliminating possible\nexploits.\n\nThis is *not* possible in any scenario for which the postmaster or a\nserver instance or a client connection has full control over the\nattributes and definitions of data storage areas. Decoupling them\n(requiring two distinct orthogonal mechanisms) is what enhances security\nand data integrity. None of the \"I hate environment variables\"\ndiscussions have addressed this issue.\n\n> $PWD would be another likely attack point, and possibly one could do\n> something with $PATH, not to mention any custom environment variables\n> that might happen to exist in the local environment.\n\nAgain, not likely possible. See above.\n\n> If we add more environment-variable-dependent mechanisms to allow more\n> different things to be done, we increase substantially the odds of\n> creating an exploitable security hole.\n\nNo. See above.\n\n> > In any case, it'd be a _very good_ thing to have a tablespace-like\n> > facility.\n> Absolutely. But let's not drive it off environment variables.\n> A config file is far safer.\n\nDisagree, but in a friendly sort of way ;) I will likely implement both,\nif either. 
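(To spell out the existing mechanism described above, the alternate-location workflow looks roughly like this -- the paths and the variable name are just examples:)

```
# in the postmaster's environment, before the postmaster is started:
export PGDATA2=/mnt/disk2/pgsql
initlocation PGDATA2    # admin builds the required directory structure

-- later, from any client with createdb privilege:
CREATE DATABASE foo WITH LOCATION = 'PGDATA2';
```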
Along the way I will give some specific use cases so we don't\ngo 'round on this topic every time...\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 15:02:35 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> If we add more environment-variable-dependent mechanisms to allow more\n>> different things to be done, we increase substantially the odds of\n>> creating an exploitable security hole.\n\n> No. See above.\n\nYour argument seems to reduce to \"it's not insecure because we have\nthese backup checks in place\". Sure, but why should we use a\nconfiguration-specifying mechanism that even potentially has a security\nrisk, when it offers no real advantage over a mechanism that does not?\n\n> Disagree, but in a friendly sort of way ;) I will likely implement both,\n> if either. Along the way I will give some specific use cases so we don't\n> go 'round on this topic every time...\n\nI'd like to see the use case that justifies environment variables as an\neasier way to set Postgres parameters than a config file. In general\nthey are not easy to use, because it's so easy to start the postmaster\nin the wrong environment. We used to constantly see problems from\npeople who had different environments when they started PG by hand (from\nan interactive shell) vs when it got launched from a boot script.\nWe've reduced those problems by reducing PG's sensitivity to environment\nsettings, and I think we should continue to reduce it. 
Not increase it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 18:21:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "...\n> Thomas, are you going to extend this to locations for any table/index?\n> Seems whatever we do for WAL should fix in that scheme.\n\nYes, the longer-term goal is enabling table/index-specific locations.\nI'm not certain whether WAL can use *exactly* the same mechanism, since\n\n1) the location for WAL is (currently) not particularly related to the\ndirectory structure for other resources such as databases and tables.\n\n2) postmaster may want to access WAL-kinds of information before having\naccess to the global database info.\n\nI'll have a question for -hackers very soon on why I seem to be having\ntrouble adding a column to pg_class (which will end up being an OID for\nthe internally supported view of what \"locations\" are). I'm getting\naccess violations after adding a column which is initialized to zero and\nnever explicitly used in the code...\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 15:50:12 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tue, 30 Jul 2002, Lamar Owen wrote:\n\n> Now, let me make the statement that the environment in this case is\n> not likely to be a security issue any worse than having the stuff\n> in postgresql.conf, as any attacker that can poison the postmaster\n> environment can probably poison postgresql.conf.\n\nUnfortunately, the environment is already \"pre-poisoned.\" Typically the\nenvironment is full of variables that have nothing to do with postgres\nbut which have paths pointing to various places. This is the sort of\nthing that might allow you to exploit an otherwise unexploitable bug in\npostgres.\n\nPostgres not being able to use any of that information would be one layer\nof security. 
You might argue that it's not a big one, but it's often just\ndumb little things like this that give you remote exploits.\n\n> Since postmaster doesn't run as root, it can't be used to overwrite\n> system files, the typical target for environment poisoning.\n\nSo? It can still be used to read some files on the system, which\nmight provide useful information to an attacker. And future additions\nto postgres might change the situation. Say, for example, that someone\nadded the ability to store data on raw devices. Now you have to worry\nthat someone might be able to get postgres to write rubbish to some\nraw devices it has access to if an environment variable has /dev in it.\n\nSimplicity is always a big help to security. Rather than spending time\ndoing a big, complex analysis of just why we think using the environment\nvariables is safe, it's much simpler just not to use them. And if\nwe re-used existing configuration file processing code to get the\ninformation we need, we'd also be removing some code from the system,\nthus removing the potential for bugs in that code.\n\nThe discussion in the archives seems quite positive about the patch,\nexcept for one or two recalcitrant people that disagree with everyone\nelse. And in the very first post I found, Tom Lane said:\n\n This whole thread makes me more and more uncomfortable about the\n fact that the postmaster/backend pay attention to environment\n variables at all. An explicit configuration file would seem a better\n answer.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Wed, 31 Jul 2002 08:21:09 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tue, 30 Jul 2002, Lamar Owen wrote:\n\n> On Tuesday 30 July 2002 02:34 pm, Tom Lane wrote:\n>\n> > Who said anything about poisoning the environment? 
My point was that\n> there will be strings in the environment that were put there perfectly\n> legitimately, but could still serve as an attack vehicle.\n>\n> I said it. In any case, using strings that are in the environment\n> requires an untrusted PL, or a C function.\n\nAh. See, we already have a failure in a security analysis here. This\ncommand:\n\n CREATE DATABASE foo WITH LOCATION = 'BAR'\n\nuses a string that's in the environment.\n\n> Regardless, if such a hole was exploited the only security risk to\n> the system at large is that posed by the postgres user, which, IMHO,\n> shouldn't even have write access to its own executables.\n\nSo what you're saying is that we should make the opportunity for people\nto configure the system in an insecure manner?\n\nConfiguration errors by administrators are probably the number one\ncause of security breaches, you know.\n\n> I wouldn't mind seeing an example of such an exploit, for\n> educational purposes.\n\nI don't have one. But consider a couple of possibilities:\n\n1. The exploit can't exist until someone adds more code to postgres.\nSo maybe it doesn't exist in 7.3, but will appear in 7.4.\n\n2. The exploit is there, but nobody has figured it out yet. The recent\nBIND resolver library vulnerability has been in that code for at least\nten years, but it was only last month that someone figured out that it\nwas even there, much less how to exploit it.\n\nI've been securing systems since I started an ISP in 1995, and so I've\nseen a lot of security vulnerabilities come and go, and I've got a bit\nof a feel for what kinds of things are typically exploited. And this one\njust screams, \"potential security vulnerability!\" to me.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Wed, 31 Jul 2002 08:46:04 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "...\n> I've been securing systems since I started an ISP in 1995, and so I've\n> seen a lot of security vulnerabilities come and go, and I've got a bit\n> of a feel for what kinds of things are typically exploited. And this one\n> just screams, \"potential security vulnerability!\" to me.\n\nSure, there is screaming all over the place :)\n\nBut the zeroth-order issue is not security. It is storage management for\nlarge databases. Any scheme we have for accomplishing that must hold up\nto scrutiny, but we can not refuse to proceed just because there are\n\"lions tigers and bears\" out there.\n\nI know you are being thoughtful about the issues, but the most secure\ndatabase is one which is not running. The most robust database is the\none with no data. We're pushing past that into large data management\nissues and have to find a way through the forest. Security will be one\naspect by which we measure the solution. Scalability and robustness are\nother issues, and there are still others. We'll talk about them all\nbefore we are done ;)\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 17:00:07 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tue, 30 Jul 2002, Thomas Lockhart wrote:\n\n> But the zeroth-order issue is not security. It is storage management for\n> large databases. 
Any scheme we have for accomplishing that must hold up\n> to scrutiny, but we can not refuse to proceed just because there are\n> \"lions tigers and bears\" out there.\n\nWell, I'm not sure how using a config file rather than environment\nvariables is stopping us from proceeding....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Wed, 31 Jul 2002 09:09:28 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tuesday 30 July 2002 07:46 pm, Curt Sampson wrote:\n> On Tue, 30 Jul 2002, Lamar Owen wrote:\n> > I said it. In any case, using strings that are in the environment\n> > requires an untrusted PL, or a C function.\n\n> Ah. See, we already have a failure in a security analysis here. This\n> command:\n\n> CREATE DATABASE foo WITH LOCATION = 'BAR'\n\n> uses a string that's in the environment.\n\nAnd requires you to be a database superuser anyway. You got something better? \n:-) If you're the database superuser, you can do anything you want inside \nthe database. Your analysis here is faulty.\n\n> So what you're saying is that we should make the opportunity for people\n> to configure the system in an insecure manner?\n\nNo, what I'm saying is that there is no such thing as absolute security -- and \ntime is better spent where there is a measurable result. If a security hole \nrequires root to exploit, then it's not a hole. Show me a case where an \nenvvar can be exploited by an unprivileged database user without accessing a \nuser written C function or some other function in an untrusted PL.\n\n> Configuration errors by administrators are probably the number one\n> cause of security breaches, you know.\n\nNo. Failure to keep up with security updates is the number one cause of \nsecurity breaches. I guess you could call that a configuration problem of \nsorts. 
Been there; done that. Experienced one hack in -- caught it in the \nact. But I _have_ been there, and I have had to clean up other people's \nconfiguration errors.\n\n> I've been securing systems since I started an ISP in 1995, and so I've\n> seen a lot of security vulnerabilities come and go, and I've got a bit\n> of a feel for what kinds of things are typically exploited. And this one\n> just screams, \"potential security vulnerability!\" to me.\n\nBut, just like any other vulnerability, an admin must ask the question 'is a \nsuccessful exploit a problem?' Again, if an exploit requires root to \nactivate, then it's not a problem in reality. If I have to be the database \nsuperuser to activate an envvar exploit in postgresql, then it's not a \nvulnerability, as I have more powerful tools at my disposal as superuser. \nThings such as DROP DATABASE.\n\nNow if a normal user can easily exploit it remotely (like the two in a row for \nOpenBSD in the past month), then it's an issue. A big issue.\n\nYou just have to keep perspective. And I'm not going to put myself as any \nauthority on the subject, but I do have a couple of years in the trenches, \nhaving admined systems for over 15 years. I've been at it long enough to \nrealize that I am most certainly fallible.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 30 Jul 2002 23:18:16 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n>> Ah. See, we already have a failure in a security analysis here. 
This\n>> command:\n>> CREATE DATABASE foo WITH LOCATION = 'BAR'\n>> uses a string that's in the environment.\n\n> And requires you to be a database superuser anyway.\n\nCREATE DATABASE does not require superuser privs, only createdb\nwhich is not usually considered particular dangerous.\n\nWhether you think that there is a potentially-exploitable security hole\nhere is not really the issue. The point is that two different arguments\nhave been advanced against using environment variables for configuration\n(if you weren't counting, (1) possible security issues now or in the\nfuture and (2) lack of consistency between manual and boot-script\nstartup), while zero (as in 0, nil, nada) arguments have been advanced\nin favor of using environment variables instead of configuration files.\nI do not see why we are debating the negative when there is absolutely\nno case on the positive side.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 23:51:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "On Tuesday 30 July 2002 11:51 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> >> CREATE DATABASE foo WITH LOCATION = 'BAR'\n> > And requires you to be a database superuser anyway.\n\n> CREATE DATABASE does not require superuser privs, only createdb\n> which is not usually considered particular dangerous.\n\nPardon my misspeak, as there are those two components to the privs. My error. \nTypically normal users aren't given create database privileges -- at least on \nmy systems.\n\nAnd, again, I'm completely for the idea of this being in postgresql.conf. But \nI'm not convinced that the security angle is a valid reason. 
The consistency \nreason is enough alone to warrant it being that way.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 31 Jul 2002 00:01:08 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Lamar Owen wrote:\n> On Tuesday 30 July 2002 11:51 pm, Tom Lane wrote:\n> > Lamar Owen <lamar.owen@wgcr.org> writes:\n> > >> CREATE DATABASE foo WITH LOCATION = 'BAR'\n> > > And requires you to be a database superuser anyway.\n> \n> > CREATE DATABASE does not require superuser privs, only createdb\n> > which is not usually considered particular dangerous.\n> \n> Pardon my misspeak, as there are those two components to the privs. My error. \n> Typically normal users aren't given create database privileges -- at least on \n> my systems.\n> \n> And, again, I'm completely for the idea of this being in postgresql.conf. But \n> I'm not convinced that the security angle is a valid reason. The consistency \n> reason is enough alone to warrant it being that way.\n\nAgreed. Consistency argues for the postgresql.conf solution, not\nsecurity. Also, I would like to see initlocation removed as soon as we\nget a 100% functional replacement. We have fielded too many questions\nabout how to set it up.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 Jul 2002 00:17:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tue, 30 Jul 2002, Lamar Owen wrote:\n\n> On Tuesday 30 July 2002 07:46 pm, Curt Sampson wrote:\n>\n> > Ah. See, we already have a failure in a security analysis here. 
This\n> > command:\n>\n> > CREATE DATABASE foo WITH LOCATION = 'BAR'\n>\n> > uses a string that's in the environment.\n>\n> And requires you to be a database superuser anyway.\n\nYup. So once again, we're getting in to the loop \"well, if you do\nthis, this other layer of security protects from some other thing\nand blah blah blah.\"\n\nGiven the choice between doing something simple that eliminates one\npossible avenue of security holes, or doing an extensive, error-prone\nanalysis, to try to prove that that avenue doesn't have any holes and is\nnot likely to have any in the future, which is going to be more secure?\n\n> Show me a case where an envvar can be exploited by an\n> unprivileged database user without accessing a user written C function\n> or some other function in an untrusted PL.\n\nWell, if this is your approach to security, we're just going to\nhave to stop arguing here. The correct approach to security is not,\n\"leave this line of attack open, if we can't show how it could\nfail\" but \"close off that line of attack even if we can't show how\nit would fail.\" If you don't agree with that, you're in disagreement\nwith me and every Internet security expert out there.\n\n> No. Failure to keep up with security updates is the number one cause of\n> security breaches.\n\nBzzt!\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Wed, 31 Jul 2002 13:36:44 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Wed, 31 Jul 2002, Lamar Owen wrote:\n\n> On Tuesday 30 July 2002 11:51 pm, Tom Lane wrote:\n> > Lamar Owen <lamar.owen@wgcr.org> writes:\n> > >> CREATE DATABASE foo WITH LOCATION = 'BAR'\n> > > And requires you to be a database superuser anyway.\n>\n> > CREATE DATABASE does not require superuser privs, only createdb\n> > which is not usually considered particular dangerous.\n>\n> Pardon my misspeak, as there are those two components to the privs. My error.\n> Typically normal users aren't given create database privileges -- at\n> least on my systems.\n>\n> ...But I'm not convinced that the security angle is a\n> valid reason. The consistency reason is enough alone to warrant it\n> being that way.\n\nWe've already had three incorrect security analysis of this in the\nspace of a couple of hours, from people are reasonably familiar\nwith postgres and (presumably) use it all the time, and you think\nthis is not a security problem?!\n\nAnyway, I'll shut up now.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Wed, 31 Jul 2002 13:42:25 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "> Whether you think that there is a potentially-exploitable security hole\n> here is not really the issue. 
The point is that two different arguments\n> have been advanced against using environment variables for configuration\n> (if you weren't counting, (1) possible security issues now or in the\n> future and (2) lack of consistency between manual and boot-script\n> startup), while zero (as in 0, nil, nada) arguments have been advanced\n> in favor of using environment variables instead of configuration files.\n\nI have been counting, in both English and Spanish. Other folks can count\ntoo, and no point in just being pissy about it. You haven't been\nlistening ;)\n\nI've discussed these issues in the past, and we get stuck in the same\nplace. You don't like environment variables and have advanced two\nhypothetical issues with no specific plausible case to back it up. I\nhave pointed out the utility and desirability otherwise. Frankly, I'm\nnot sure why you are pushing so hard to make sure that we accomplish\nnothing in this area, while minimizing the joys of working out the\nissues. In any case, the main work is in the internal mechanisms, not in\nthe exterior varnish.\n\n From my experience as a designer, developer, and operator of large data\nhandling systems *without* adequate decoupling of disk topology from\ninternal resource definitions (Ingres just didn't do it right), I'll\npoint out that it is an issue. A big issue. With real-life examples to\nback it up. If the PostgreSQL solution continues to be an issue, we can\ncontinue to discuss *productive* alternatives. But there is nothing in\nthe work ahead which paints us into a corner.\n\nAs you may already know (read the docs to freshen up if not) environment\nvariables are not required to be used for the current implementation of\nlocations. It is supported, and I recommend their use. But absolute\npaths can be used also; I implemented both strategies to accommodate the\ndifference in opinion on the pros and cons of each approach. Nothing has\nto be different in the upcoming work. 
The behavior of initlocation has\nbeen absolutely no burden on -hackers for the nearly *5 years* that it\nhas been available, and that is the best evidence that we're just\ntalking through hats. Let's get on with it, or at least get back to\nbeing civil.\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 22:48:39 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> ... The behavior of initlocation has\n> been absolutely no burden on -hackers for the nearly *5 years* that it\n> has been available, and that is the best evidence that we're just\n> talking through hats. Let's get on with it, or at least get back to\n> being civil.\n\nI do apologize if you felt I was being uncivil; that wasn't my\nintention. Nor do I want to overstate the importance of the issue;\nas you say, this is just a small user-interface detail, not the meat\nof the feature.\n\nBut ... my recollection is that we've had a *huge* number of complaints\nabout the initlocation behavior, at least by comparison to the number\nof people using the feature. No one can understand how it works,\nlet alone how to configure it so that it works reliably. I really\nfail to understand why you want to drive this new feature off environment\nvariables. You say you've \"pointed out the utility and desirability\"\nof doing it that way, but I sure missed it; would you explain again?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Jul 2002 02:00:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "...\n> Agreed. Consistency argues for the postgresql.conf solution, not\n> security. Also, I would like to see initlocation removed as soon as we\n> get a 100% functional replacement. We have fielded too many questions\n> about how to set it up.\n\nHmm. 
I'm not sure the best way to look, but I was able to find three or\nfour questions since 1999 on the mailing lists (used \"initlocation\nproblem help\"; most other choices got lots of false hits).\n\nAnd they do seem to sometimes involve the inability to type commands or\nto define environment variables. I'll avoid the snide remarks about DBAs\nwho don't know an envar from a hole in the wall (oops...). This day of\nsarcasm is getting sort of fun. Can't wait to take a shower and move on\nto a more polite tomorrow though ;)\n\nbtw, a \"100% functional replacement\" is escaping definition so far. The\n\"no envar\" camp has not thought through the issues yet, though the\nissues can be found in the threads. Better to decide what the\nrequirements are before throwing out the solution.\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 23:13:38 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Tue, 30 Jul 2002, Thomas Lockhart wrote:\n\n> The \"no envar\" camp has not thought through the issues yet, though the\n> issues can be found in the threads. Better to decide what the\n> requirements are before throwing out the solution.\n\nOk, so what issues has the \"no envvar\" camp not yet dealt with? What's\nmissing in that patch posted here a while back to specify your data\nfiles in the configuration file? (Presumably we'd just add the log file\nto that in a similar way.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Wed, 31 Jul 2002 15:27:41 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "...\n> But ... my recollection is that we've had a *huge* number of complaints\n> about the initlocation behavior, at least by comparison to the number\n> of people using the feature. 
No one can understand how it works,\n> let alone how to configure it so that it works reliably. I really\n> fail to understand why you want to drive this new feature off environment\n> variables. You say you've \"pointed out the utility and desirability\"\n> of doing it that way, but I sure missed it; would you explain again?\n\nTomorrow, when I've got my sarcasm back in the cellar ;)\n\nAnd what the heck are you doing up this late???!!!\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 23:41:41 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Wed, Jul 31, 2002 at 02:00:52AM -0400, Tom Lane wrote:\n> let alone how to configure it so that it works reliably. I really\n> fail to understand why you want to drive this new feature off environment\n> variables. You say you've \"pointed out the utility and desirability\"\n> of doing it that way, but I sure missed it; would you explain again?\n\nWithout wishing to argue for one direction or another, do I have this\ndescription of the options right:\n\na.\tThe system uses no environment variables at all; some other\nmethod is used to determine where the config file is (maybe compiled\ninto the code);\nb.\tThe system might use only one environment variable, which\nsets the data dir;\nc.\tThe system (in some cases optionally) uses several\nenvironment variables to set options. Some of these may be set in\nthe config file as well.\n\nIf I understand it, nobody is really arguing for (a).\n\nI think the argument for (b), from a security point of view, is that\nit is simpler: fewer variables offer fewer points of attack. Also,\nfrom the point of view of support, (b) is simpler, because with only\none possible environment variable issue, there will be fewer\ntroubles. (I have my doubts about the latter, but never mind that\nnow.)\n\nI think the argument for (c) is that it is maximally flexible. 
\nAllowing a DBA to manage things in whatever way s/he is comfortable\nallows for competent administration. If it is a potential foot gun,\nwell, there are already plenty of those.\n\nIs this a fair account?\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 31 Jul 2002 08:41:45 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "...\n> Is this a fair account?\n\nYes. You may note that we have not explored the implementation details\non any of these, so the attributes of each are not cast in stone (except\nfor the purposes of argument of course ;)\n\n - Thomas\n", "msg_date": "Wed, 31 Jul 2002 07:01:49 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> a.\tThe system uses no environment variables at all; some other\n> method is used to determine where the config file is (maybe compiled\n> into the code);\n\n> If I understand it, nobody is really arguing for (a).\n\nI am. I see absolutely no advantage in depending on environment\nvariables rather than a config file. Here's another point beyond the\nones I've made already: config files are self-documenting if we set them\nup in the style used by postgresql.conf (ie, comments showing all the\nallowed settings) --- self-documenting with respect to both what you\nmight do, and what you actually have done in the running system.\nEnvironment variables are not; do you know exactly which strings in your\nenvironment affect Postgres, or what other settings you might have made\nbut didn't? Where would you go to find out? (This is partly a failure\nof documentation, no doubt, but the point about a config file is that it\noffers an extremely obvious place to find out.) 
Also, how could you\nfind out the actual configuration of a running server ... especially\nif you are admining it remotely? We have SHOW for GUC variables, and\nnothing at all for environment variables.\n\nBottom line: we have an extremely nice configuration engine in place\nalready. I really fail to understand why we want to ignore it and\nemulate inferior pre-GUC approaches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Jul 2002 10:23:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "On Wed, Jul 31, 2002 at 10:23:07AM -0400, Tom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > a.\tThe system uses no environment variables at all; some other\n> > method is used to determine where the config file is (maybe compiled\n> > into the code);\n> \n> > If I understand it, nobody is really arguing for (a).\n> \n> I am. I see absolutely no advantage in depending on environment\n\nOk, how then would one set the location of the config file? Though I\nmentioned it, I don't really thing that compiled-in is an option: I\ndon't want to have to have four versions of the binary to just to run\nfour postmasters on four ports. Maybe a --with-config-file option to\nstart the postmaster?\n\nAnd I presume this is all for the server only, right? Nobody is\ntalking about getting rid of (for instance) $PGPORT for clients,\nright? (I'm sorry if I seem obtuse, or if this is really none of my\nbusiness, since I'm not offering to fix this up, since I can't. 
But\nI'm very keen to make sure that administration of large postgres\ninstallations doesn't become terribly difficult.)\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 31 Jul 2002 10:50:03 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> Ok, how then would one set the location of the config file?\n\nThe config file itself has to be found the same way we do it now,\nobviously: either a command line argument or the environment variable\n$PGDATA. But that's a red herring. This thread is not about where you\nfind the config file, it's about locations for other files such as WAL\nlogs and tablespaces.\n\n> And I presume this is all for the server only, right? Nobody is\n> talking about getting rid of (for instance) $PGPORT for clients,\n> right?\n\nI wasn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Jul 2002 11:09:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location " }, { "msg_contents": "On Wed, Jul 31, 2002 at 11:09:58AM -0400, Tom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > Ok, how then would one set the location of the config file?\n> \n> The config file itself has to be found the same way we do it now,\n> obviously: either a command line argument or the environment variable\n> $PGDATA.\n\nThat was my option (b): one environment variable to locate the file,\nand nothing else. I can think of ways that might be a little\nawkward, but not intolerably so. 
Thanks for the clarification.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 31 Jul 2002 11:30:37 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Bruce Momjian writes:\n\n> Thomas, are you going to extend this to locations for any table/index?\n> Seems whatever we do for WAL should fix in that scheme.\n\nThe trick is that it might not. For relations you simply need a system\ntable mapping location names to file system locations, and you can add and\nremove those mappings at will. Moving an object between locations can be\naccomplished by locking the object down, moving the files, updating the\nsystem catalogs, and unlocking.\n\nBut the location of the WAL logs, and the commit logs, if anyone is\nthinking in that direction, needs to be known to initdb. And if you want\nto move them later you'd need to halt the entire system or implement some\nsmarts for transition. So I'm not sure if these things fit under one\numbrella. It would probably be nice if they did.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 31 Jul 2002 19:16:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Wed, 31 Jul 2002, Andrew Sullivan wrote:\n\n> Ok, how then would one set the location of the config file?\n\nOption on the command line. Works for lots of different servers out\nthere already (BIND, apache, etc.).\n\nWhether we also want to emulate them using a compiled-in default if the\ncommand line option is not specified, I don't know. I would tend to\nprefer not, because that might change from system to system, and also\nif someone leaves the \"default\" config file around but isn't using it,\nyou can accidently start up postgres with the wrong config. 
But I won't\nargue that point heavily.\n\nI hate environment variables for servers because the environment\nchanges, is hard to detect on some systems (ps always shows you the\ncommand line unless you muck with it), etc.\n\n> And I presume this is all for the server only, right? Nobody is\n> talking about getting rid of (for instance) $PGPORT for clients,\n> right?\n\nI'm certainly not wanting to get rid of it on the client. I won't go\ninto the reasons unless anybody really cares....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 1 Aug 2002 12:10:55 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Wed, 31 Jul 2002, Peter Eisentraut wrote:\n\n> But the location of the WAL logs, and the commit logs, if anyone is\n> thinking in that direction, needs to be known to initdb. And if you want\n> to move them later you'd need to halt the entire system....\n\nI don't see this as a big problem. Right now you also have to halt the\nentire system to move them. And there are other configuration file\noptions which also cannot be changed after the system has started.\n\nI don't see a big need for being able to move the log files around\nwhen the system is running. If there is such a need, let's make it\na separate feature from being able to specify the location in the\nlogfile, and implement it separately so we don't slow down the\noriginal feature we're all looking for.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Thu, 1 Aug 2002 12:13:08 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Curt Sampson wrote:\n> On Wed, 31 Jul 2002, Andrew Sullivan wrote:\n> \n> > Ok, how then would one set the location of the config file?\n> \n> Option on the command line. Works for lots of different servers out\n> there already (BIND, apache, etc.).\n> \n> Whether we also want to emulate them using a compiled-in default if the\n> command line option is not specified, I don't know. I would tend to\n> prefer not, because that might change from system to system, and also\n> if someone leaves the \"default\" config file around but isn't using it,\n> you can accidently start up postgres with the wrong config. But I won't\n> argue that point heavily.\n> \n> I hate environment variables for servers because the environment\n> changes, is hard to detect on some systems (ps always shows you the\n> command line unless you muck with it), etc.\n> \n> > And I presume this is all for the server only, right? Nobody is\n> > talking about getting rid of (for instance) $PGPORT for clients,\n> > right?\n> \n> I'm certainly not wanting to get rid of it on the client. I won't go\n> into the reasons unless anybody really cares....\n\nI am wondering why we even want to specify the WAL location anywhere\nexcept as a flag to initdb. If you specify a location at initdb time,\nit creates the /xlog directory, then symlinks it into /data.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 Jul 2002 23:20:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Wed, Jul 31, 2002 at 11:20:35PM -0400, Bruce Momjian wrote:\n> \n> I am wondering why we even want to specify the WAL location anywhere\n> except as a flag to initdb. If you specify a location at initdb time,\n> it creates the /xlog directory, then symlinks it into /data.\n\nI thought the whole point of it was to make it easy to move WAL. \nWhich is certainly a Good Thing.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 1 Aug 2002 11:35:38 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Wed, 2002-07-31 at 22:20, Bruce Momjian wrote:\n> I am wondering why we even want to specify the WAL location anywhere\n> except as a flag to initdb. If you specify a location at initdb time,\n> it creates the /xlog directory, then symlinks it into /data.\n> \n\nDoes this have any negative implications for Win32 ports?\n\nGreg", "msg_date": "02 Aug 2002 09:51:27 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "> > I am wondering why we even want to specify the WAL location anywhere\n> > except as a flag to initdb. If you specify a location at initdb time,\n> > it creates the /xlog directory, then symlinks it into /data.\n> Does this have any negative implications for Win32 ports?\n\nSure. the symlinks thing was just a suggestion. Everything else is\nportable for sure... 
Or is there some other area you are concerned\nabout?\n\n - Thomas\n", "msg_date": "Fri, 02 Aug 2002 11:46:15 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Fri, 2 Aug 2002, Thomas Lockhart wrote:\n\n> > > I am wondering why we even want to specify the WAL location anywhere\n> > > except as a flag to initdb. If you specify a location at initdb time,\n> > > it creates the /xlog directory, then symlinks it into /data.\n> > Does this have any negative implications for Win32 ports?\n> \n> Sure. the symlinks thing was just a suggestion. Everything else is\n> portable for sure... Or is there some other area you are concerned\n> about?\n\nNTFS does support symlinks. It's just not very well known, but the gnu \nutilities for windows can let you create soft links. \n\nScott Marlowe\n\n", "msg_date": "Fri, 2 Aug 2002 13:07:14 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Fri, 2002-08-02 at 13:46, Thomas Lockhart wrote:\n> > > I am wondering why we even want to specify the WAL location anywhere\n> > > except as a flag to initdb. If you specify a location at initdb time,\n> > > it creates the /xlog directory, then symlinks it into /data.\n> > Does this have any negative implications for Win32 ports?\n> \n> Sure. the symlinks thing was just a suggestion. Everything else is\n> portable for sure... Or is there some other area you are concerned\n> about?\n\nWell, as another poster pointed out, Cygwin does support soft links but\nI was also under the impression that lots of Win32 related development\nwas underway. 
I wasn't sure if those plans called for the use of Cygwin\nor not.\n\nI was just trying to highlight a possible cause for concern...as I\nhonestly don't know how it relates to the current Win32 efforts.\n\nGreg", "msg_date": "02 Aug 2002 15:07:04 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Thomas Lockhart wrote:\n> > > I am wondering why we even want to specify the WAL location anywhere\n> > > except as a flag to initdb. If you specify a location at initdb time,\n> > > it creates the /xlog directory, then symlinks it into /data.\n> > Does this have any negative implications for Win32 ports?\n> \n> Sure. the symlinks thing was just a suggestion. Everything else is\n> portable for sure... Or is there some other area you are concerned\n> about?\n\nI was just wondering why we would deal with environment variables or\npostgresql.conf settings. Just make it an initdb flag, create it in the\ndesired location with a symlink in /data and then we don't have to do\nany more work for WAL locations unless people want to move it around\nafter then initdb'ed, in which case they have to do it manually.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Aug 2002 21:12:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "...\n> I was just wondering why we would deal with environment variables or\n> postgresql.conf settings. 
Just make it an initdb flag, create it in the\n> desired location with a symlink in /data and then we don't have to do\n> any more work for WAL locations unless people want to move it around\n> after then initdb'ed, in which case they have to do it manually.\n\nWell, I have the same reaction to symlinks as some others might have to\nenvironment variables ;) Symlinks are inherently evil for determining\nfundamental properties of our database, and inherently evil for\ndetermining locations of files within our database.\n\nThey don't scale, they are not portable, and it is difficult for\napplications (like the Postgres backend) to know that they are dealing\nwith a symlink or a real file.\n\n - Thomas\n", "msg_date": "Fri, 02 Aug 2002 19:28:34 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: WAL file location" }, { "msg_contents": "Thomas Lockhart wrote:\n> ...\n> > I was just wondering why we would deal with environment variables or\n> > postgresql.conf settings. Just make it an initdb flag, create it in the\n> > desired location with a symlink in /data and then we don't have to do\n> > any more work for WAL locations unless people want to move it around\n> > after then initdb'ed, in which case they have to do it manually.\n> \n> Well, I have the same reaction to symlinks as some others might have to\n> environment variables ;) Symlinks are inherently evil for determining\n> fundamental properties of our database, and inherently evil for\n> determining locations of files within our database.\n> \n> They don't scale, they are not portable, and it is difficult for\n> applications (like the Postgres backend) to know that they are dealing\n> with a symlink or a real file.\n\nOK, I understand now, though I personally like symlinks. 
At least I\nunderstand your point of view.\n\n From my perspective, I think they are portable, they do scale unless you\nare talking about tons of symlinks, which we aren't, and I don't think\nPostgreSQL has to care whether they are symlinks or not.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Aug 2002 22:36:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL file location" }, { "msg_contents": "On Fri, 2 Aug 2002, Thomas Lockhart wrote:\n\n> [Symlinks] don't scale,\n\nGiven that we have only one directory for the log file, this would not\nappear to be a problem.\n\n> they are not portable,\n\nThat's certainly a problem if we intend to run on systems without them.\n\n> and it is difficult for\n> applications (like the Postgres backend) to know that they are dealing\n> with a simlink or a real file.\n\nEr...that's the whole point of symlinks.\n\nNot that I really care either way about the whole issue, so long\nas we do *something*.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Sat, 3 Aug 2002 20:42:46 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL file location" } ]
[ { "msg_contents": "Hi guys,\n\nWhen I try to do a cvs up I get this:\ncan't create temporary directory /var/tmp/cvs-serv39998\nNo space left on device \n\nAnd on the 15min SGML docs site there's this:\n\nChanges in this build:\ncan't create temporary directory /var/tmp/cvs-serv39998\nNo space left on device \n\n\nChris\n\n", "msg_date": "Tue, 30 Jul 2002 09:56:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "CVS server munted" }, { "msg_contents": "\nam looking at it now ... trying to figure out where we're using >30gig of\nspace right now :(\n\nOn Tue, 30 Jul 2002, Christopher Kings-Lynne wrote:\n\n> Hi guys,\n>\n> When I try to do a cvs up I get this:\n> can't create temporary directory /var/tmp/cvs-serv39998\n> No space left on device\n>\n> And on the 15min SGML docs site there's this:\n>\n> Changes in this build:\n> can't create temporary directory /var/tmp/cvs-serv39998\n> No space left on device\n>\n>\n> Chris\n>\n>\n", "msg_date": "Mon, 29 Jul 2002 23:54:48 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS server munted" } ]
[ { "msg_contents": "The current ODBC drivers on the website don't appear to work with recent\ndevelopment.\n\nI'm able to login, but immediately after it throws the error 'Blank'\nwith a large negative number above (-2^16 or so).\n\n\nI don't have Visual C, so would it be possible for someone to build a\nnew (working) driver and send it over -- possibly put it onto the odbc\nwebsite as 'beta'?\n\n\nAttempted to use it with PGAdmin II and MS Access. Both threw the same\nerror.\n\n\n\n", "msg_date": "29 Jul 2002 23:05:17 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "ODBC Drivers Broken" } ]
[ { "msg_contents": "\nd'oh, always helps go rotate those damn logs ... auto-rotation now in\nplace, shouldn't happen again for a good long time (had >6gig taken up in\nvery very big log files *sigh*) ...\n\n\n", "msg_date": "Tue, 30 Jul 2002 00:19:38 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "disk space problem ..." } ]
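[Editor's aside: for anyone hitting the same symptom, an automatic rotation stanza is what keeps log files from eating the disk. The paths and retention below are invented for illustration — the actual hub.org setup is not described in the mail — but the directives are standard logrotate syntax.]

```
/var/log/cvsd/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```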
[ { "msg_contents": "In my original statement_timeout code, if a query string had multiple\nstatements, I would time the statements individually. I have modified\nit so it now times the entire string collectively.\n\nDo people realize that if you pass a single string to the backend, it\nmakes the string into a single transaction?\n\n\t\"UPDATE tab SET x=1;UPDATE tab SET y=2;\"\n\nis treated as one transaction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 01:09:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "statement_timeout" } ]
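[Editor's aside on Bruce's point, since it surprises people: statements sent to the backend in one query string succeed or fail together. A hypothetical session — not from the original mail, with an invented table and a deliberate error — makes the behavior visible:]

```sql
-- Hypothetical illustration of the single-query-string behavior above.
-- Sent to the backend as ONE string (e.g. via psql -c "..."):
UPDATE tab SET x = 1; UPDATE tab SET y = 1/0;
-- The division by zero aborts the second statement, and because both
-- statements ran inside one implicit transaction, the first UPDATE is
-- rolled back as well.  Sent as two separate query strings, the first
-- UPDATE would have committed on its own.
```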
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\ttgl@postgresql.org\t02/07/30 01:24:56\n\nModified files:\n\tdoc/src/sgml : catalogs.sgml xindex.sgml \n\tsrc/tutorial : complex.source \n\nLog message:\n\tRewrite xindex.sgml for CREATE OPERATOR CLASS. catalogs.sgml finally\n\tcontains descriptions of every single system table. Update 'complex'\n\ttutorial example too.\n\n", "msg_date": "Tue, 30 Jul 2002 01:24:56 -0400 (EDT)", "msg_from": "tgl@postgresql.org (Tom Lane)", "msg_from_op": true, "msg_subject": "pgsql/ oc/src/sgml/catalogs.sgml oc/src/sgml/x ..." }, { "msg_contents": "What's with this manual page?\n\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/gist.html\n\nSeems like it's almost accidental - like a copied and pasted email. It\ndoesn't look like it should be there?\n\nChris\n\n", "msg_date": "Tue, 30 Jul 2002 14:06:27 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Weird manual page" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What's with this manual page?\n> http://candle.pha.pa.us/main/writings/pgsql/sgml/gist.html\n\nGiST is, um, *very* poorly documented. Feel free to submit doc patches.\n(I've just committed a few tidbits in xindex.sgml, but much more work\nis needed.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 02:20:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird manual page " }, { "msg_contents": "> > What's with this manual page?\n> > http://candle.pha.pa.us/main/writings/pgsql/sgml/gist.html\n> \n> GiST is, um, *very* poorly documented. 
Feel free to submit doc patches.\n> (I've just committed a few tidbits in xindex.sgml, but much more work\n> is needed.)\n\nWeeeeeell...first I'd have to learn GiST...\n\nChris\n\n", "msg_date": "Tue, 30 Jul 2002 14:28:49 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Weird manual page " }, { "msg_contents": "On Tue, 30 Jul 2002, Tom Lane wrote:\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > What's with this manual page?\n> > http://candle.pha.pa.us/main/writings/pgsql/sgml/gist.html\n>\n> GiST is, um, *very* poorly documented. Feel free to submit doc patches.\n> (I've just committed a few tidbits in xindex.sgml, but much more work\n> is needed.)\n>\n\nWe have some docs\n\nhttp://www.sai.msu.su/~megera/postgres/gist/\n\nthere is introduction\nhttp://www.sai.msu.su/~megera/postgres/gist/doc/intro.shtml\n\nand description of current GiST interface\nhttp://www.sai.msu.su/~megera/postgres/gist/doc/gist-inteface-r.shtml\nit's written in Russian.\nI could try to translate if needed.\n\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 30 Jul 2002 10:06:03 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Weird manual page " }, { "msg_contents": "> We have some docs...\n> it's written in Russian.\n> I could try to translate if 
needed.\n\nThat would be great! Perhaps someone would help to go through and edit\nit after you do a first cut, so you don't need to spend time working on\nexact phrasing but rather on the content itself. I admire your command\nof English (and Russian of course) but in my limited experience with\nlanguages it seems that much time can be spent trying to get it \"just\nright\"...\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 06:43:13 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Weird manual page" }, { "msg_contents": "\nI wonder if we should just point to Oleg's URL from our docs. That way,\nas he adds stuff, people can go there to get it.\n\n---------------------------------------------------------------------------\n\nOleg Bartunov wrote:\n> On Tue, 30 Jul 2002, Tom Lane wrote:\n> \n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > What's with this manual page?\n> > > http://candle.pha.pa.us/main/writings/pgsql/sgml/gist.html\n> >\n> > GiST is, um, *very* poorly documented. 
Feel free to submit doc patches.\n> > (I've just committed a few tidbits in xindex.sgml, but much more work\n> > is needed.)\n> >\n> \n> We have some docs\n> \n> http://www.sai.msu.su/~megera/postgres/gist/\n> \n> there is introduction\n> http://www.sai.msu.su/~megera/postgres/gist/doc/intro.shtml\n> \n> and description of current GiST interface\n> http://www.sai.msu.su/~megera/postgres/gist/doc/gist-inteface-r.shtml\n> it's written in Russian.\n> I could try to translate if needed.\n> \n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Jul 2002 11:25:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird manual page" }, { "msg_contents": "On Tue, 30 Jul 2002, Bruce Momjian wrote:\n\n>\n> I wonder if we should just point to Oleg's URL from our docs. That way,\n> as he adds stuff, people can go there to get it.\n\nI think better to have basic GiST documentation shipped and a point\nto our page. 
We have complete GiST API documented in Russian and\na little intro in English, so most work is already done. We certainly\nneed to write info with real-life examples, but for the moment we\nhave no time. There is a bunch of GiST based contrib modules at least.\nI'll try to translate GiST API this week and would be happy if somebody\nintegrate it with Postgres documentation.\n\n>\n> ---------------------------------------------------------------------------\n>\n> Oleg Bartunov wrote:\n> > On Tue, 30 Jul 2002, Tom Lane wrote:\n> >\n> > > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > > What's with this manual page?\n> > > > http://candle.pha.pa.us/main/writings/pgsql/sgml/gist.html\n> > >\n> > > GiST is, um, *very* poorly documented. Feel free to submit doc patches.\n> > > (I've just committed a few tidbits in xindex.sgml, but much more work\n> > > is needed.)\n> > >\n> >\n> > We have some docs\n> >\n> > http://www.sai.msu.su/~megera/postgres/gist/\n> >\n> > there is introduction\n> > http://www.sai.msu.su/~megera/postgres/gist/doc/intro.shtml\n> >\n> > and description of current GiST interface\n> > http://www.sai.msu.su/~megera/postgres/gist/doc/gist-inteface-r.shtml\n> > it's written in Russian.\n> > I could try to translate if needed.\n> >\n> > > \t\t\tregards, tom lane\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> > 
---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 30 Jul 2002 18:36:45 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Weird manual page" }, { "msg_contents": "> I wonder if we should just point to Oleg's URL from our docs. That way,\n> as he adds stuff, people can go there to get it.\n\nOleg has offered to help integrate this into the main documentation set,\nand he certainly understands how to work with those for patching etc.\nThere is no downside to having this as a fundamental part of the docs\nafaict.\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 08:40:02 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Weird manual page" } ]
[ { "msg_contents": "We've discussed at least a couple of times before that it would be nice\nto be able to create stand-alone composite types. Tom mentioned that\nideally this would be done as part of a refactoring of system tables so\nthat attributes belonged to pg_type, instead of belonging to pg_class.\nBut it wasn't clear that this approach was worth the effort,\nparticularly due to backwards compatability breakage.\n\nRecently Tom mentioned another alternative (see:\nhttp://archives.postgresql.org/pgsql-hackers/2002-07/msg00788.php for\nmore). The basic idea was to \"create a new 'dummy' relkind for a\npg_class entry that isn't a real relation, but merely a front for a\ncomposite type in pg_type.\"\n\nBased on Tom's suggestion, I propose the following:\n\n1. Define a new pg_class relkind as 'c' for composite. Currently relkind\n can be: 'S' sequence, 'i' index, 'r' relation, 's' special, 't'\n toast, and 'v' view.\n\n2. Borrow the needed parts from CREATE and DROP VIEW to implement a new\n form of the CREATE TYPE command, with syntax something like:\n\n CREATE TYPE typename AS ( column_name data_type [, ... ] )\n\n This would add a pg_class entry of relkind 'c', and add a new\n pg_type entry of typtype 'c', with typrelid pointing to the\n pg_class entry. Essentially, this new stand-alone composite type\n looks a lot like a view without any rules.\n\n3. Modify CREATE FUNCTION to allow the implicit creation of a dependent\n composite type, e.g.:\n\n CREATE [ OR REPLACE ] FUNCTION name ( [ argtype [, ...] ] )\n RETURNS [setof] { data_type | (column_name data_type [, ... ]) }...\n\n This would automatically create a stand-alone composite type with a\n system generated name for the function. 
Thanks to the new dependency\n tracking, the implicit composite type would go away if the function\n is dropped.\n\n\nComments, objections, or thoughts?\n\nThanks,\n\nJoe\n\n\n", "msg_date": "Mon, 29 Jul 2002 22:50:41 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Proposal: stand-alone composite types" }, { "msg_contents": "> 3. Modify CREATE FUNCTION to allow the implicit creation of a dependent\n> composite type, e.g.:\n>\n> CREATE [ OR REPLACE ] FUNCTION name ( [ argtype [, ...] ] )\n> RETURNS [setof] { data_type | (column_name data_type [, ... ]) }...\n>\n> This would automatically create a stand-alone composite type with a\n> system generated name for the function. Thanks to the new dependency\n> tracking, the implicit composite type would go away if the function\n> is dropped.\n>\n>\n> Comments, objections, or thoughts?\n\nI'm just licking my lips in anticipation of converting my entire website to\nSRFs ;)\n\nChris\n\n", "msg_date": "Tue, 30 Jul 2002 14:00:46 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "Joe Conway wrote:\n> 2. Borrow the needed parts from CREATE and DROP VIEW to implement a new\n> form of the CREATE TYPE command, with syntax something like:\n> \n> CREATE TYPE typename AS ( column_name data_type [, ... ] )\n> \n> This would add a pg_class entry of relkind 'c', and add a new\n> pg_type entry of typtype 'c', with typrelid pointing to the\n> pg_class entry. Essentially, this new stand-alone composite type\n> looks a lot like a view without any rules.\n\nI'm working on stand-alone composite types and running into a \nreduce/reduce problem with the grammar. Any suggestions would be \nappreciated. Here's what I have:\n\nDefineStmt:\n CREATE AGGREGATE func_name definition\n {\n . . 
.\n }\n | CREATE TYPE_P qualified_name AS\n '(' TableFuncElementList ')'\n {\n CompositeTypeStmt *n = makeNode(CompositeTypeStmt);\n n->typevar = $3;\n n->coldeflist = $6;\n $$ = (Node *)n;\n }\n\nThanks,\n\nJoe\n\n", "msg_date": "Wed, 07 Aug 2002 09:55:47 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I'm working on stand-alone composite types and running into a \n> reduce/reduce problem with the grammer. Any suggestions would be \n> appreciated. Here's what I have:\n\n> DefineStmt:\n> | CREATE TYPE_P qualified_name AS\n> '(' TableFuncElementList ')'\n\nUse any_name, not qualified_name. As-is, you're forcing the parser\nto try to distinguish the two forms of CREATE TYPE before it can\nsee anything that would tell the difference.\n\nIn hindsight I think it was a mistake to set up RangeVar/qualified_name\nas a distinct reduction path from non-relation qualified names ---\nwe'd have been better off using a single production and a uniform\nintermediate representation. But I haven't had time to investigate\nsimplifying the grammar that way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Aug 2002 23:25:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types " }, { "msg_contents": "Joe Conway wrote:\n > Based on Tom's suggestion, I propose the following:\n >\n > 1. Define a new pg_class relkind as 'c' for composite. Currently relkind\n > can be: 'S' sequence, 'i' index, 'r' relation, 's' special, 't'\n > toast, and 'v' view.\n >\n > 2. Borrow the needed parts from CREATE and DROP VIEW to implement a new\n > form of the CREATE TYPE command, with syntax something like:\n >\n > CREATE TYPE typename AS ( column_name data_type [, ... 
] )\n >\n > This would add a pg_class entry of relkind 'c', and add a new\n > pg_type entry of typtype 'c', with typrelid pointing to the\n > pg_class entry. Essentially, this new stand-alone composite type\n > looks a lot like a view without any rules.\n\nItems 1 and 2 from the proposal above are implemented in the attached\npatch. I was able to get rid of the reduce/reduce conflict with Tom's \nhelp (thanks Tom!).\n\ntest=# CREATE TYPE compfoo AS (f1 int, f2 int);\nCREATE TYPE\ntest=# CREATE FUNCTION getfoo() RETURNS SETOF compfoo AS 'SELECT fooid,\nfoosubid FROM foo' LANGUAGE SQL;\nCREATE FUNCTION\ntest=# SELECT * FROM getfoo();\n f1 | f2\n----+----\n 1 | 1\n 1 | 2\n(2 rows)\n\ntest=# DROP TYPE compfoo;\nNOTICE: function getfoo() depends on type compfoo\nERROR: Cannot drop relation compfoo because other objects depend on it\n Use DROP ... CASCADE to drop the dependent objects too\ntest=# DROP TYPE compfoo CASCADE;\nNOTICE: Drop cascades to function getfoo()\nDROP TYPE\n\nPasses all regression tests (well, I'm on RedHat 7.3, so there are three \n\"expected\" failures). Doc and regression adjustments included. If there \nare no objections, please apply.\n\nThanks,\n\nJoe", "msg_date": "Wed, 07 Aug 2002 21:26:03 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "stand-alone composite types patch (was [HACKERS] Proposal:\n stand-alone\n\tcomposite types)" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Items 1 and 2 from the proposal above are implemented in the attached\n> patch.\n\nGood first cut, but...\n\nI don't like the way you did the dependencies. In a normal relation,\nthe type row is internally dependent on the class row. This causes the\ntype to automatically go away if the relation is dropped, and it also\nprevents an attempt to manually drop the type directly (without dropping\nthe relation). 
For a composite type, of course we want exactly the\nopposite behavior: the class row should internally depend on the type,\nnot vice versa.\n\nIf you did it that way then you'd not need that ugly kluge in\nRemoveType. What you'd need instead is some smarts (a kluge!?) in\nsetting up the dependency. Currently that dependency is made in\nTypeCreate which doesn't know what sort of relation it's creating\na type for. Probably the best answer is to pull that particular\ndependency out of TypeCreate, and make it (in the proper direction)\nin AddNewRelationType.\n\nAlso, I'm not following the point of the separation between\nDefineCompositeType and DefineCompositeTypeRelation; nor do I see a need\nfor a CommandCounterIncrement call in there.\n\nYou have missed a number of places where this new relkind ought to\nbe special-cased the same way RELKIND_VIEW is --- for example\nCheckAttributeNames and AddNewAttributeTuples, since a composite type\npresumably shouldn't have system columns associated. I'd counsel\nlooking at all references to RELKIND_VIEW to see which places also need\nto check for RELKIND_COMPOSITE_TYPE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Aug 2002 00:55:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:\n\tstand-alone composite types)" }, { "msg_contents": "Tom Lane wrote:\n> If you did it that way then you'd not need that ugly kluge in\n> RemoveType. What you'd need instead is some smarts (a kluge!?) in\n> setting up the dependency. Currently that dependency is made in\n> TypeCreate which doesn't know what sort of relation it's creating\n> a type for. 
Probably the best answer is to pull that particular\n> dependency out of TypeCreate, and make it (in the proper direction)\n> in AddNewRelationType.\n\nOK -- I'll take a look.\n\n> Also, I'm not following the point of the separation between\n> DefineCompositeType and DefineCompositeTypeRelation; nor do I see a need\n> for a CommandCounterIncrement call in there.\n\nWell the next thing I was going to work on after this was an implicitly \ncreated composite type when creating a function. I thought maybe the \nCommandCounterIncrement would be needed so that the type could be \ncreated and then immediately used by the function. In any case, I'll \ncombine the two functions.\n\n\n> You have missed a number of places where this new relkind ought to\n> be special-cased the same way RELKIND_VIEW is --- for example\n> CheckAttributeNames and AddNewAttributeTuples, since a composite type\n> presumably shouldn't have system columns associated. I'd counsel\n> looking at all references to RELKIND_VIEW to see which places also need\n> to check for RELKIND_COMPOSITE_TYPE.\n\nYeah, after I fired off the post it occurred to me that I had neglected \nto do that. I was just going through that exercise now.\n\nThanks for the (quick!) review. Round two will be probably sometime \ntomorrow.\n\nJoe\n\n\n", "msg_date": "Wed, 07 Aug 2002 22:06:47 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> Also, I'm not following the point of the separation between\n>> DefineCompositeType and DefineCompositeTypeRelation; nor do I see a need\n>> for a CommandCounterIncrement call in there.\n\n> Well the next thing I was going to work on after this was an implicitly \n> created composite type when creating a function. 
I thought maybe the \n> CommandCounterIncrement would be needed so that the type could be \n> created and then immediately used by the function.\n\nHm. Maybe --- it would depend on whether the function-creating code\nactually tried to look at the type definition, as opposed to just using\nits OID. (You'll probably want DefineCompositeType to return the type\nOID, btw.) In any case, I'd be inclined to put the CCI call in the\ncaller not the callee, so it's only done when actually needed. It's\nsurely not needed for a standalone CREATE TYPE command.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Aug 2002 01:21:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:\n\tstand-alone composite types)" }, { "msg_contents": "Tom Lane wrote:\n> You have missed a number of places where this new relkind ought to\n> be special-cased the same way RELKIND_VIEW is --- for example\n> CheckAttributeNames and AddNewAttributeTuples, since a composite type\n> presumably shouldn't have system columns associated. I'd counsel\n> looking at all references to RELKIND_VIEW to see which places also need\n> to check for RELKIND_COMPOSITE_TYPE.\n\nOne of the places I missed was pg_dump.c. 
In working on pg_dump support, \nI ran across a problem:\n\ntest=# CREATE TYPE \"MyInt42\" (internallength = 4,input = int4in,output = \nint4out,alignment = int4,default = 42,passedbyvalue);\nCREATE TYPE\ntest=# CREATE TYPE \"compfoo\" AS (f1 \"MyInt42\", f2 integer);\nCREATE TYPE\ntest=# drop type compfoo;\nDROP TYPE\ntest=# CREATE TYPE \"compfoo\" AS (f1 \"MyInt42\", f2 \"integer\");\nERROR: Type \"integer\" does not exist\ntest=# create table tbl_0 (f1 \"integer\");\nERROR: Type \"integer\" does not exist\ntest=# create table tbl_0 (f1 \"MyInt42\");\nCREATE TABLE\ntest=# drop table tbl_0 ;\nDROP TABLE\ntest=# create table tbl_0 (f1 integer);\nCREATE TABLE\n\nShouldn't \"integer\" be recognized as a valid type?\n\nThanks,\n\nJoe\n\n", "msg_date": "Thu, 08 Aug 2002 14:11:38 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Tom Lane wrote:\n> If you did it that way then you'd not need that ugly kluge in\n> RemoveType. What you'd need instead is some smarts (a kluge!?) in\n> setting up the dependency. Currently that dependency is made in\n> TypeCreate which doesn't know what sort of relation it's creating\n> a type for. Probably the best answer is to pull that particular\n> dependency out of TypeCreate, and make it (in the proper direction)\n> in AddNewRelationType.\n\nFixed.\n\n> Also, I'm not following the point of the separation between\n> DefineCompositeType and DefineCompositeTypeRelation; nor do I see a need\n> for a CommandCounterIncrement call in there.\n\nFixed.\n\n\n> You have missed a number of places where this new relkind ought to\n> be special-cased the same way RELKIND_VIEW is --- for example\n> CheckAttributeNames and AddNewAttributeTuples, since a composite type\n> presumably shouldn't have system columns associated. 
I'd counsel\n> looking at all references to RELKIND_VIEW to see which places also need\n> to check for RELKIND_COMPOSITE_TYPE.\n\nYup, I had missed lots of things, not the least of which was pg_dump. \nNew patch attached includes pg_dump, psql (\\dT), docs, and regression \nsupport.\n\nThere is also a small adjustment to the expected output file for \nselect-having. I was getting a regression failure based on ordering of \nthe results, so I added ORDER BY clauses.\n\nPasses all regression tests. If no more objections, please apply.\n\nThanks,\n\nJoe", "msg_date": "Thu, 08 Aug 2002 15:48:41 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Shouldn't \"integer\" be recognized as a valid type?\n\nNot when double-quoted; you are being overzealous about quoting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Aug 2002 18:57:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:\n\tstand-alone composite types)" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>Shouldn't \"integer\" be recognized as a valid type?\n> Not when double-quoted; you are being overzealous about quoting.\n\nI figured that out (new patch works \"correctly\"), but it does seem \ninconsistent:\n\ntest=# create table tbl_0 (f1 \"integer\");\nERROR: Type \"integer\" does not exist\ntest=# create table \"tbl_0\" (\"f1\" integer);\nCREATE TABLE\ntest=# select * from tbl_0 ;\n f1\n----\n(0 rows)\n\ntest=# select f1 from tbl_0 ;\n f1\n----\n(0 rows)\n\ntest=# select \"f1\" from tbl_0 ;\n f1\n----\n(0 rows)\n\n\nFor table and column identifiers, if they were defined in all lowercase \nI can either quote them, or not -- it works either way. 
Same for *user* \ndefined data types:\n\n\ntest=# CREATE TYPE myint42 ( internallength = 4, input = int4in, output \n= int4out, default = '42', alignment = int4, storage = plain, \npassedbyvalue);\nCREATE TYPE\ntest=# create table \"tbl_1\" (\"f1\" myint42);\nCREATE TABLE\ntest=# create table \"tbl_2\" (\"f1\" \"myint42\");\nCREATE TABLE\ntest=# \\d tbl_2\n Table \"tbl_2\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | myint42 |\n\n\nBut *internal* data types only work unquoted.\n\nJoe\n\n", "msg_date": "Thu, 08 Aug 2002 16:40:00 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n> Shouldn't \"integer\" be recognized as a valid type?\n>> Not when double-quoted; you are being overzealous about quoting.\n\n> I figured that out (new patch works \"correctly\"), but it does seem \n> inconsistent:\n\nWould you expect \"point[]\" and point[] to be the same?\n\nThe actual type name is int4. int4 and \"int4\" act the same. Aliases\nlike int and integer are special keywords and so are not recognized\nwhen quoted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Aug 2002 23:51:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:\n\tstand-alone composite types)" }, { "msg_contents": "Joe Conway wrote:\n> There is also a small adjustment to the expected output file for \n> select-having. I was getting a regression failure based on ordering of \n> the results, so I added ORDER BY clauses.\n\nI now see that the select-having fails differently on my PC at home from \nthe one at work. 
At home I see in postgresql.conf:\n\nLC_MESSAGES = 'C'\nLC_MONETARY = 'C'\nLC_NUMERIC = 'C'\nLC_TIME = 'C'\n\nand at work:\n\nLC_MESSAGES = 'en_US'\nLC_MONETARY = 'en_US'\nLC_NUMERIC = 'en_US'\nLC_TIME = 'en_US'\n\nI have been running `make installcheck` instead of `make check`. I \ngather that `make installcheck` does not set LOCALE to 'C' (as make \ncheck does -- I think). Should it?\n\nPlease rip the select-having pieces out of this patch when it is \napplied, or let me know and I'll submit another patch.\n\nThanks,\n\nJoe\n\n", "msg_date": "Fri, 09 Aug 2002 09:01:56 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Joe Conway wrote:\n>> There is also a small adjustment to the expected output file for \n>> select-having. I was getting a regression failure based on ordering of \n>> the results, so I added ORDER BY clauses.\n\n> I now see that the select-having fails differently on my PC at home from \n> the one at work. At home I see in postgresql.conf:\n\n> LC_MESSAGES = 'C'\n> LC_MONETARY = 'C'\n> LC_NUMERIC = 'C'\n> LC_TIME = 'C'\n\n> and at work:\n\n> LC_MESSAGES = 'en_US'\n> LC_MONETARY = 'en_US'\n> LC_NUMERIC = 'en_US'\n> LC_TIME = 'en_US'\n\n> I have been running `make installcheck` instead of `make check`. I \n> gather that `make installcheck` does not set LOCALE to 'C' (as make \n> check does -- I think). Should it?\n\nThe problem is that LC_COLLATE is (presumably) also en_US at work;\n\"make installcheck\" hasn't got any way of changing the collation of\nthe preinstalled server, because that was frozen at initdb. 
See\nthe discussion in the regression-test documentation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Aug 2002 12:26:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal: " }, { "msg_contents": "Joe Conway writes:\n\n> 3. Modify CREATE FUNCTION to allow the implicit creation of a dependent\n> composite type, e.g.:\n\nForgive this blunt question, but: Why?\n\nOf course I can see the answer, it's convenient, but wouldn't the system\nbe more consistent overall if all functions and types are declared\nexplicitly?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 10 Aug 2002 00:53:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "Peter Eisentraut wrote:\n> Joe Conway writes:\n>>3. Modify CREATE FUNCTION to allow the implicit creation of a dependent\n>> composite type, e.g.:\n> \n> Forgive this blunt question, but: Why?\n\nNow's a *great* time for a blunt question because I haven't started \nactively working on this yet. Much better than after I'm done. ;-)\n\n\n> Of course I can see the answer, it's convenient, but wouldn't the system\n> be more consistent overall if all functions and types are declared\n> explicitly?\n> \n\nAnd of course you are correct. It is almost purely convenience. My \nreasoning was this: if I am creating a function which returns a \ncomposite type, then the fact that a named composite type exists is \nsuperfluous to me. It would be more natural for me to do:\n\n CREATE FUNCTION foo() RETURNS SETOF (f1 int, f2 text);\n\nthan to do:\n\n CREATE TYPE some_arbitrary_name AS (f1 int, f2 text);\n CREATE FUNCTION foo() RETURNS SETOF some_arbitrary_name;\n\nBut I admit it is only a \"nice-to-have\", not a \"need-to-have\".\n\nHow do others feel? 
Do we want to be able to implicitly create a \ncomposite type during function creation? Or is it unneeded bloat?\n\nI prefer the former, but don't have a strong argument against the latter.\n\nJoe\n\n\n", "msg_date": "Fri, 09 Aug 2002 16:03:55 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "I think it buys the same as SERIAL does for sequences.\n\nIs it likely to have more than one function using a complex type like\nthat? If not, then allowing its creation (not enforcing) could be\nuseful.\n\n\nOn Fri, 2002-08-09 at 19:03, Joe Conway wrote:\n> Peter Eisentraut wrote:\n> > Joe Conway writes:\n> >>3. Modify CREATE FUNCTION to allow the implicit creation of a dependent\n> >> composite type, e.g.:\n> > \n> > Forgive this blunt question, but: Why?\n> \n> Now's a *great* time for a blunt question because I haven't started \n> actively working on this yet. Much better than after I'm done. ;-)\n> \n> \n> > Of course I can see the answer, it's convenient, but wouldn't the system\n> > be more consistent overall if all functions and types are declared\n> > explicitly?\n> > \n> \n> And of course you are correct. It is almost purely convenience. My \n> reasoning was this: if I am creating a function which returns a \n> composite type, then the fact that a named composite type exists is \n> superfluous to me. It would be more natural for me to do:\n> \n> CREATE FUNCTION foo() RETURNS SETOF (f1 int, f2 text);\n> \n> than to do:\n> \n> CREATE TYPE some_arbitrary_name AS (f1 int, f2 text);\n> CREATE FUNCTION foo() RETURNS SETOF some_arbitrary_name;\n> \n> But I admit it is only a \"nice-to-have\", not a \"need-to-have\".\n> \n> How do others feel? Do we want to be able to implicitly create a \n> composite type during function creation? 
Or is it unneeded bloat?\n> \n> I prefer the former, but don't have a strong argument against the latter.\n> \n> Joe\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n", "msg_date": "09 Aug 2002 19:08:11 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "Rod Taylor wrote:\n> I think it buys the same as SERIAL does for sequences.\n\nThat's a great analogy.\n\n> Is it likely to have more than one function using a complex type like\n> that? If not, then allowing its creation (not enforcing) could be\n> useful.\n\nThat's what I was thinking. In cases where you want to use the type for \nseveral functions, use CREATE TYPE. If you only need the type for one \nfunction, let the function creation process manage it for you.\n\nJoe\n\n", "msg_date": "Fri, 09 Aug 2002 16:13:30 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "\n> > Is it likely to have more than one function using a complex type like\n> > that? If not, then allowing its creation (not enforcing) could be\n> > useful.\n> \n> That's what I was thinking. In cases where you want to use the type for \n> several functions, use CREATE TYPE. If you only need the type for one \n> function, let the function creation process manage it for you.\n\nSo long as the type disappears with the drop of the function. But don't\nmake stuff you don't clean up :)\n\n\n", "msg_date": "09 Aug 2002 19:46:28 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "> That's what I was thinking. 
In cases where you want to use the type for\n> several functions, use CREATE TYPE. If you only need the type for one\n> function, let the function creation process manage it for you.\n\nIt would be nice then to have some mechanism for converting the\n\"automatic type\" to a named type which could be used elsewhere.\nOtherwise one would need to garbage collect the separate stuff later,\nwhich would probably go into the \"not so convenient\" category of\nfeatures...\n\n - Thomas\n", "msg_date": "Fri, 09 Aug 2002 17:26:30 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "Thomas Lockhart wrote:\n>>That's what I was thinking. In cases where you want to use the type for\n>>several functions, use CREATE TYPE. If you only need the type for one\n>>function, let the function creation process manage it for you.\n> \n> It would be nice then to have some mechanism for converting the\n> \"automatic type\" to a named type which could be used elsewhere.\n> Otherwise one would need to garbage collect the separate stuff later,\n> which would probably go into the \"not so convenient\" category of\n> features...\n\nWell I think that could be handled with the new dependency tracking \nsystem. Same as the SERIAL/sequence analogy -- when you drop the \nfunction, the type would automatically and transparently also get dropped.\n\nJoe\n\n\n", "msg_date": "Fri, 09 Aug 2002 17:36:46 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "Joe Conway wrote:\n> Thomas Lockhart wrote:\n> >>That's what I was thinking. In cases where you want to use the type for\n> >>several functions, use CREATE TYPE. 
If you only need the type for one\n> >>function, let the function creation process manage it for you.\n> > \n> > It would be nice then to have some mechanism for converting the\n> > \"automatic type\" to a named type which could be used elsewhere.\n> > Otherwise one would need to garbage collect the separate stuff later,\n> > which would probably go into the \"not so convenient\" category of\n> > features...\n> \n> Well I think that could be handled with the new dependency tracking \n> system. Same as the SERIAL/sequence analogy -- when you drop the \n> function, the type would automatically and transparently also get dropped.\n\nAll this type extension stuff is complex. If we can make it easier for\npeople to get started with it, we should.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 9 Aug 2002 21:59:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Joe Conway writes:\n>> 3. Modify CREATE FUNCTION to allow the implicit creation of a dependent\n>> composite type, e.g.:\n\n> Forgive this blunt question, but: Why?\n\n> Of course I can see the answer, it's convenient, but wouldn't the system\n> be more consistent overall if all functions and types are declared\n> explicitly?\n\nI was wondering about that too, in particular: what name are you going\nto give to the implicit type, and what if it conflicts?\n\nThe already-accepted mechanism for anonymous function-result types for\nRECORD functions doesn't have that problem, because it has no need to\ncreate a catalog entry for the anonymous type. 
But I'm not sure what\nto do for record types that need to be present in the catalogs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Aug 2002 00:52:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types " }, { "msg_contents": "Tom Lane wrote:\n> I was wondering about that too, in particular: what name are you going\n> to give to the implicit type, and what if it conflicts?\n> \n> The already-accepted mechanism for anonymous function-result types for\n> RECORD functions doesn't have that problem, because it has no need to\n> create a catalog entry for the anonymous type. But I'm not sure what\n> to do for record types that need to be present in the catalogs.\n\nI was intending to use the same naming method used for SERIAL sequences.\n\nBut since the poll from this afternoon only showed weak support and \nrelatively strong objections, I'm OK with putting this aside for now. If \nenough people seem interested once they start using table functions in \n7.3, we can always resurrect this idea.\n\nThe most important changes (IMHO) were the \"anonymous type\" and \"CREATE \nTYPE x AS()\" pieces anyway, so I'm happy where we are (at least once the \nstand-alone composite type patch is applied ;) ). Onward and upward...\n\nJoe\n\n\n\n", "msg_date": "Fri, 09 Aug 2002 22:12:26 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "> than to do:\n>\n> CREATE TYPE some_arbitrary_name AS (f1 int, f2 text);\n> CREATE FUNCTION foo() RETURNS SETOF some_arbitrary_name;\n>\n> But I admit it is only a \"nice-to-have\", not a \"need-to-have\".\n>\n> How do others feel? Do we want to be able to implicitly create a\n> composite type during function creation? 
Or is it unneeded bloat?\n>\n> I prefer the former, but don't have a strong argument against the latter.\n\nThe former is super sweet, but does require some extra catalog entries for\nevery procedure - but that's the DBA's problem. They can always use the\nlatter syntax. The former syntax is cool and easy and it Should Just Work\nfor newbies...\n\nChris\n\n\n", "msg_date": "Sat, 10 Aug 2002 17:58:17 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Proposal: stand-alone composite types" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > If you did it that way then you'd not need that ugly kluge in\n> > RemoveType. What you'd need instead is some smarts (a kluge!?) in\n> > setting up the dependency. Currently that dependency is made in\n> > TypeCreate which doesn't know what sort of relation it's creating\n> > a type for. Probably the best answer is to pull that particular\n> > dependency out of TypeCreate, and make it (in the proper direction)\n> > in AddNewRelationType.\n> \n> Fixed.\n> \n> > Also, I'm not following the point of the separation between\n> > DefineCompositeType and DefineCompositeTypeRelation; nor do I see a need\n> > for a CommandCounterIncrement call in there.\n> \n> Fixed.\n> \n> \n> > You have missed a number of places where this new relkind ought to\n> > be special-cased the same way RELKIND_VIEW is --- for example\n> > CheckAttributeNames and AddNewAttributeTuples, since a composite type\n> > presumably shouldn't have system columns associated. 
I'd counsel\n> > looking at all references to RELKIND_VIEW to see which places also need\n> > to check for RELKIND_COMPOSITE_TYPE.\n> \n> Yup, I had missed lots of things, not the least of which was pg_dump. \n> New patch attached includes pg_dump, psql (\\dT), docs, and regression \n> support.\n> \n> There is also a small adjustment to the expected output file for \n> select-having. I was getting a regression failure based on ordering of \n> the results, so I added ORDER BY clauses.\n> \n> Passes all regression tests. If no more objections, please apply.\n> \n> Thanks,\n> \n> Joe\n> \n\n> Index: doc/src/sgml/ref/create_type.sgml\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/doc/src/sgml/ref/create_type.sgml,v\n> retrieving revision 1.30\n> diff -c -r1.30 create_type.sgml\n> *** doc/src/sgml/ref/create_type.sgml\t24 Jul 2002 19:11:07 -0000\t1.30\n> --- doc/src/sgml/ref/create_type.sgml\t8 Aug 2002 14:49:57 -0000\n> ***************\n> *** 30,35 ****\n> --- 30,42 ----\n> [ , ALIGNMENT = <replaceable class=\"parameter\">alignment</replaceable> ]\n> [ , STORAGE = <replaceable class=\"parameter\">storage</replaceable> ]\n> )\n> + \n> + CREATE TYPE <replaceable class=\"parameter\">typename</replaceable> AS\n> + ( <replaceable class=\"PARAMETER\">column_definition_list</replaceable> )\n> + \n> + where <replaceable class=\"PARAMETER\">column_definition_list</replaceable> can be:\n> + \n> + ( <replaceable class=\"PARAMETER\">column_name</replaceable> <replaceable class=\"PARAMETER\">data_type</replaceable> [, ... 
] )\n> </synopsis>\n> \n> <refsect2 id=\"R2-SQL-CREATETYPE-1\">\n> ***************\n> *** 138,143 ****\n> --- 145,169 ----\n> </para>\n> </listitem>\n> </varlistentry>\n> + \n> + <varlistentry>\n> + <term><replaceable class=\"PARAMETER\">column_name</replaceable></term>\n> + <listitem>\n> + <para>\n> + The name of a column of the composite type.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> + <term><replaceable class=\"PARAMETER\">data_type</replaceable></term>\n> + <listitem>\n> + <para>\n> + The name of an existing data type.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> </variablelist>\n> </para>\n> </refsect2>\n> ***************\n> *** 191,199 ****\n> </para>\n> \n> <para>\n> ! <command>CREATE TYPE</command> requires the registration of two functions\n> ! (using CREATE FUNCTION) before defining the type. The\n> ! representation of a new base type is determined by\n> <replaceable class=\"parameter\">input_function</replaceable>, which\n> converts the type's external representation to an internal\n> representation usable by the\n> --- 217,225 ----\n> </para>\n> \n> <para>\n> ! The first form of <command>CREATE TYPE</command> requires the \n> ! registration of two functions (using CREATE FUNCTION) before defining the\n> ! type. The representation of a new base type is determined by\n> <replaceable class=\"parameter\">input_function</replaceable>, which\n> converts the type's external representation to an internal\n> representation usable by the\n> ***************\n> *** 288,293 ****\n> --- 314,327 ----\n> <literal>extended</literal> and <literal>external</literal> items.)\n> </para>\n> \n> + <para>\n> + The second form of <command>CREATE TYPE</command> requires a column\n> + definition list in the form ( <replaceable class=\"PARAMETER\">column_name</replaceable> \n> + <replaceable class=\"PARAMETER\">data_type</replaceable> [, ... ] ). 
This\n> + creates a composite type, similar to that of a TABLE or VIEW relation.\n> + A stand-alone composite type is useful as the return type of FUNCTION.\n> + </para>\n> + \n> <refsect2>\n> <title>Array Types</title>\n> \n> ***************\n> *** 370,375 ****\n> --- 404,418 ----\n> CREATE TYPE bigobj (INPUT = lo_filein, OUTPUT = lo_fileout,\n> INTERNALLENGTH = VARIABLE);\n> CREATE TABLE big_objs (id int4, obj bigobj);\n> + </programlisting>\n> + </para>\n> + \n> + <para>\n> + This example creates a composite type and uses it in\n> + a table function definition:\n> + <programlisting>\n> + CREATE TYPE compfoo AS (f1 int, f2 int);\n> + CREATE FUNCTION getfoo() RETURNS SETOF compfoo AS 'SELECT fooid, foorefid FROM foo' LANGUAGE SQL;\n> </programlisting>\n> </para>\n> </refsect1>\n> Index: src/backend/catalog/heap.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/catalog/heap.c,v\n> retrieving revision 1.219\n> diff -c -r1.219 heap.c\n> *** src/backend/catalog/heap.c\t6 Aug 2002 02:36:33 -0000\t1.219\n> --- src/backend/catalog/heap.c\t8 Aug 2002 14:49:57 -0000\n> ***************\n> *** 358,364 ****\n> \t *\n> \t * Skip this for a view, since it doesn't have system attributes.\n> \t */\n> ! \tif (relkind != RELKIND_VIEW)\n> \t{\n> \t\tfor (i = 0; i < natts; i++)\n> \t\t{\n> --- 358,364 ----\n> \t *\n> \t * Skip this for a view, since it doesn't have system attributes.\n> \t */\n> ! \tif (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)\n> \t{\n> \t\tfor (i = 0; i < natts; i++)\n> \t\t{\n> ***************\n> *** 475,481 ****\n> \t * Skip all for a view. We don't bother with making datatype\n> \t * dependencies here, since presumably all these types are pinned.\n> \t */\n> ! \tif (relkind != RELKIND_VIEW)\n> \t{\n> \t\tdpp = SysAtt;\n> \t\tfor (i = 0; i < -1 - FirstLowInvalidHeapAttributeNumber; i++)\n> --- 475,481 ----\n> \t * Skip all for a view. 
We don't bother with making datatype\n> \t * dependencies here, since presumably all these types are pinned.\n> \t */\n> ! \tif (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)\n> \t{\n> \t\tdpp = SysAtt;\n> \t\tfor (i = 0; i < -1 - FirstLowInvalidHeapAttributeNumber; i++)\n> ***************\n> *** 764,770 ****\n> \t/*\n> \t * We create the disk file for this relation here\n> \t */\n> ! \tif (relkind != RELKIND_VIEW)\n> \t\theap_storage_create(new_rel_desc);\n> \n> \t/*\n> --- 764,770 ----\n> \t/*\n> \t * We create the disk file for this relation here\n> \t */\n> ! \tif (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)\n> \t\theap_storage_create(new_rel_desc);\n> \n> \t/*\n> ***************\n> *** 1135,1141 ****\n> \t/*\n> \t * unlink the relation's physical file and finish up.\n> \t */\n> ! \tif (rel->rd_rel->relkind != RELKIND_VIEW)\n> \t\tsmgrunlink(DEFAULT_SMGR, rel);\n> \n> \t/*\n> --- 1135,1142 ----\n> \t/*\n> \t * unlink the relation's physical file and finish up.\n> \t */\n> ! \tif (rel->rd_rel->relkind != RELKIND_VIEW &&\n> ! 
\t\t\trel->rd_rel->relkind != RELKIND_COMPOSITE_TYPE)\n> \t\tsmgrunlink(DEFAULT_SMGR, rel);\n> \n> \t/*\n> Index: src/backend/catalog/namespace.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/catalog/namespace.c,v\n> retrieving revision 1.29\n> diff -c -r1.29 namespace.c\n> *** src/backend/catalog/namespace.c\t8 Aug 2002 01:44:30 -0000\t1.29\n> --- src/backend/catalog/namespace.c\t8 Aug 2002 14:49:57 -0000\n> ***************\n> *** 1578,1583 ****\n> --- 1578,1584 ----\n> \t\t\tcase RELKIND_RELATION:\n> \t\t\tcase RELKIND_SEQUENCE:\n> \t\t\tcase RELKIND_VIEW:\n> + \t\t\tcase RELKIND_COMPOSITE_TYPE:\n> \t\t\t\tAssertTupleDescHasOid(pgclass->rd_att);\n> \t\t\t\tobject.classId = RelOid_pg_class;\n> \t\t\t\tobject.objectId = HeapTupleGetOid(tuple);\n> Index: src/backend/catalog/pg_type.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/catalog/pg_type.c,v\n> retrieving revision 1.77\n> diff -c -r1.77 pg_type.c\n> *** src/backend/catalog/pg_type.c\t5 Aug 2002 03:29:16 -0000\t1.77\n> --- src/backend/catalog/pg_type.c\t8 Aug 2002 17:30:27 -0000\n> ***************\n> *** 311,325 ****\n> \n> \t\t/*\n> \t\t * If the type is a rowtype for a relation, mark it as internally\n> ! \t\t * dependent on the relation. This allows it to be auto-dropped\n> ! \t\t * when the relation is, and not otherwise.\n> \t\t */\n> \t\tif (OidIsValid(relationOid))\n> \t\t{\n> \t\t\treferenced.classId = RelOid_pg_class;\n> \t\t\treferenced.objectId = relationOid;\n> \t\t\treferenced.objectSubId = 0;\n> ! \t\t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);\n> \t\t}\n> \n> \t\t/*\n> --- 311,338 ----\n> \n> \t\t/*\n> \t\t * If the type is a rowtype for a relation, mark it as internally\n> ! \t\t * dependent on the relation, *unless* it is a stand-alone composite\n> ! \t\t * type relation. For the latter case, we have to reverse the\n> ! 
\t\t * dependency.\n> ! \t\t *\n> ! \t\t * In the former case, this allows the type to be auto-dropped\n> ! \t\t * when the relation is, and not otherwise. And in the latter,\n> ! \t\t * of course we get the opposite effect.\n> \t\t */\n> \t\tif (OidIsValid(relationOid))\n> \t\t{\n> + \t\t\tRelation\trel = relation_open(relationOid, AccessShareLock);\n> + \t\t\tchar\t\trelkind = rel->rd_rel->relkind;\n> + \t\t\trelation_close(rel, AccessShareLock);\n> + \n> \t\t\treferenced.classId = RelOid_pg_class;\n> \t\t\treferenced.objectId = relationOid;\n> \t\t\treferenced.objectSubId = 0;\n> ! \n> ! \t\t\tif (relkind != RELKIND_COMPOSITE_TYPE)\n> ! \t\t\t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);\n> ! \t\t\telse\n> ! \t\t\t\trecordDependencyOn(&referenced, &myself, DEPENDENCY_INTERNAL);\n> \t\t}\n> \n> \t\t/*\n> Index: src/backend/commands/copy.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/commands/copy.c,v\n> retrieving revision 1.162\n> diff -c -r1.162 copy.c\n> *** src/backend/commands/copy.c\t2 Aug 2002 18:15:06 -0000\t1.162\n> --- src/backend/commands/copy.c\t8 Aug 2002 14:49:57 -0000\n> ***************\n> *** 398,403 ****\n> --- 398,406 ----\n> \t\t\tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t\t\t\telog(ERROR, \"You cannot copy view %s\",\n> \t\t\t\t\t RelationGetRelationName(rel));\n> + \t\t\telse if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\t\t\telog(ERROR, \"You cannot copy type relation %s\",\n> + \t\t\t\t\t RelationGetRelationName(rel));\n> \t\t\telse if (rel->rd_rel->relkind == RELKIND_SEQUENCE)\n> \t\t\t\telog(ERROR, \"You cannot change sequence relation %s\",\n> \t\t\t\t\t RelationGetRelationName(rel));\n> ***************\n> *** 442,447 ****\n> --- 445,453 ----\n> \t\t{\n> \t\t\tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t\t\t\telog(ERROR, \"You cannot copy view %s\",\n> + \t\t\t\t\t RelationGetRelationName(rel));\n> + \t\t\telse if 
(rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\t\t\telog(ERROR, \"You cannot copy type relation %s\",\n> \t\t\t\t\t RelationGetRelationName(rel));\n> \t\t\telse if (rel->rd_rel->relkind == RELKIND_SEQUENCE)\n> \t\t\t\telog(ERROR, \"You cannot copy sequence %s\",\n> Index: src/backend/commands/tablecmds.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/commands/tablecmds.c,v\n> retrieving revision 1.28\n> diff -c -r1.28 tablecmds.c\n> *** src/backend/commands/tablecmds.c\t7 Aug 2002 21:45:01 -0000\t1.28\n> --- src/backend/commands/tablecmds.c\t8 Aug 2002 18:04:58 -0000\n> ***************\n> *** 345,350 ****\n> --- 345,354 ----\n> \t\telog(ERROR, \"TRUNCATE cannot be used on views. '%s' is a view\",\n> \t\t\t RelationGetRelationName(rel));\n> \n> + \tif (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\telog(ERROR, \"TRUNCATE cannot be used on type relations. '%s' is a type\",\n> + \t\t\t RelationGetRelationName(rel));\n> + \n> \tif (!allowSystemTableMods && IsSystemRelation(rel))\n> \t\telog(ERROR, \"TRUNCATE cannot be used on system tables. '%s' is a system table\",\n> \t\t\t RelationGetRelationName(rel));\n> ***************\n> *** 3210,3221 ****\n> \t\tcase RELKIND_RELATION:\n> \t\tcase RELKIND_INDEX:\n> \t\tcase RELKIND_VIEW:\n> \t\tcase RELKIND_SEQUENCE:\n> \t\tcase RELKIND_TOASTVALUE:\n> \t\t\t/* ok to change owner */\n> \t\t\tbreak;\n> \t\tdefault:\n> ! \t\t\telog(ERROR, \"ALTER TABLE: relation \\\"%s\\\" is not a table, TOAST table, index, view, or sequence\",\n> \t\t\t\t NameStr(tuple_class->relname));\n> \t}\n> }\n> --- 3214,3226 ----\n> \t\tcase RELKIND_RELATION:\n> \t\tcase RELKIND_INDEX:\n> \t\tcase RELKIND_VIEW:\n> + \t\tcase RELKIND_COMPOSITE_TYPE:\n> \t\tcase RELKIND_SEQUENCE:\n> \t\tcase RELKIND_TOASTVALUE:\n> \t\t\t/* ok to change owner */\n> \t\t\tbreak;\n> \t\tdefault:\n> ! 
\t\t\telog(ERROR, \"ALTER TABLE: relation \\\"%s\\\" is not a table, TOAST table, index, view, type, or sequence\",\n> \t\t\t\t NameStr(tuple_class->relname));\n> \t}\n> }\n> Index: src/backend/commands/typecmds.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/commands/typecmds.c,v\n> retrieving revision 1.8\n> diff -c -r1.8 typecmds.c\n> *** src/backend/commands/typecmds.c\t24 Jul 2002 19:11:09 -0000\t1.8\n> --- src/backend/commands/typecmds.c\t8 Aug 2002 17:33:43 -0000\n> ***************\n> *** 38,43 ****\n> --- 38,44 ----\n> #include \"catalog/namespace.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"commands/defrem.h\"\n> + #include \"commands/tablecmds.h\"\n> #include \"miscadmin.h\"\n> #include \"parser/parse_func.h\"\n> #include \"parser/parse_type.h\"\n> ***************\n> *** 50,56 ****\n> \n> static Oid findTypeIOFunction(List *procname, bool isOutput);\n> \n> - \n> /*\n> * DefineType\n> *\t\tRegisters a new type.\n> --- 51,56 ----\n> ***************\n> *** 665,668 ****\n> --- 665,707 ----\n> \t}\n> \n> \treturn procOid;\n> + }\n> + \n> + /*-------------------------------------------------------------------\n> + * DefineCompositeType\n> + *\n> + * Create a Composite Type relation.\n> + * `DefineRelation' does all the work, we just provide the correct\n> + * arguments!\n> + *\n> + * If the relation already exists, then 'DefineRelation' will abort\n> + * the xact...\n> + *\n> + * DefineCompositeType returns relid for use when creating\n> + * an implicit composite type during function creation\n> + *-------------------------------------------------------------------\n> + */\n> + Oid\n> + DefineCompositeType(const RangeVar *typevar, List *coldeflist)\n> + {\n> + \tCreateStmt *createStmt = makeNode(CreateStmt);\n> + \n> + \tif (coldeflist == NIL)\n> + \t\telog(ERROR, \"attempted to define composite type relation with\"\n> + \t\t\t\t\t\" no attrs\");\n> + \n> + \t/*\n> + \t * now 
create the parameters for keys/inheritance etc. All of them are\n> + \t * nil...\n> + \t */\n> + \tcreateStmt->relation = (RangeVar *) typevar;\n> + \tcreateStmt->tableElts = coldeflist;\n> + \tcreateStmt->inhRelations = NIL;\n> + \tcreateStmt->constraints = NIL;\n> + \tcreateStmt->hasoids = false;\n> + \n> + \t/*\n> + \t * finally create the relation...\n> + \t */\n> + \treturn DefineRelation(createStmt, RELKIND_COMPOSITE_TYPE);\n> }\n> Index: src/backend/executor/execMain.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/executor/execMain.c,v\n> retrieving revision 1.173\n> diff -c -r1.173 execMain.c\n> *** src/backend/executor/execMain.c\t7 Aug 2002 21:45:02 -0000\t1.173\n> --- src/backend/executor/execMain.c\t8 Aug 2002 14:49:57 -0000\n> ***************\n> *** 786,791 ****\n> --- 786,795 ----\n> \t\t\telog(ERROR, \"You can't change view relation %s\",\n> \t\t\t\t RelationGetRelationName(resultRelationDesc));\n> \t\t\tbreak;\n> + \t\tcase RELKIND_COMPOSITE_TYPE:\n> + \t\t\telog(ERROR, \"You can't change type relation %s\",\n> + \t\t\t\t RelationGetRelationName(resultRelationDesc));\n> + \t\t\tbreak;\n> \t}\n> \n> \tMemSet(resultRelInfo, 0, sizeof(ResultRelInfo));\n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.200\n> diff -c -r1.200 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t4 Aug 2002 19:48:09 -0000\t1.200\n> --- src/backend/nodes/copyfuncs.c\t8 Aug 2002 14:49:58 -0000\n> ***************\n> *** 2233,2238 ****\n> --- 2233,2249 ----\n> \treturn newnode;\n> }\n> \n> + static CompositeTypeStmt *\n> + _copyCompositeTypeStmt(CompositeTypeStmt *from)\n> + {\n> + \tCompositeTypeStmt *newnode = makeNode(CompositeTypeStmt);\n> + \n> + \tNode_Copy(from, newnode, typevar);\n> + \tNode_Copy(from, newnode, coldeflist);\n> + \n> + \treturn 
newnode;\n> + }\n> + \n> static ViewStmt *\n> _copyViewStmt(ViewStmt *from)\n> {\n> ***************\n> *** 2938,2943 ****\n> --- 2949,2957 ----\n> \t\t\tbreak;\n> \t\tcase T_TransactionStmt:\n> \t\t\tretval = _copyTransactionStmt(from);\n> + \t\t\tbreak;\n> + \t\tcase T_CompositeTypeStmt:\n> + \t\t\tretval = _copyCompositeTypeStmt(from);\n> \t\t\tbreak;\n> \t\tcase T_ViewStmt:\n> \t\t\tretval = _copyViewStmt(from);\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.149\n> diff -c -r1.149 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t4 Aug 2002 23:49:59 -0000\t1.149\n> --- src/backend/nodes/equalfuncs.c\t8 Aug 2002 14:49:58 -0000\n> ***************\n> *** 1062,1067 ****\n> --- 1062,1078 ----\n> }\n> \n> static bool\n> + _equalCompositeTypeStmt(CompositeTypeStmt *a, CompositeTypeStmt *b)\n> + {\n> + \tif (!equal(a->typevar, b->typevar))\n> + \t\treturn false;\n> + \tif (!equal(a->coldeflist, b->coldeflist))\n> + \t\treturn false;\n> + \n> + \treturn true;\n> + }\n> + \n> + static bool\n> _equalViewStmt(ViewStmt *a, ViewStmt *b)\n> {\n> \tif (!equal(a->view, b->view))\n> ***************\n> *** 2110,2115 ****\n> --- 2121,2129 ----\n> \t\t\tbreak;\n> \t\tcase T_TransactionStmt:\n> \t\t\tretval = _equalTransactionStmt(a, b);\n> + \t\t\tbreak;\n> + \t\tcase T_CompositeTypeStmt:\n> + \t\t\tretval = _equalCompositeTypeStmt(a, b);\n> \t\t\tbreak;\n> \t\tcase T_ViewStmt:\n> \t\t\tretval = _equalViewStmt(a, b);\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/parser/gram.y,v\n> retrieving revision 2.357\n> diff -c -r2.357 gram.y\n> *** src/backend/parser/gram.y\t6 Aug 2002 05:40:45 -0000\t2.357\n> --- src/backend/parser/gram.y\t8 Aug 2002 14:49:58 -0000\n> ***************\n> *** 205,211 ****\n> \n> %type 
<list>\tstmtblock, stmtmulti,\n> \t\t\t\tOptTableElementList, TableElementList, OptInherit, definition,\n> ! \t\t\t\topt_distinct, opt_definition, func_args,\n> \t\t\t\tfunc_args_list, func_as, createfunc_opt_list\n> \t\t\t\toper_argtypes, RuleActionList, RuleActionMulti,\n> \t\t\t\topt_column_list, columnList, opt_name_list,\n> --- 205,211 ----\n> \n> %type <list>\tstmtblock, stmtmulti,\n> \t\t\t\tOptTableElementList, TableElementList, OptInherit, definition,\n> ! \t\t\t\topt_distinct, opt_definition, func_args, rowdefinition\n> \t\t\t\tfunc_args_list, func_as, createfunc_opt_list\n> \t\t\t\toper_argtypes, RuleActionList, RuleActionMulti,\n> \t\t\t\topt_column_list, columnList, opt_name_list,\n> ***************\n> *** 2247,2252 ****\n> --- 2247,2285 ----\n> \t\t\t\t\tn->definition = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> + \t\t\t| CREATE TYPE_P any_name AS rowdefinition\n> + \t\t\t\t{\n> + \t\t\t\t\tCompositeTypeStmt *n = makeNode(CompositeTypeStmt);\n> + \t\t\t\t\tRangeVar *r = makeNode(RangeVar);\n> + \n> + \t\t\t\t\tswitch (length($3))\n> + \t\t\t\t\t{\n> + \t\t\t\t\t\tcase 1:\n> + \t\t\t\t\t\t\tr->catalogname = NULL;\n> + \t\t\t\t\t\t\tr->schemaname = NULL;\n> + \t\t\t\t\t\t\tr->relname = strVal(lfirst($3));\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\tcase 2:\n> + \t\t\t\t\t\t\tr->catalogname = NULL;\n> + \t\t\t\t\t\t\tr->schemaname = strVal(lfirst($3));\n> + \t\t\t\t\t\t\tr->relname = strVal(lsecond($3));\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\tcase 3:\n> + \t\t\t\t\t\t\tr->catalogname = strVal(lfirst($3));\n> + \t\t\t\t\t\t\tr->schemaname = strVal(lsecond($3));\n> + \t\t\t\t\t\t\tr->relname = strVal(lfirst(lnext(lnext($3))));\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\tdefault:\n> + \t\t\t\t\t\t\telog(ERROR,\n> + \t\t\t\t\t\t\t\"Improper qualified name \"\n> + \t\t\t\t\t\t\t\"(too many dotted names): %s\",\n> + \t\t\t\t\t\t\t\t NameListToString($3));\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t}\n> + \t\t\t\t\tn->typevar = r;\n> + 
\t\t\t\t\tn->coldeflist = $5;\n> + \t\t\t\t\t$$ = (Node *)n;\n> + \t\t\t\t}\n> \t\t\t| CREATE CHARACTER SET opt_as any_name GET definition opt_collate\n> \t\t\t\t{\n> \t\t\t\t\tDefineStmt *n = makeNode(DefineStmt);\n> ***************\n> *** 2255,2260 ****\n> --- 2288,2296 ----\n> \t\t\t\t\tn->definition = $7;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> + \t\t;\n> + \n> + rowdefinition: '(' TableFuncElementList ')'\t\t\t{ $$ = $2; }\n> \t\t;\n> \n> definition: '(' def_list ')'\t\t\t\t\t\t{ $$ = $2; }\n> Index: src/backend/storage/buffer/bufmgr.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/storage/buffer/bufmgr.c,v\n> retrieving revision 1.128\n> diff -c -r1.128 bufmgr.c\n> *** src/backend/storage/buffer/bufmgr.c\t6 Aug 2002 02:36:34 -0000\t1.128\n> --- src/backend/storage/buffer/bufmgr.c\t8 Aug 2002 14:49:58 -0000\n> ***************\n> *** 1056,1061 ****\n> --- 1056,1063 ----\n> \t */\n> \tif (relation->rd_rel->relkind == RELKIND_VIEW)\n> \t\trelation->rd_nblocks = 0;\n> + \telse if (relation->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\trelation->rd_nblocks = 0;\n> \telse if (!relation->rd_isnew && !relation->rd_istemp)\n> \t\trelation->rd_nblocks = smgrnblocks(DEFAULT_SMGR, relation);\n> \treturn relation->rd_nblocks;\n> Index: src/backend/storage/smgr/smgr.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/storage/smgr/smgr.c,v\n> retrieving revision 1.58\n> diff -c -r1.58 smgr.c\n> *** src/backend/storage/smgr/smgr.c\t6 Aug 2002 02:36:34 -0000\t1.58\n> --- src/backend/storage/smgr/smgr.c\t8 Aug 2002 14:49:59 -0000\n> ***************\n> *** 263,268 ****\n> --- 263,270 ----\n> \n> \tif (reln->rd_rel->relkind == RELKIND_VIEW)\n> \t\treturn -1;\n> + \tif (reln->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\treturn -1;\n> \tif ((fd = (*(smgrsw[which].smgr_open)) (reln)) < 0)\n> \t\tif (!failOK)\n> 
\t\t\telog(ERROR, \"cannot open %s: %m\", RelationGetRelationName(reln));\n> Index: src/backend/tcop/postgres.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/tcop/postgres.c,v\n> retrieving revision 1.280\n> diff -c -r1.280 postgres.c\n> *** src/backend/tcop/postgres.c\t6 Aug 2002 05:24:04 -0000\t1.280\n> --- src/backend/tcop/postgres.c\t8 Aug 2002 14:49:59 -0000\n> ***************\n> *** 2264,2269 ****\n> --- 2264,2273 ----\n> \t\t\t}\n> \t\t\tbreak;\n> \n> + \t\tcase T_CompositeTypeStmt:\n> + \t\t\ttag = \"CREATE TYPE\";\n> + \t\t\tbreak;\n> + \n> \t\tcase T_ViewStmt:\n> \t\t\ttag = \"CREATE VIEW\";\n> \t\t\tbreak;\n> Index: src/backend/tcop/utility.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/tcop/utility.c,v\n> retrieving revision 1.169\n> diff -c -r1.169 utility.c\n> *** src/backend/tcop/utility.c\t7 Aug 2002 21:45:02 -0000\t1.169\n> --- src/backend/tcop/utility.c\t8 Aug 2002 14:49:59 -0000\n> ***************\n> *** 70,75 ****\n> --- 70,76 ----\n> \t{RELKIND_SEQUENCE, \"a\", \"sequence\", \"SEQUENCE\"},\n> \t{RELKIND_VIEW, \"a\", \"view\", \"VIEW\"},\n> \t{RELKIND_INDEX, \"an\", \"index\", \"INDEX\"},\n> + \t{RELKIND_COMPOSITE_TYPE, \"a\", \"type\", \"TYPE\"},\n> \t{'\\0', \"a\", \"???\", \"???\"}\n> };\n> \n> ***************\n> *** 570,575 ****\n> --- 571,589 ----\n> \t\t\t\t\t\tDefineAggregate(stmt->defnames, stmt->definition);\n> \t\t\t\t\t\tbreak;\n> \t\t\t\t}\n> + \t\t\t}\n> + \t\t\tbreak;\n> + \n> + \t\tcase T_CompositeTypeStmt:\t\t/* CREATE TYPE (composite) */\n> + \t\t\t{\n> + \t\t\t\tOid\trelid;\n> + \t\t\t\tCompositeTypeStmt *stmt = (CompositeTypeStmt *) parsetree;\n> + \n> + \t\t\t\t/*\n> + \t\t\t\t * DefineCompositeType returns relid for use when creating\n> + \t\t\t\t * an implicit composite type during function creation\n> + \t\t\t\t */\n> + \t\t\t\trelid = DefineCompositeType(stmt->typevar, 
stmt->coldeflist);\n> \t\t\t}\n> \t\t\tbreak;\n> \n> Index: src/backend/utils/adt/tid.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/utils/adt/tid.c,v\n> retrieving revision 1.32\n> diff -c -r1.32 tid.c\n> *** src/backend/utils/adt/tid.c\t16 Jul 2002 17:55:25 -0000\t1.32\n> --- src/backend/utils/adt/tid.c\t8 Aug 2002 14:49:59 -0000\n> ***************\n> *** 226,231 ****\n> --- 226,234 ----\n> \tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t\treturn currtid_for_view(rel, tid);\n> \n> + \tif (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\telog(ERROR, \"currtid can't handle type relations\");\n> + \n> \tItemPointerCopy(tid, result);\n> \theap_get_latest_tid(rel, SnapshotNow, result);\n> \n> ***************\n> *** 248,253 ****\n> --- 251,259 ----\n> \trel = heap_openrv(relrv, AccessShareLock);\n> \tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t\treturn currtid_for_view(rel, tid);\n> + \n> + \tif (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\telog(ERROR, \"currtid can't handle type relations\");\n> \n> \tresult = (ItemPointer) palloc(sizeof(ItemPointerData));\n> \tItemPointerCopy(tid, result);\n> Index: src/bin/pg_dump/common.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_dump/common.c,v\n> retrieving revision 1.67\n> diff -c -r1.67 common.c\n> *** src/bin/pg_dump/common.c\t30 Jul 2002 21:56:04 -0000\t1.67\n> --- src/bin/pg_dump/common.c\t8 Aug 2002 14:49:59 -0000\n> ***************\n> *** 215,223 ****\n> \n> \tfor (i = 0; i < numTables; i++)\n> \t{\n> ! \t\t/* Sequences and views never have parents */\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_VIEW)\n> \t\t\tcontinue;\n> \n> \t\t/* Don't bother computing anything for non-target tables, either */\n> --- 215,224 ----\n> \n> \tfor (i = 0; i < numTables; i++)\n> \t{\n> ! 
\t\t/* Sequences, views, and types never have parents */\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_VIEW ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_COMPOSITE_TYPE)\n> \t\t\tcontinue;\n> \n> \t\t/* Don't bother computing anything for non-target tables, either */\n> ***************\n> *** 269,277 ****\n> \n> \tfor (i = 0; i < numTables; i++)\n> \t{\n> ! \t\t/* Sequences and views never have parents */\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_VIEW)\n> \t\t\tcontinue;\n> \n> \t\t/* Don't bother computing anything for non-target tables, either */\n> --- 270,279 ----\n> \n> \tfor (i = 0; i < numTables; i++)\n> \t{\n> ! \t\t/* Sequences, views, and types never have parents */\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_VIEW ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_COMPOSITE_TYPE)\n> \t\t\tcontinue;\n> \n> \t\t/* Don't bother computing anything for non-target tables, either */\n> Index: src/bin/pg_dump/pg_dump.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_dump/pg_dump.c,v\n> retrieving revision 1.280\n> diff -c -r1.280 pg_dump.c\n> *** src/bin/pg_dump/pg_dump.c\t4 Aug 2002 05:03:29 -0000\t1.280\n> --- src/bin/pg_dump/pg_dump.c\t8 Aug 2002 22:42:58 -0000\n> ***************\n> *** 95,100 ****\n> --- 95,101 ----\n> \t\t\t\t\t\t\tFuncInfo *g_finfo, int numFuncs,\n> \t\t\t\t\t\t\tTypeInfo *g_tinfo, int numTypes);\n> static void dumpOneDomain(Archive *fout, TypeInfo *tinfo);\n> + static void dumpOneCompositeType(Archive *fout, TypeInfo *tinfo);\n> static void dumpOneTable(Archive *fout, TableInfo *tbinfo,\n> \t\t\t\t\t\t TableInfo *g_tblinfo);\n> static void dumpOneSequence(Archive *fout, TableInfo *tbinfo,\n> ***************\n> *** 1178,1183 ****\n> --- 1179,1188 ----\n> \t\tif (tblinfo[i].relkind == RELKIND_VIEW)\n> \t\t\tcontinue;\n> \n> + 
\t\t/* Skip TYPE relations */\n> + \t\tif (tblinfo[i].relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\t\tcontinue;\n> + \n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE)\t\t/* already dumped */\n> \t\t\tcontinue;\n> \n> ***************\n> *** 1582,1587 ****\n> --- 1587,1593 ----\n> \tint\t\t\ti_usename;\n> \tint\t\t\ti_typelem;\n> \tint\t\t\ti_typrelid;\n> + \tint\t\t\ti_typrelkind;\n> \tint\t\t\ti_typtype;\n> \tint\t\t\ti_typisdefined;\n> \n> ***************\n> *** 1602,1608 ****\n> \t\tappendPQExpBuffer(query, \"SELECT pg_type.oid, typname, \"\n> \t\t\t\t\t\t \"typnamespace, \"\n> \t\t\t\t\t\t \"(select usename from pg_user where typowner = usesysid) as usename, \"\n> ! \t\t\t\t\t\t \"typelem, typrelid, typtype, typisdefined \"\n> \t\t\t\t\t\t \"FROM pg_type\");\n> \t}\n> \telse\n> --- 1608,1616 ----\n> \t\tappendPQExpBuffer(query, \"SELECT pg_type.oid, typname, \"\n> \t\t\t\t\t\t \"typnamespace, \"\n> \t\t\t\t\t\t \"(select usename from pg_user where typowner = usesysid) as usename, \"\n> ! \t\t\t\t\t\t \"typelem, typrelid, \"\n> ! \t\t\t\t\t\t \"(select relkind from pg_class where oid = typrelid) as typrelkind, \"\n> ! \t\t\t\t\t\t \"typtype, typisdefined \"\n> \t\t\t\t\t\t \"FROM pg_type\");\n> \t}\n> \telse\n> ***************\n> *** 1610,1616 ****\n> \t\tappendPQExpBuffer(query, \"SELECT pg_type.oid, typname, \"\n> \t\t\t\t\t\t \"0::oid as typnamespace, \"\n> \t\t\t\t\t\t \"(select usename from pg_user where typowner = usesysid) as usename, \"\n> ! \t\t\t\t\t\t \"typelem, typrelid, typtype, typisdefined \"\n> \t\t\t\t\t\t \"FROM pg_type\");\n> \t}\n> \n> --- 1618,1626 ----\n> \t\tappendPQExpBuffer(query, \"SELECT pg_type.oid, typname, \"\n> \t\t\t\t\t\t \"0::oid as typnamespace, \"\n> \t\t\t\t\t\t \"(select usename from pg_user where typowner = usesysid) as usename, \"\n> ! \t\t\t\t\t\t \"typelem, typrelid, \"\n> ! \t\t\t\t\t\t \"''::char as typrelkind, \"\n> ! 
\t\t\t\t\t\t \"typtype, typisdefined \"\n> \t\t\t\t\t\t \"FROM pg_type\");\n> \t}\n> \n> ***************\n> *** 1632,1637 ****\n> --- 1642,1648 ----\n> \ti_usename = PQfnumber(res, \"usename\");\n> \ti_typelem = PQfnumber(res, \"typelem\");\n> \ti_typrelid = PQfnumber(res, \"typrelid\");\n> + \ti_typrelkind = PQfnumber(res, \"typrelkind\");\n> \ti_typtype = PQfnumber(res, \"typtype\");\n> \ti_typisdefined = PQfnumber(res, \"typisdefined\");\n> \n> ***************\n> *** 1644,1649 ****\n> --- 1655,1661 ----\n> \t\ttinfo[i].usename = strdup(PQgetvalue(res, i, i_usename));\n> \t\ttinfo[i].typelem = strdup(PQgetvalue(res, i, i_typelem));\n> \t\ttinfo[i].typrelid = strdup(PQgetvalue(res, i, i_typrelid));\n> + \t\ttinfo[i].typrelkind = *PQgetvalue(res, i, i_typrelkind);\n> \t\ttinfo[i].typtype = *PQgetvalue(res, i, i_typtype);\n> \n> \t\t/*\n> ***************\n> *** 2109,2115 ****\n> \t\tappendPQExpBuffer(query,\n> \t\t\t\t\t\t \"SELECT pg_class.oid, relname, relacl, relkind, \"\n> \t\t\t\t\t\t \"relnamespace, \"\n> - \n> \t\t\t\t\t\t \"(select usename from pg_user where relowner = usesysid) as usename, \"\n> \t\t\t\t\t\t \"relchecks, reltriggers, \"\n> \t\t\t\t\t\t \"relhasindex, relhasrules, relhasoids \"\n> --- 2121,2126 ----\n> ***************\n> *** 2120,2125 ****\n> --- 2131,2137 ----\n> \t}\n> \telse if (g_fout->remoteVersion >= 70200)\n> \t{\n> + \t\t/* before 7.3 there were no type relations with relkind 'c' */\n> \t\tappendPQExpBuffer(query,\n> \t\t\t\t\t\t \"SELECT pg_class.oid, relname, relacl, relkind, \"\n> \t\t\t\t\t\t \"0::oid as relnamespace, \"\n> ***************\n> *** 2363,2368 ****\n> --- 2375,2384 ----\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE)\n> \t\t\tcontinue;\n> \n> + \t\t/* Don't bother to collect info for type relations */\n> + \t\tif (tblinfo[i].relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\t\tcontinue;\n> + \n> \t\t/* Don't bother with uninteresting tables, either */\n> \t\tif (!tblinfo[i].interesting)\n> \t\t\tcontinue;\n> 
***************\n> *** 3180,3185 ****\n> --- 3196,3300 ----\n> }\n> \n> /*\n> + * dumpOneCompositeType\n> + * writes out to fout the queries to recreate a user-defined stand-alone\n> + * composite type as requested by dumpTypes\n> + */\n> + static void\n> + dumpOneCompositeType(Archive *fout, TypeInfo *tinfo)\n> + {\n> + \tPQExpBuffer q = createPQExpBuffer();\n> + \tPQExpBuffer delq = createPQExpBuffer();\n> + \tPQExpBuffer query = createPQExpBuffer();\n> + \tPGresult *res;\n> + \tint\t\t\tntups;\n> + \tchar\t *attname;\n> + \tchar\t *atttypdefn;\n> + \tchar\t *attbasetype;\n> + \tconst char *((*deps)[]);\n> + \tint\t\t\tdepIdx = 0;\n> + \tint\t\t\ti;\n> + \n> + \tdeps = malloc(sizeof(char *) * 10);\n> + \n> + \t/* Set proper schema search path so type references list correctly */\n> + \tselectSourceSchema(tinfo->typnamespace->nspname);\n> + \n> + \t/* Fetch type specific details */\n> + \t/* We assume here that remoteVersion must be at least 70300 */\n> + \n> + \tappendPQExpBuffer(query, \"SELECT a.attname, \"\n> + \t\t\t\t\t \"pg_catalog.format_type(a.atttypid, a.atttypmod) as atttypdefn, \"\n> + \t\t\t\t\t \"a.atttypid as attbasetype \"\n> + \t\t\t\t\t \"FROM pg_catalog.pg_type t, pg_catalog.pg_attribute a \"\n> + \t\t\t\t\t \"WHERE t.oid = '%s'::pg_catalog.oid \"\n> + \t\t\t\t\t \"AND a.attrelid = t.typrelid\",\n> + \t\t\t\t\t tinfo->oid);\n> + \n> + \tres = PQexec(g_conn, query->data);\n> + \tif (!res ||\n> + \t\tPQresultStatus(res) != PGRES_TUPLES_OK)\n> + \t{\n> + \t\twrite_msg(NULL, \"query to obtain type information failed: %s\", PQerrorMessage(g_conn));\n> + \t\texit_nicely();\n> + \t}\n> + \n> + \t/* Expecting at least a single result */\n> + \tntups = PQntuples(res);\n> + \tif (ntups < 1)\n> + \t{\n> + \t\twrite_msg(NULL, \"Got no rows from: %s\", query->data);\n> + \t\texit_nicely();\n> + \t}\n> + \n> + \t/* DROP must be fully qualified in case same name appears in pg_catalog */\n> + \tappendPQExpBuffer(delq, \"DROP TYPE %s.\",\n> + \t\t\t\t\t 
fmtId(tinfo->typnamespace->nspname, force_quotes));\n> + \tappendPQExpBuffer(delq, \"%s RESTRICT;\\n\",\n> + \t\t\t\t\t fmtId(tinfo->typname, force_quotes));\n> + \n> + \tappendPQExpBuffer(q,\n> + \t\t\t\t\t \"CREATE TYPE %s AS (\",\n> + \t\t\t\t\t fmtId(tinfo->typname, force_quotes));\n> + \n> + \tfor (i = 0; i < ntups; i++)\n> + \t{\n> + \t\tattname = PQgetvalue(res, i, PQfnumber(res, \"attname\"));\n> + \t\tatttypdefn = PQgetvalue(res, i, PQfnumber(res, \"atttypdefn\"));\n> + \t\tattbasetype = PQgetvalue(res, i, PQfnumber(res, \"attbasetype\"));\n> + \n> + \t\tif (i > 0)\n> + \t\t\tappendPQExpBuffer(q, \",\\n\\t %s %s\", attname, atttypdefn);\n> + \t\telse\n> + \t\t\tappendPQExpBuffer(q, \"%s %s\", attname, atttypdefn);\n> + \n> + \t\t/* Depends on the base type */\n> + \t\t(*deps)[depIdx++] = strdup(attbasetype);\n> + \t}\n> + \tappendPQExpBuffer(q, \");\\n\");\n> + \n> + \t(*deps)[depIdx++] = NULL;\t\t/* End of List */\n> + \n> + \tArchiveEntry(fout, tinfo->oid, tinfo->typname,\n> + \t\t\t\t tinfo->typnamespace->nspname,\n> + \t\t\t\t tinfo->usename, \"TYPE\", deps,\n> + \t\t\t\t q->data, delq->data, NULL, NULL, NULL);\n> + \n> + \t/*** Dump Type Comments ***/\n> + \tresetPQExpBuffer(q);\n> + \n> + \tappendPQExpBuffer(q, \"TYPE %s\", fmtId(tinfo->typname, force_quotes));\n> + \tdumpComment(fout, q->data,\n> + \t\t\t\ttinfo->typnamespace->nspname, tinfo->usename,\n> + \t\t\t\ttinfo->oid, \"pg_type\", 0, NULL);\n> + \n> + \tPQclear(res);\n> + \tdestroyPQExpBuffer(q);\n> + \tdestroyPQExpBuffer(delq);\n> + \tdestroyPQExpBuffer(query);\n> + }\n> + \n> + /*\n> * dumpTypes\n> *\t writes out to fout the queries to recreate all the user-defined types\n> */\n> ***************\n> *** 3195,3202 ****\n> \t\tif (!tinfo[i].typnamespace->dump)\n> \t\t\tcontinue;\n> \n> ! \t\t/* skip relation types */\n> ! 
\t\tif (atooid(tinfo[i].typrelid) != 0)\n> \t\t\tcontinue;\n> \n> \t\t/* skip undefined placeholder types */\n> --- 3310,3317 ----\n> \t\tif (!tinfo[i].typnamespace->dump)\n> \t\t\tcontinue;\n> \n> ! \t\t/* skip relation types for non-stand-alone type relations*/\n> ! \t\tif (atooid(tinfo[i].typrelid) != 0 && tinfo[i].typrelkind != 'c')\n> \t\t\tcontinue;\n> \n> \t\t/* skip undefined placeholder types */\n> ***************\n> *** 3214,3219 ****\n> --- 3329,3336 ----\n> \t\t\t\t\t\t\tfinfo, numFuncs, tinfo, numTypes);\n> \t\telse if (tinfo[i].typtype == 'd')\n> \t\t\tdumpOneDomain(fout, &tinfo[i]);\n> + \t\telse if (tinfo[i].typtype == 'c')\n> + \t\t\tdumpOneCompositeType(fout, &tinfo[i]);\n> \t}\n> }\n> \n> ***************\n> *** 4839,4844 ****\n> --- 4956,4962 ----\n> \n> \t\tif (tbinfo->relkind != RELKIND_SEQUENCE)\n> \t\t\tcontinue;\n> + \n> \t\tif (tbinfo->dump)\n> \t\t{\n> \t\t\tdumpOneSequence(fout, tbinfo, schemaOnly, dataOnly);\n> ***************\n> *** 4854,4859 ****\n> --- 4972,4979 ----\n> \t\t\tTableInfo\t *tbinfo = &tblinfo[i];\n> \n> \t\t\tif (tbinfo->relkind == RELKIND_SEQUENCE) /* already dumped */\n> + \t\t\t\tcontinue;\n> + \t\t\tif (tbinfo->relkind == RELKIND_COMPOSITE_TYPE) /* dumped as a type */\n> \t\t\t\tcontinue;\n> \n> \t\t\tif (tbinfo->dump)\n> Index: src/bin/pg_dump/pg_dump.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_dump/pg_dump.h,v\n> retrieving revision 1.94\n> diff -c -r1.94 pg_dump.h\n> *** src/bin/pg_dump/pg_dump.h\t2 Aug 2002 18:15:08 -0000\t1.94\n> --- src/bin/pg_dump/pg_dump.h\t8 Aug 2002 18:26:11 -0000\n> ***************\n> *** 47,52 ****\n> --- 47,53 ----\n> \tchar\t *usename;\t\t/* name of owner, or empty string */\n> \tchar\t *typelem;\t\t/* OID */\n> \tchar\t *typrelid;\t\t/* OID */\n> + \tchar\t\ttyprelkind;\t\t/* 'r', 'v', 'c', etc */\n> \tchar\t\ttyptype;\t\t/* 'b', 'c', etc */\n> \tbool\t\tisArray;\t\t/* true if user-defined array type */\n> 
\tbool\t\tisDefined;\t\t/* true if typisdefined */\n> Index: src/bin/psql/describe.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/describe.c,v\n> retrieving revision 1.57\n> diff -c -r1.57 describe.c\n> *** src/bin/psql/describe.c\t2 Aug 2002 18:15:08 -0000\t1.57\n> --- src/bin/psql/describe.c\t8 Aug 2002 22:26:38 -0000\n> ***************\n> *** 168,176 ****\n> \n> \t/*\n> \t * do not include array types (start with underscore), do not include\n> ! \t * user relations (typrelid!=0)\n> \t */\n> ! \tappendPQExpBuffer(&buf, \"FROM pg_type t\\nWHERE t.typrelid = 0 AND t.typname !~ '^_.*'\\n\");\n> \n> \tif (name)\n> \t\t/* accept either internal or external type name */\n> --- 168,179 ----\n> \n> \t/*\n> \t * do not include array types (start with underscore), do not include\n> ! \t * user relations (typrelid!=0) unless they are type relations\n> \t */\n> ! \tappendPQExpBuffer(&buf, \"FROM pg_type t\\nWHERE (t.typrelid = 0 \");\n> ! \tappendPQExpBuffer(&buf, \"OR (SELECT c.relkind = 'c' FROM pg_class c \"\n> ! \t\t\t\t\t\t\t \"where c.oid = t.typrelid)) \");\n> ! 
\tappendPQExpBuffer(&buf, \"AND t.typname !~ '^_.*'\\n\");\n> \n> \tif (name)\n> \t\t/* accept either internal or external type name */\n> Index: src/include/catalog/pg_class.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/catalog/pg_class.h,v\n> retrieving revision 1.70\n> diff -c -r1.70 pg_class.h\n> *** src/include/catalog/pg_class.h\t2 Aug 2002 18:15:09 -0000\t1.70\n> --- src/include/catalog/pg_class.h\t8 Aug 2002 14:49:59 -0000\n> ***************\n> *** 169,173 ****\n> --- 169,174 ----\n> #define\t\t RELKIND_UNCATALOGED\t 'u'\t\t/* temporary heap */\n> #define\t\t RELKIND_TOASTVALUE\t 't'\t\t/* moved off huge values */\n> #define\t\t RELKIND_VIEW\t\t\t 'v'\t\t/* view */\n> + #define\t\t RELKIND_COMPOSITE_TYPE 'c'\t\t/* composite type */\n> \n> #endif /* PG_CLASS_H */\n> Index: src/include/commands/defrem.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/commands/defrem.h,v\n> retrieving revision 1.43\n> diff -c -r1.43 defrem.h\n> *** src/include/commands/defrem.h\t29 Jul 2002 22:14:11 -0000\t1.43\n> --- src/include/commands/defrem.h\t8 Aug 2002 14:49:59 -0000\n> ***************\n> *** 58,63 ****\n> --- 58,64 ----\n> extern void RemoveTypeById(Oid typeOid);\n> extern void DefineDomain(CreateDomainStmt *stmt);\n> extern void RemoveDomain(List *names, DropBehavior behavior);\n> + extern Oid DefineCompositeType(const RangeVar *typevar, List *coldeflist);\n> \n> extern void DefineOpClass(CreateOpClassStmt *stmt);\n> extern void RemoveOpClass(RemoveOpClassStmt *stmt);\n> Index: src/include/nodes/nodes.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/nodes/nodes.h,v\n> retrieving revision 1.114\n> diff -c -r1.114 nodes.h\n> *** src/include/nodes/nodes.h\t29 Jul 2002 22:14:11 -0000\t1.114\n> --- src/include/nodes/nodes.h\t8 Aug 2002 14:49:59 
-0000\n> ***************\n> *** 238,243 ****\n> --- 238,244 ----\n> \tT_PrivTarget,\n> \tT_InsertDefault,\n> \tT_CreateOpClassItem,\n> + \tT_CompositeTypeStmt,\n> \n> \t/*\n> \t * TAGS FOR FUNCTION-CALL CONTEXT AND RESULTINFO NODES (see fmgr.h)\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.198\n> diff -c -r1.198 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t4 Aug 2002 19:48:10 -0000\t1.198\n> --- src/include/nodes/parsenodes.h\t8 Aug 2002 14:49:59 -0000\n> ***************\n> *** 1402,1407 ****\n> --- 1402,1419 ----\n> } TransactionStmt;\n> \n> /* ----------------------\n> + *\t\tCreate Type Statement, composite types\n> + * ----------------------\n> + */\n> + typedef struct CompositeTypeStmt\n> + {\n> + \tNodeTag\t\ttype;\n> + \tRangeVar *typevar;\t\t/* the composite type to be created */\n> + \tList\t *coldeflist;\t\t/* list of ColumnDef nodes */\n> + } CompositeTypeStmt;\n> + \n> + \n> + /* ----------------------\n> *\t\tCreate View Statement\n> * ----------------------\n> */\n> Index: src/pl/plpgsql/src/pl_comp.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/pl_comp.c,v\n> retrieving revision 1.44\n> diff -c -r1.44 pl_comp.c\n> *** src/pl/plpgsql/src/pl_comp.c\t8 Aug 2002 01:36:04 -0000\t1.44\n> --- src/pl/plpgsql/src/pl_comp.c\t8 Aug 2002 14:50:00 -0000\n> ***************\n> *** 1030,1041 ****\n> \t}\n> \n> \t/*\n> ! \t * It must be a relation, sequence or view\n> \t */\n> \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> \tif (classStruct->relkind != RELKIND_RELATION &&\n> \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> ! \t\tclassStruct->relkind != RELKIND_VIEW)\n> \t{\n> \t\tReleaseSysCache(classtup);\n> \t\tpfree(cp[0]);\n> --- 1030,1042 ----\n> \t}\n> \n> \t/*\n> ! 
\t * It must be a relation, sequence, view, or type\n> \t */\n> \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> \tif (classStruct->relkind != RELKIND_RELATION &&\n> \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> ! \t\tclassStruct->relkind != RELKIND_VIEW &&\n> ! \t\tclassStruct->relkind != RELKIND_COMPOSITE_TYPE)\n> \t{\n> \t\tReleaseSysCache(classtup);\n> \t\tpfree(cp[0]);\n> ***************\n> *** 1130,1139 ****\n> \tif (!HeapTupleIsValid(classtup))\n> \t\telog(ERROR, \"%s: no such class\", cp[0]);\n> \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> ! \t/* accept relation, sequence, or view pg_class entries */\n> \tif (classStruct->relkind != RELKIND_RELATION &&\n> \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> ! \t\tclassStruct->relkind != RELKIND_VIEW)\n> \t\telog(ERROR, \"%s isn't a table\", cp[0]);\n> \n> \t/*\n> --- 1131,1141 ----\n> \tif (!HeapTupleIsValid(classtup))\n> \t\telog(ERROR, \"%s: no such class\", cp[0]);\n> \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> ! \t/* accept relation, sequence, view, or type pg_class entries */\n> \tif (classStruct->relkind != RELKIND_RELATION &&\n> \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> ! \t\tclassStruct->relkind != RELKIND_VIEW &&\n> ! 
\t\tclassStruct->relkind != RELKIND_COMPOSITE_TYPE)\n> \t\telog(ERROR, \"%s isn't a table\", cp[0]);\n> \n> \t/*\n> Index: src/test/regress/expected/create_type.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/create_type.out,v\n> retrieving revision 1.4\n> diff -c -r1.4 create_type.out\n> *** src/test/regress/expected/create_type.out\t6 Sep 2001 02:07:42 -0000\t1.4\n> --- src/test/regress/expected/create_type.out\t8 Aug 2002 14:50:00 -0000\n> ***************\n> *** 37,40 ****\n> --- 37,53 ----\n> zippo | 42\n> (1 row)\n> \n> + -- Test stand-alone composite type\n> + CREATE TYPE default_test_row AS (f1 text_w_default, f2 int42);\n> + CREATE FUNCTION get_default_test() RETURNS SETOF default_test_row AS '\n> + SELECT * FROM default_test;\n> + ' LANGUAGE SQL;\n> + SELECT * FROM get_default_test();\n> + f1 | f2 \n> + -------+----\n> + zippo | 42\n> + (1 row)\n> + \n> + DROP TYPE default_test_row CASCADE;\n> + NOTICE: Drop cascades to function get_default_test()\n> DROP TABLE default_test;\n> Index: src/test/regress/expected/select_having.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/select_having.out,v\n> retrieving revision 1.7\n> diff -c -r1.7 select_having.out\n> *** src/test/regress/expected/select_having.out\t26 Jun 2002 21:58:56 -0000\t1.7\n> --- src/test/regress/expected/select_having.out\t8 Aug 2002 17:48:10 -0000\n> ***************\n> *** 14,20 ****\n> INSERT INTO test_having VALUES (8, 4, 'CCCC', 'I');\n> INSERT INTO test_having VALUES (9, 4, 'CCCC', 'j');\n> SELECT b, c FROM test_having\n> ! \tGROUP BY b, c HAVING count(*) = 1;\n> b | c \n> ---+----------\n> 1 | XXXX \n> --- 14,20 ----\n> INSERT INTO test_having VALUES (8, 4, 'CCCC', 'I');\n> INSERT INTO test_having VALUES (9, 4, 'CCCC', 'j');\n> SELECT b, c FROM test_having\n> ! 
\tGROUP BY b, c HAVING count(*) = 1 ORDER BY b, c;\n> b | c \n> ---+----------\n> 1 | XXXX \n> ***************\n> *** 23,37 ****\n> \n> -- HAVING is equivalent to WHERE in this case\n> SELECT b, c FROM test_having\n> ! \tGROUP BY b, c HAVING b = 3; \n> b | c \n> ---+----------\n> - 3 | BBBB \n> 3 | bbbb \n> (2 rows)\n> \n> SELECT lower(c), count(c) FROM test_having\n> ! \tGROUP BY lower(c) HAVING count(*) > 2 OR min(a) = max(a);\n> lower | count \n> ----------+-------\n> bbbb | 3\n> --- 23,37 ----\n> \n> -- HAVING is equivalent to WHERE in this case\n> SELECT b, c FROM test_having\n> ! \tGROUP BY b, c HAVING b = 3 ORDER BY b, c; \n> b | c \n> ---+----------\n> 3 | bbbb \n> + 3 | BBBB \n> (2 rows)\n> \n> SELECT lower(c), count(c) FROM test_having\n> ! \tGROUP BY lower(c) HAVING count(*) > 2 OR min(a) = max(a) ORDER BY lower(c);\n> lower | count \n> ----------+-------\n> bbbb | 3\n> ***************\n> *** 40,50 ****\n> (3 rows)\n> \n> SELECT c, max(a) FROM test_having\n> ! \tGROUP BY c HAVING count(*) > 2 OR min(a) = max(a);\n> c | max \n> ----------+-----\n> - XXXX | 0\n> bbbb | 5\n> (2 rows)\n> \n> DROP TABLE test_having;\n> --- 40,50 ----\n> (3 rows)\n> \n> SELECT c, max(a) FROM test_having\n> ! 
\tGROUP BY c HAVING count(*) > 2 OR min(a) = max(a) ORDER BY c;\n> c | max \n> ----------+-----\n> bbbb | 5\n> + XXXX | 0\n> (2 rows)\n> \n> DROP TABLE test_having;\n> Index: src/test/regress/sql/create_type.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/sql/create_type.sql,v\n> retrieving revision 1.4\n> diff -c -r1.4 create_type.sql\n> *** src/test/regress/sql/create_type.sql\t6 Sep 2001 02:07:42 -0000\t1.4\n> --- src/test/regress/sql/create_type.sql\t8 Aug 2002 14:50:00 -0000\n> ***************\n> *** 41,44 ****\n> --- 41,56 ----\n> \n> SELECT * FROM default_test;\n> \n> + -- Test stand-alone composite type\n> + \n> + CREATE TYPE default_test_row AS (f1 text_w_default, f2 int42);\n> + \n> + CREATE FUNCTION get_default_test() RETURNS SETOF default_test_row AS '\n> + SELECT * FROM default_test;\n> + ' LANGUAGE SQL;\n> + \n> + SELECT * FROM get_default_test();\n> + \n> + DROP TYPE default_test_row CASCADE;\n> + \n> DROP TABLE default_test;\n> Index: src/test/regress/sql/select_having.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/sql/select_having.sql,v\n> retrieving revision 1.7\n> diff -c -r1.7 select_having.sql\n> *** src/test/regress/sql/select_having.sql\t26 Jun 2002 21:58:56 -0000\t1.7\n> --- src/test/regress/sql/select_having.sql\t8 Aug 2002 17:43:02 -0000\n> ***************\n> *** 16,32 ****\n> INSERT INTO test_having VALUES (9, 4, 'CCCC', 'j');\n> \n> SELECT b, c FROM test_having\n> ! \tGROUP BY b, c HAVING count(*) = 1;\n> \n> -- HAVING is equivalent to WHERE in this case\n> SELECT b, c FROM test_having\n> ! \tGROUP BY b, c HAVING b = 3; \n> \n> SELECT lower(c), count(c) FROM test_having\n> ! \tGROUP BY lower(c) HAVING count(*) > 2 OR min(a) = max(a);\n> \n> SELECT c, max(a) FROM test_having\n> ! 
\tGROUP BY c HAVING count(*) > 2 OR min(a) = max(a);\n> \n> DROP TABLE test_having;\n> \n> --- 16,32 ----\n> INSERT INTO test_having VALUES (9, 4, 'CCCC', 'j');\n> \n> SELECT b, c FROM test_having\n> ! \tGROUP BY b, c HAVING count(*) = 1 ORDER BY b, c; \n> \n> -- HAVING is equivalent to WHERE in this case\n> SELECT b, c FROM test_having\n> ! \tGROUP BY b, c HAVING b = 3 ORDER BY b, c; \n> \n> SELECT lower(c), count(c) FROM test_having\n> ! \tGROUP BY lower(c) HAVING count(*) > 2 OR min(a) = max(a) ORDER BY lower(c);\n> \n> SELECT c, max(a) FROM test_having\n> ! \tGROUP BY c HAVING count(*) > 2 OR min(a) = max(a) ORDER BY c;\n> \n> DROP TABLE test_having;\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 11 Aug 2002 01:09:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n\n>> There is also a small adjustment to the expected output file for \n>> select-having. 
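The patch message above ends here. For readers skimming the thread, the string-assembly step of the patch's `dumpOneCompositeType()` in pg_dump (turning the `(attname, atttypdefn)` rows fetched from `pg_attribute` into matching `DROP TYPE` and `CREATE TYPE ... AS (...)` statements) can be sketched in Python. The helper name is illustrative, and the sample column list is borrowed from the regression test's `default_test_row`, not from the patch code itself:

```python
# Rough sketch of the DDL reconstruction done by dumpOneCompositeType():
# join each "attname atttypdefn" pair with ",\n\t " exactly as the patch's
# appendPQExpBuffer calls do, and qualify only the DROP statement.
def build_composite_type_ddl(schema, typname, columns):
    """columns is a list of (attname, atttypdefn) tuples."""
    drop = f'DROP TYPE {schema}.{typname} RESTRICT;'
    body = ',\n\t '.join(f'{name} {typdef}' for name, typdef in columns)
    create = f'CREATE TYPE {typname} AS ({body});'
    return drop, create

drop, create = build_composite_type_ddl(
    'public', 'default_test_row',
    [('f1', 'text_w_default'), ('f2', 'int42')])
print(drop)    # → DROP TYPE public.default_test_row RESTRICT;
print(create)
```

This is only a sketch of the control flow; the real code also records a dependency on each column's base type so the archiver can order the entries.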
I was getting a regression failure based on ordering of \n>> the results, so I added ORDER BY clauses.\n\nDon't forget that that part of the patch was retracted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Aug 2002 10:44:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal: " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n>>Your patch has been added to the PostgreSQL unapplied patches list at:\n> \n> \n>>>There is also a small adjustment to the expected output file for \n>>>select-having. I was getting a regression failure based on ordering of \n>>>the results, so I added ORDER BY clauses.\n>>\n> \n> Don't forget that that part of the patch was retracted.\n\nYou should be able to simply remove the changes to:\n Index: src/test/regress/expected/select_having.out\n Index: src/test/regress/sql/select_having.sql\nfrom the patch.\n\nOr let me know if you want me to do it and resubmit.\n\nThanks,\n\nJoe\n\n\n", "msg_date": "Sun, 11 Aug 2002 08:05:02 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "\nRetracted part added. [ I hadn't noticed the retraction.]\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \n> >>Your patch has been added to the PostgreSQL unapplied patches list at:\n> > \n> > \n> >>>There is also a small adjustment to the expected output file for \n> >>>select-having. 
I was getting a regression failure based on ordering of \n> >>>the results, so I added ORDER BY clauses.\n> >>\n> > \n> > Don't forget that that part of the patch was retracted.\n> \n> You should be able to simply remove the changes to:\n> Index: src/test/regress/expected/select_having.out\n> Index: src/test/regress/sql/select_having.sql\n> from the patch.\n> \n> Or let me know if you want me to do it and resubmit.\n> \n> Thanks,\n> \n> Joe\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 12 Aug 2002 01:06:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Sorry. this patch fails to apply. A change to catalog/heap.c doesn't\nseem to fit anywhere. I have attached the reject.\n\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > If you did it that way then you'd not need that ugly kluge in\n> > RemoveType. What you'd need instead is some smarts (a kluge!?) in\n> > setting up the dependency. Currently that dependency is made in\n> > TypeCreate which doesn't know what sort of relation it's creating\n> > a type for. 
Probably the best answer is to pull that particular\n> > dependency out of TypeCreate, and make it (in the proper direction)\n> > in AddNewRelationType.\n> \n> Fixed.\n> \n> > Also, I'm not following the point of the separation between\n> > DefineCompositeType and DefineCompositeTypeRelation; nor do I see a need\n> > for a CommandCounterIncrement call in there.\n> \n> Fixed.\n> \n> \n> > You have missed a number of places where this new relkind ought to\n> > be special-cased the same way RELKIND_VIEW is --- for example\n> > CheckAttributeNames and AddNewAttributeTuples, since a composite type\n> > presumably shouldn't have system columns associated. I'd counsel\n> > looking at all references to RELKIND_VIEW to see which places also need\n> > to check for RELKIND_COMPOSITE_TYPE.\n> \n> Yup, I had missed lots of things, not the least of which was pg_dump. \n> New patch attached includes pg_dump, psql (\\dT), docs, and regression \n> support.\n> \n> There is also a small adjustment to the expected output file for \n> select-having. I was getting a regression failure based on ordering of \n> the results, so I added ORDER BY clauses.\n> \n> Passes all regression tests. If no more objections, please apply.\n> \n> Thanks,\n> \n> Joe\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n***************\n*** 764,770 ****\n \t/*\n \t * We create the disk file for this relation here\n \t */\n! \tif (relkind != RELKIND_VIEW)\n \t\theap_storage_create(new_rel_desc);\n \n \t/*\n--- 764,770 ----\n \t/*\n \t * We create the disk file for this relation here\n \t */\n! 
\tif (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)\n \t\theap_storage_create(new_rel_desc);\n \n \t/*", "msg_date": "Wed, 14 Aug 2002 22:43:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Bruce Momjian wrote:\n> Sorry. this patch fails to apply. A change to catalog/heap.c doesn't\n> seem to fit anywhere. I have attached the reject.\n> \n\nLots of code drift in just one week! I'll rework the patch and resend.\n\nJoe\n\n\n\n", "msg_date": "Wed, 14 Aug 2002 20:19:45 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> [ this doesn't apply: ]\n\n> \t/*\n> \t * We create the disk file for this relation here\n> \t */\n> ! \tif (relkind != RELKIND_VIEW)\n> \t\theap_storage_create(new_rel_desc);\n \n> \t/*\n> --- 764,770 ----\n> \t/*\n> \t * We create the disk file for this relation here\n> \t */\n> ! \tif (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)\n> \t\theap_storage_create(new_rel_desc);\n \n> \t/*\n\n\nThere's no longer a separate call to heap_storage_create in that routine\n--- the right place to make the test is now in the storage_create\nboolean parameter being passed to heap_create. A simple change, but\nit passeth patch's understanding ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Aug 2002 23:30:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal: " }, { "msg_contents": "Tom Lane wrote:\n> There's no longer a separate call to heap_storage_create in that routine\n> --- the right place to make the test is now in the storage_create\n> boolean parameter being passed to heap_create. 
A simple change, but\n> it passeth patch's understanding ...\n\nThanks.\n\nAttached is a patch against cvs tip as of 8:30 PM PST or so. Turned out \nthat even after fixing the failed hunks, there was a new spot in \nbufmgr.c which needed to be fixed (related to temp relations; \nRelationUpdateNumberOfBlocks). But thankfully the regression test code \ncaught it :-)\n\nJoe", "msg_date": "Wed, 14 Aug 2002 22:19:19 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > There's no longer a separate call to heap_storage_create in that routine\n> > --- the right place to make the test is now in the storage_create\n> > boolean parameter being passed to heap_create. A simple change, but\n> > it passeth patch's understanding ...\n> \n> Thanks.\n> \n> Attached is a patch against cvs tip as of 8:30 PM PST or so. Turned out \n> that even after fixing the failed hunks, there was a new spot in \n> bufmgr.c which needed to be fixed (related to temp relations; \n> RelationUpdateNumberOfBlocks). 
But thankfully the regression test code \n> caught it :-)\n> \n> Joe\n> \n\n> Index: doc/src/sgml/ref/create_type.sgml\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/doc/src/sgml/ref/create_type.sgml,v\n> retrieving revision 1.30\n> diff -c -r1.30 create_type.sgml\n> *** doc/src/sgml/ref/create_type.sgml\t24 Jul 2002 19:11:07 -0000\t1.30\n> --- doc/src/sgml/ref/create_type.sgml\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 30,35 ****\n> --- 30,42 ----\n> [ , ALIGNMENT = <replaceable class=\"parameter\">alignment</replaceable> ]\n> [ , STORAGE = <replaceable class=\"parameter\">storage</replaceable> ]\n> )\n> + \n> + CREATE TYPE <replaceable class=\"parameter\">typename</replaceable> AS\n> + ( <replaceable class=\"PARAMETER\">column_definition_list</replaceable> )\n> + \n> + where <replaceable class=\"PARAMETER\">column_definition_list</replaceable> can be:\n> + \n> + ( <replaceable class=\"PARAMETER\">column_name</replaceable> <replaceable class=\"PARAMETER\">data_type</replaceable> [, ... ] )\n> </synopsis>\n> \n> <refsect2 id=\"R2-SQL-CREATETYPE-1\">\n> ***************\n> *** 138,143 ****\n> --- 145,169 ----\n> </para>\n> </listitem>\n> </varlistentry>\n> + \n> + <varlistentry>\n> + <term><replaceable class=\"PARAMETER\">column_name</replaceable></term>\n> + <listitem>\n> + <para>\n> + The name of a column of the composite type.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> + <varlistentry>\n> + <term><replaceable class=\"PARAMETER\">data_type</replaceable></term>\n> + <listitem>\n> + <para>\n> + The name of an existing data type.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + \n> </variablelist>\n> </para>\n> </refsect2>\n> ***************\n> *** 191,199 ****\n> </para>\n> \n> <para>\n> ! <command>CREATE TYPE</command> requires the registration of two functions\n> ! (using CREATE FUNCTION) before defining the type. The\n> ! 
representation of a new base type is determined by\n> <replaceable class=\"parameter\">input_function</replaceable>, which\n> converts the type's external representation to an internal\n> representation usable by the\n> --- 217,225 ----\n> </para>\n> \n> <para>\n> ! The first form of <command>CREATE TYPE</command> requires the \n> ! registration of two functions (using CREATE FUNCTION) before defining the\n> ! type. The representation of a new base type is determined by\n> <replaceable class=\"parameter\">input_function</replaceable>, which\n> converts the type's external representation to an internal\n> representation usable by the\n> ***************\n> *** 288,293 ****\n> --- 314,327 ----\n> <literal>extended</literal> and <literal>external</literal> items.)\n> </para>\n> \n> + <para>\n> + The second form of <command>CREATE TYPE</command> requires a column\n> + definition list in the form ( <replaceable class=\"PARAMETER\">column_name</replaceable> \n> + <replaceable class=\"PARAMETER\">data_type</replaceable> [, ... ] ). 
This\n> + creates a composite type, similar to that of a TABLE or VIEW relation.\n> + A stand-alone composite type is useful as the return type of FUNCTION.\n> + </para>\n> + \n> <refsect2>\n> <title>Array Types</title>\n> \n> ***************\n> *** 370,375 ****\n> --- 404,418 ----\n> CREATE TYPE bigobj (INPUT = lo_filein, OUTPUT = lo_fileout,\n> INTERNALLENGTH = VARIABLE);\n> CREATE TABLE big_objs (id int4, obj bigobj);\n> + </programlisting>\n> + </para>\n> + \n> + <para>\n> + This example creates a composite type and uses it in\n> + a table function definition:\n> + <programlisting>\n> + CREATE TYPE compfoo AS (f1 int, f2 int);\n> + CREATE FUNCTION getfoo() RETURNS SETOF compfoo AS 'SELECT fooid, foorefid FROM foo' LANGUAGE SQL;\n> </programlisting>\n> </para>\n> </refsect1>\n> Index: src/backend/catalog/heap.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/catalog/heap.c,v\n> retrieving revision 1.220\n> diff -c -r1.220 heap.c\n> *** src/backend/catalog/heap.c\t11 Aug 2002 21:17:34 -0000\t1.220\n> --- src/backend/catalog/heap.c\t15 Aug 2002 04:21:03 -0000\n> ***************\n> *** 357,365 ****\n> \t/*\n> \t * first check for collision with system attribute names\n> \t *\n> ! \t * Skip this for a view, since it doesn't have system attributes.\n> \t */\n> ! \tif (relkind != RELKIND_VIEW)\n> \t{\n> \t\tfor (i = 0; i < natts; i++)\n> \t\t{\n> --- 357,366 ----\n> \t/*\n> \t * first check for collision with system attribute names\n> \t *\n> ! \t * Skip this for a view and type relation, since it doesn't have system\n> ! \t * attributes.\n> \t */\n> ! \tif (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)\n> \t{\n> \t\tfor (i = 0; i < natts; i++)\n> \t\t{\n> ***************\n> *** 473,482 ****\n> \n> \t/*\n> \t * Next we add the system attributes. Skip OID if rel has no OIDs.\n> ! \t * Skip all for a view. We don't bother with making datatype\n> ! 
\t * dependencies here, since presumably all these types are pinned.\n> \t */\n> ! \tif (relkind != RELKIND_VIEW)\n> \t{\n> \t\tdpp = SysAtt;\n> \t\tfor (i = 0; i < -1 - FirstLowInvalidHeapAttributeNumber; i++)\n> --- 474,483 ----\n> \n> \t/*\n> \t * Next we add the system attributes. Skip OID if rel has no OIDs.\n> ! \t * Skip all for a view or type relation. We don't bother with making\n> ! \t * datatype dependencies here, since presumably all these types are pinned.\n> \t */\n> ! \tif (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)\n> \t{\n> \t\tdpp = SysAtt;\n> \t\tfor (i = 0; i < -1 - FirstLowInvalidHeapAttributeNumber; i++)\n> ***************\n> *** 689,701 ****\n> \t * physical disk file. (If we fail further down, it's the smgr's\n> \t * responsibility to remove the disk file again.)\n> \t *\n> ! \t * NB: create a physical file only if it's not a view.\n> \t */\n> \tnew_rel_desc = heap_create(relname,\n> \t\t\t\t\t\t\t relnamespace,\n> \t\t\t\t\t\t\t tupdesc,\n> \t\t\t\t\t\t\t shared_relation,\n> ! \t\t\t\t\t\t\t (relkind != RELKIND_VIEW),\n> \t\t\t\t\t\t\t allow_system_table_mods);\n> \n> \t/* Fetch the relation OID assigned by heap_create */\n> --- 690,703 ----\n> \t * physical disk file. (If we fail further down, it's the smgr's\n> \t * responsibility to remove the disk file again.)\n> \t *\n> ! \t * NB: create a physical file only if it's not a view or type relation.\n> \t */\n> \tnew_rel_desc = heap_create(relname,\n> \t\t\t\t\t\t\t relnamespace,\n> \t\t\t\t\t\t\t tupdesc,\n> \t\t\t\t\t\t\t shared_relation,\n> ! \t\t\t\t\t\t\t (relkind != RELKIND_VIEW &&\n> ! \t\t\t\t\t\t\t relkind != RELKIND_COMPOSITE_TYPE),\n> \t\t\t\t\t\t\t allow_system_table_mods);\n> \n> \t/* Fetch the relation OID assigned by heap_create */\n> ***************\n> *** 1131,1137 ****\n> \t/*\n> \t * unlink the relation's physical file and finish up.\n> \t */\n> ! 
\tif (rel->rd_rel->relkind != RELKIND_VIEW)\n> \t\tsmgrunlink(DEFAULT_SMGR, rel);\n> \n> \t/*\n> --- 1133,1140 ----\n> \t/*\n> \t * unlink the relation's physical file and finish up.\n> \t */\n> ! \tif (rel->rd_rel->relkind != RELKIND_VIEW &&\n> ! \t\t\trel->rd_rel->relkind != RELKIND_COMPOSITE_TYPE)\n> \t\tsmgrunlink(DEFAULT_SMGR, rel);\n> \n> \t/*\n> Index: src/backend/catalog/namespace.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/catalog/namespace.c,v\n> retrieving revision 1.30\n> diff -c -r1.30 namespace.c\n> *** src/backend/catalog/namespace.c\t9 Aug 2002 16:45:14 -0000\t1.30\n> --- src/backend/catalog/namespace.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 1585,1590 ****\n> --- 1585,1591 ----\n> \t\t\tcase RELKIND_RELATION:\n> \t\t\tcase RELKIND_SEQUENCE:\n> \t\t\tcase RELKIND_VIEW:\n> + \t\t\tcase RELKIND_COMPOSITE_TYPE:\n> \t\t\t\tAssertTupleDescHasOid(pgclass->rd_att);\n> \t\t\t\tobject.classId = RelOid_pg_class;\n> \t\t\t\tobject.objectId = HeapTupleGetOid(tuple);\n> Index: src/backend/catalog/pg_type.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/catalog/pg_type.c,v\n> retrieving revision 1.77\n> diff -c -r1.77 pg_type.c\n> *** src/backend/catalog/pg_type.c\t5 Aug 2002 03:29:16 -0000\t1.77\n> --- src/backend/catalog/pg_type.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 311,325 ****\n> \n> \t\t/*\n> \t\t * If the type is a rowtype for a relation, mark it as internally\n> ! \t\t * dependent on the relation. This allows it to be auto-dropped\n> ! \t\t * when the relation is, and not otherwise.\n> \t\t */\n> \t\tif (OidIsValid(relationOid))\n> \t\t{\n> \t\t\treferenced.classId = RelOid_pg_class;\n> \t\t\treferenced.objectId = relationOid;\n> \t\t\treferenced.objectSubId = 0;\n> ! 
\t\t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);\n> \t\t}\n> \n> \t\t/*\n> --- 311,338 ----\n> \n> \t\t/*\n> \t\t * If the type is a rowtype for a relation, mark it as internally\n> ! \t\t * dependent on the relation, *unless* it is a stand-alone composite\n> ! \t\t * type relation. For the latter case, we have to reverse the\n> ! \t\t * dependency.\n> ! \t\t *\n> ! \t\t * In the former case, this allows the type to be auto-dropped\n> ! \t\t * when the relation is, and not otherwise. And in the latter,\n> ! \t\t * of course we get the opposite effect.\n> \t\t */\n> \t\tif (OidIsValid(relationOid))\n> \t\t{\n> + \t\t\tRelation\trel = relation_open(relationOid, AccessShareLock);\n> + \t\t\tchar\t\trelkind = rel->rd_rel->relkind;\n> + \t\t\trelation_close(rel, AccessShareLock);\n> + \n> \t\t\treferenced.classId = RelOid_pg_class;\n> \t\t\treferenced.objectId = relationOid;\n> \t\t\treferenced.objectSubId = 0;\n> ! \n> ! \t\t\tif (relkind != RELKIND_COMPOSITE_TYPE)\n> ! \t\t\t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);\n> ! \t\t\telse\n> ! 
\t\t\t\trecordDependencyOn(&referenced, &myself, DEPENDENCY_INTERNAL);\n> \t\t}\n> \n> \t\t/*\n> Index: src/backend/commands/copy.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/commands/copy.c,v\n> retrieving revision 1.162\n> diff -c -r1.162 copy.c\n> *** src/backend/commands/copy.c\t2 Aug 2002 18:15:06 -0000\t1.162\n> --- src/backend/commands/copy.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 398,403 ****\n> --- 398,406 ----\n> \t\t\tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t\t\t\telog(ERROR, \"You cannot copy view %s\",\n> \t\t\t\t\t RelationGetRelationName(rel));\n> + \t\t\telse if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\t\t\telog(ERROR, \"You cannot copy type relation %s\",\n> + \t\t\t\t\t RelationGetRelationName(rel));\n> \t\t\telse if (rel->rd_rel->relkind == RELKIND_SEQUENCE)\n> \t\t\t\telog(ERROR, \"You cannot change sequence relation %s\",\n> \t\t\t\t\t RelationGetRelationName(rel));\n> ***************\n> *** 442,447 ****\n> --- 445,453 ----\n> \t\t{\n> \t\t\tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t\t\t\telog(ERROR, \"You cannot copy view %s\",\n> + \t\t\t\t\t RelationGetRelationName(rel));\n> + \t\t\telse if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\t\t\telog(ERROR, \"You cannot copy type relation %s\",\n> \t\t\t\t\t RelationGetRelationName(rel));\n> \t\t\telse if (rel->rd_rel->relkind == RELKIND_SEQUENCE)\n> \t\t\t\telog(ERROR, \"You cannot copy sequence %s\",\n> Index: src/backend/commands/tablecmds.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/commands/tablecmds.c,v\n> retrieving revision 1.28\n> diff -c -r1.28 tablecmds.c\n> *** src/backend/commands/tablecmds.c\t7 Aug 2002 21:45:01 -0000\t1.28\n> --- src/backend/commands/tablecmds.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 345,350 ****\n> --- 345,354 ----\n> \t\telog(ERROR, 
\"TRUNCATE cannot be used on views. '%s' is a view\",\n> \t\t\t RelationGetRelationName(rel));\n> \n> + \tif (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\telog(ERROR, \"TRUNCATE cannot be used on type relations. '%s' is a type\",\n> + \t\t\t RelationGetRelationName(rel));\n> + \n> \tif (!allowSystemTableMods && IsSystemRelation(rel))\n> \t\telog(ERROR, \"TRUNCATE cannot be used on system tables. '%s' is a system table\",\n> \t\t\t RelationGetRelationName(rel));\n> ***************\n> *** 3210,3221 ****\n> \t\tcase RELKIND_RELATION:\n> \t\tcase RELKIND_INDEX:\n> \t\tcase RELKIND_VIEW:\n> \t\tcase RELKIND_SEQUENCE:\n> \t\tcase RELKIND_TOASTVALUE:\n> \t\t\t/* ok to change owner */\n> \t\t\tbreak;\n> \t\tdefault:\n> ! \t\t\telog(ERROR, \"ALTER TABLE: relation \\\"%s\\\" is not a table, TOAST table, index, view, or sequence\",\n> \t\t\t\t NameStr(tuple_class->relname));\n> \t}\n> }\n> --- 3214,3226 ----\n> \t\tcase RELKIND_RELATION:\n> \t\tcase RELKIND_INDEX:\n> \t\tcase RELKIND_VIEW:\n> + \t\tcase RELKIND_COMPOSITE_TYPE:\n> \t\tcase RELKIND_SEQUENCE:\n> \t\tcase RELKIND_TOASTVALUE:\n> \t\t\t/* ok to change owner */\n> \t\t\tbreak;\n> \t\tdefault:\n> ! 
\t\t\telog(ERROR, \"ALTER TABLE: relation \\\"%s\\\" is not a table, TOAST table, index, view, type, or sequence\",\n> \t\t\t\t NameStr(tuple_class->relname));\n> \t}\n> }\n> Index: src/backend/commands/typecmds.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/commands/typecmds.c,v\n> retrieving revision 1.8\n> diff -c -r1.8 typecmds.c\n> *** src/backend/commands/typecmds.c\t24 Jul 2002 19:11:09 -0000\t1.8\n> --- src/backend/commands/typecmds.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 38,43 ****\n> --- 38,44 ----\n> #include \"catalog/namespace.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"commands/defrem.h\"\n> + #include \"commands/tablecmds.h\"\n> #include \"miscadmin.h\"\n> #include \"parser/parse_func.h\"\n> #include \"parser/parse_type.h\"\n> ***************\n> *** 50,56 ****\n> \n> static Oid findTypeIOFunction(List *procname, bool isOutput);\n> \n> - \n> /*\n> * DefineType\n> *\t\tRegisters a new type.\n> --- 51,56 ----\n> ***************\n> *** 665,668 ****\n> --- 665,707 ----\n> \t}\n> \n> \treturn procOid;\n> + }\n> + \n> + /*-------------------------------------------------------------------\n> + * DefineCompositeType\n> + *\n> + * Create a Composite Type relation.\n> + * `DefineRelation' does all the work, we just provide the correct\n> + * arguments!\n> + *\n> + * If the relation already exists, then 'DefineRelation' will abort\n> + * the xact...\n> + *\n> + * DefineCompositeType returns relid for use when creating\n> + * an implicit composite type during function creation\n> + *-------------------------------------------------------------------\n> + */\n> + Oid\n> + DefineCompositeType(const RangeVar *typevar, List *coldeflist)\n> + {\n> + \tCreateStmt *createStmt = makeNode(CreateStmt);\n> + \n> + \tif (coldeflist == NIL)\n> + \t\telog(ERROR, \"attempted to define composite type relation with\"\n> + \t\t\t\t\t\" no attrs\");\n> + \n> + \t/*\n> + \t * now 
create the parameters for keys/inheritance etc. All of them are\n> + \t * nil...\n> + \t */\n> + \tcreateStmt->relation = (RangeVar *) typevar;\n> + \tcreateStmt->tableElts = coldeflist;\n> + \tcreateStmt->inhRelations = NIL;\n> + \tcreateStmt->constraints = NIL;\n> + \tcreateStmt->hasoids = false;\n> + \n> + \t/*\n> + \t * finally create the relation...\n> + \t */\n> + \treturn DefineRelation(createStmt, RELKIND_COMPOSITE_TYPE);\n> }\n> Index: src/backend/executor/execMain.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/executor/execMain.c,v\n> retrieving revision 1.173\n> diff -c -r1.173 execMain.c\n> *** src/backend/executor/execMain.c\t7 Aug 2002 21:45:02 -0000\t1.173\n> --- src/backend/executor/execMain.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 786,791 ****\n> --- 786,795 ----\n> \t\t\telog(ERROR, \"You can't change view relation %s\",\n> \t\t\t\t RelationGetRelationName(resultRelationDesc));\n> \t\t\tbreak;\n> + \t\tcase RELKIND_COMPOSITE_TYPE:\n> + \t\t\telog(ERROR, \"You can't change type relation %s\",\n> + \t\t\t\t RelationGetRelationName(resultRelationDesc));\n> + \t\t\tbreak;\n> \t}\n> \n> \tMemSet(resultRelInfo, 0, sizeof(ResultRelInfo));\n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.200\n> diff -c -r1.200 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t4 Aug 2002 19:48:09 -0000\t1.200\n> --- src/backend/nodes/copyfuncs.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 2233,2238 ****\n> --- 2233,2249 ----\n> \treturn newnode;\n> }\n> \n> + static CompositeTypeStmt *\n> + _copyCompositeTypeStmt(CompositeTypeStmt *from)\n> + {\n> + \tCompositeTypeStmt *newnode = makeNode(CompositeTypeStmt);\n> + \n> + \tNode_Copy(from, newnode, typevar);\n> + \tNode_Copy(from, newnode, coldeflist);\n> + \n> + 
\treturn newnode;\n> + }\n> + \n> static ViewStmt *\n> _copyViewStmt(ViewStmt *from)\n> {\n> ***************\n> *** 2938,2943 ****\n> --- 2949,2957 ----\n> \t\t\tbreak;\n> \t\tcase T_TransactionStmt:\n> \t\t\tretval = _copyTransactionStmt(from);\n> + \t\t\tbreak;\n> + \t\tcase T_CompositeTypeStmt:\n> + \t\t\tretval = _copyCompositeTypeStmt(from);\n> \t\t\tbreak;\n> \t\tcase T_ViewStmt:\n> \t\t\tretval = _copyViewStmt(from);\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.149\n> diff -c -r1.149 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t4 Aug 2002 23:49:59 -0000\t1.149\n> --- src/backend/nodes/equalfuncs.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 1062,1067 ****\n> --- 1062,1078 ----\n> }\n> \n> static bool\n> + _equalCompositeTypeStmt(CompositeTypeStmt *a, CompositeTypeStmt *b)\n> + {\n> + \tif (!equal(a->typevar, b->typevar))\n> + \t\treturn false;\n> + \tif (!equal(a->coldeflist, b->coldeflist))\n> + \t\treturn false;\n> + \n> + \treturn true;\n> + }\n> + \n> + static bool\n> _equalViewStmt(ViewStmt *a, ViewStmt *b)\n> {\n> \tif (!equal(a->view, b->view))\n> ***************\n> *** 2110,2115 ****\n> --- 2121,2129 ----\n> \t\t\tbreak;\n> \t\tcase T_TransactionStmt:\n> \t\t\tretval = _equalTransactionStmt(a, b);\n> + \t\t\tbreak;\n> + \t\tcase T_CompositeTypeStmt:\n> + \t\t\tretval = _equalCompositeTypeStmt(a, b);\n> \t\t\tbreak;\n> \t\tcase T_ViewStmt:\n> \t\t\tretval = _equalViewStmt(a, b);\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/parser/gram.y,v\n> retrieving revision 2.358\n> diff -c -r2.358 gram.y\n> *** src/backend/parser/gram.y\t10 Aug 2002 19:01:53 -0000\t2.358\n> --- src/backend/parser/gram.y\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 205,211 ****\n> 
\n> %type <list>\tstmtblock, stmtmulti,\n> \t\t\t\tOptTableElementList, TableElementList, OptInherit, definition,\n> ! \t\t\t\topt_distinct, opt_definition, func_args,\n> \t\t\t\tfunc_args_list, func_as, createfunc_opt_list\n> \t\t\t\toper_argtypes, RuleActionList, RuleActionMulti,\n> \t\t\t\topt_column_list, columnList, opt_name_list,\n> --- 205,211 ----\n> \n> %type <list>\tstmtblock, stmtmulti,\n> \t\t\t\tOptTableElementList, TableElementList, OptInherit, definition,\n> ! \t\t\t\topt_distinct, opt_definition, func_args, rowdefinition\n> \t\t\t\tfunc_args_list, func_as, createfunc_opt_list\n> \t\t\t\toper_argtypes, RuleActionList, RuleActionMulti,\n> \t\t\t\topt_column_list, columnList, opt_name_list,\n> ***************\n> *** 2233,2238 ****\n> --- 2233,2271 ----\n> \t\t\t\t\tn->definition = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> + \t\t\t| CREATE TYPE_P any_name AS rowdefinition\n> + \t\t\t\t{\n> + \t\t\t\t\tCompositeTypeStmt *n = makeNode(CompositeTypeStmt);\n> + \t\t\t\t\tRangeVar *r = makeNode(RangeVar);\n> + \n> + \t\t\t\t\tswitch (length($3))\n> + \t\t\t\t\t{\n> + \t\t\t\t\t\tcase 1:\n> + \t\t\t\t\t\t\tr->catalogname = NULL;\n> + \t\t\t\t\t\t\tr->schemaname = NULL;\n> + \t\t\t\t\t\t\tr->relname = strVal(lfirst($3));\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\tcase 2:\n> + \t\t\t\t\t\t\tr->catalogname = NULL;\n> + \t\t\t\t\t\t\tr->schemaname = strVal(lfirst($3));\n> + \t\t\t\t\t\t\tr->relname = strVal(lsecond($3));\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\tcase 3:\n> + \t\t\t\t\t\t\tr->catalogname = strVal(lfirst($3));\n> + \t\t\t\t\t\t\tr->schemaname = strVal(lsecond($3));\n> + \t\t\t\t\t\t\tr->relname = strVal(lfirst(lnext(lnext($3))));\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\tdefault:\n> + \t\t\t\t\t\t\telog(ERROR,\n> + \t\t\t\t\t\t\t\"Improper qualified name \"\n> + \t\t\t\t\t\t\t\"(too many dotted names): %s\",\n> + \t\t\t\t\t\t\t\t NameListToString($3));\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t}\n> + \t\t\t\t\tn->typevar = r;\n> + 
\t\t\t\t\tn->coldeflist = $5;\n> + \t\t\t\t\t$$ = (Node *)n;\n> + \t\t\t\t}\n> \t\t\t| CREATE CHARACTER SET opt_as any_name GET definition opt_collate\n> \t\t\t\t{\n> \t\t\t\t\tDefineStmt *n = makeNode(DefineStmt);\n> ***************\n> *** 2241,2246 ****\n> --- 2274,2282 ----\n> \t\t\t\t\tn->definition = $7;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> + \t\t;\n> + \n> + rowdefinition: '(' TableFuncElementList ')'\t\t\t{ $$ = $2; }\n> \t\t;\n> \n> definition: '(' def_list ')'\t\t\t\t\t\t{ $$ = $2; }\n> Index: src/backend/storage/buffer/bufmgr.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/storage/buffer/bufmgr.c,v\n> retrieving revision 1.129\n> diff -c -r1.129 bufmgr.c\n> *** src/backend/storage/buffer/bufmgr.c\t11 Aug 2002 21:17:34 -0000\t1.129\n> --- src/backend/storage/buffer/bufmgr.c\t15 Aug 2002 04:47:07 -0000\n> ***************\n> *** 1051,1056 ****\n> --- 1051,1058 ----\n> \t */\n> \tif (relation->rd_rel->relkind == RELKIND_VIEW)\n> \t\trelation->rd_nblocks = 0;\n> + \telse if (relation->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\trelation->rd_nblocks = 0;\n> \telse if (!relation->rd_isnew && !relation->rd_istemp)\n> \t\trelation->rd_nblocks = smgrnblocks(DEFAULT_SMGR, relation);\n> \treturn relation->rd_nblocks;\n> ***************\n> *** 1068,1073 ****\n> --- 1070,1077 ----\n> RelationUpdateNumberOfBlocks(Relation relation)\n> {\n> \tif (relation->rd_rel->relkind == RELKIND_VIEW)\n> + \t\trelation->rd_nblocks = 0;\n> + \telse if (relation->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> \t\trelation->rd_nblocks = 0;\n> \telse\n> \t\trelation->rd_nblocks = smgrnblocks(DEFAULT_SMGR, relation);\n> Index: src/backend/storage/smgr/smgr.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/storage/smgr/smgr.c,v\n> retrieving revision 1.58\n> diff -c -r1.58 smgr.c\n> *** src/backend/storage/smgr/smgr.c\t6 Aug 
2002 02:36:34 -0000\t1.58\n> --- src/backend/storage/smgr/smgr.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 263,268 ****\n> --- 263,270 ----\n> \n> \tif (reln->rd_rel->relkind == RELKIND_VIEW)\n> \t\treturn -1;\n> + \tif (reln->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\treturn -1;\n> \tif ((fd = (*(smgrsw[which].smgr_open)) (reln)) < 0)\n> \t\tif (!failOK)\n> \t\t\telog(ERROR, \"cannot open %s: %m\", RelationGetRelationName(reln));\n> Index: src/backend/tcop/postgres.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/tcop/postgres.c,v\n> retrieving revision 1.281\n> diff -c -r1.281 postgres.c\n> *** src/backend/tcop/postgres.c\t10 Aug 2002 20:29:18 -0000\t1.281\n> --- src/backend/tcop/postgres.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 2233,2238 ****\n> --- 2233,2242 ----\n> \t\t\t}\n> \t\t\tbreak;\n> \n> + \t\tcase T_CompositeTypeStmt:\n> + \t\t\ttag = \"CREATE TYPE\";\n> + \t\t\tbreak;\n> + \n> \t\tcase T_ViewStmt:\n> \t\t\ttag = \"CREATE VIEW\";\n> \t\t\tbreak;\n> Index: src/backend/tcop/utility.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/tcop/utility.c,v\n> retrieving revision 1.169\n> diff -c -r1.169 utility.c\n> *** src/backend/tcop/utility.c\t7 Aug 2002 21:45:02 -0000\t1.169\n> --- src/backend/tcop/utility.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 70,75 ****\n> --- 70,76 ----\n> \t{RELKIND_SEQUENCE, \"a\", \"sequence\", \"SEQUENCE\"},\n> \t{RELKIND_VIEW, \"a\", \"view\", \"VIEW\"},\n> \t{RELKIND_INDEX, \"an\", \"index\", \"INDEX\"},\n> + \t{RELKIND_COMPOSITE_TYPE, \"a\", \"type\", \"TYPE\"},\n> \t{'\\0', \"a\", \"???\", \"???\"}\n> };\n> \n> ***************\n> *** 570,575 ****\n> --- 571,589 ----\n> \t\t\t\t\t\tDefineAggregate(stmt->defnames, stmt->definition);\n> \t\t\t\t\t\tbreak;\n> \t\t\t\t}\n> + \t\t\t}\n> + \t\t\tbreak;\n> + \n> + \t\tcase 
T_CompositeTypeStmt:\t\t/* CREATE TYPE (composite) */\n> + \t\t\t{\n> + \t\t\t\tOid\trelid;\n> + \t\t\t\tCompositeTypeStmt *stmt = (CompositeTypeStmt *) parsetree;\n> + \n> + \t\t\t\t/*\n> + \t\t\t\t * DefineCompositeType returns relid for use when creating\n> + \t\t\t\t * an implicit composite type during function creation\n> + \t\t\t\t */\n> + \t\t\t\trelid = DefineCompositeType(stmt->typevar, stmt->coldeflist);\n> \t\t\t}\n> \t\t\tbreak;\n> \n> Index: src/backend/utils/adt/tid.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/utils/adt/tid.c,v\n> retrieving revision 1.32\n> diff -c -r1.32 tid.c\n> *** src/backend/utils/adt/tid.c\t16 Jul 2002 17:55:25 -0000\t1.32\n> --- src/backend/utils/adt/tid.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 226,231 ****\n> --- 226,234 ----\n> \tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t\treturn currtid_for_view(rel, tid);\n> \n> + \tif (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\telog(ERROR, \"currtid can't handle type relations\");\n> + \n> \tItemPointerCopy(tid, result);\n> \theap_get_latest_tid(rel, SnapshotNow, result);\n> \n> ***************\n> *** 248,253 ****\n> --- 251,259 ----\n> \trel = heap_openrv(relrv, AccessShareLock);\n> \tif (rel->rd_rel->relkind == RELKIND_VIEW)\n> \t\treturn currtid_for_view(rel, tid);\n> + \n> + \tif (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\telog(ERROR, \"currtid can't handle type relations\");\n> \n> \tresult = (ItemPointer) palloc(sizeof(ItemPointerData));\n> \tItemPointerCopy(tid, result);\n> Index: src/bin/pg_dump/common.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_dump/common.c,v\n> retrieving revision 1.67\n> diff -c -r1.67 common.c\n> *** src/bin/pg_dump/common.c\t30 Jul 2002 21:56:04 -0000\t1.67\n> --- src/bin/pg_dump/common.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 215,223 
****\n> \n> \tfor (i = 0; i < numTables; i++)\n> \t{\n> ! \t\t/* Sequences and views never have parents */\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_VIEW)\n> \t\t\tcontinue;\n> \n> \t\t/* Don't bother computing anything for non-target tables, either */\n> --- 215,224 ----\n> \n> \tfor (i = 0; i < numTables; i++)\n> \t{\n> ! \t\t/* Sequences, views, and types never have parents */\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_VIEW ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_COMPOSITE_TYPE)\n> \t\t\tcontinue;\n> \n> \t\t/* Don't bother computing anything for non-target tables, either */\n> ***************\n> *** 269,277 ****\n> \n> \tfor (i = 0; i < numTables; i++)\n> \t{\n> ! \t\t/* Sequences and views never have parents */\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_VIEW)\n> \t\t\tcontinue;\n> \n> \t\t/* Don't bother computing anything for non-target tables, either */\n> --- 270,279 ----\n> \n> \tfor (i = 0; i < numTables; i++)\n> \t{\n> ! \t\t/* Sequences, views, and types never have parents */\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE ||\n> ! \t\t\ttblinfo[i].relkind == RELKIND_VIEW ||\n> ! 
\t\t\ttblinfo[i].relkind == RELKIND_COMPOSITE_TYPE)\n> \t\t\tcontinue;\n> \n> \t\t/* Don't bother computing anything for non-target tables, either */\n> Index: src/bin/pg_dump/pg_dump.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_dump/pg_dump.c,v\n> retrieving revision 1.281\n> diff -c -r1.281 pg_dump.c\n> *** src/bin/pg_dump/pg_dump.c\t10 Aug 2002 16:57:31 -0000\t1.281\n> --- src/bin/pg_dump/pg_dump.c\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 95,100 ****\n> --- 95,101 ----\n> \t\t\t\t\t\t\tFuncInfo *g_finfo, int numFuncs,\n> \t\t\t\t\t\t\tTypeInfo *g_tinfo, int numTypes);\n> static void dumpOneDomain(Archive *fout, TypeInfo *tinfo);\n> + static void dumpOneCompositeType(Archive *fout, TypeInfo *tinfo);\n> static void dumpOneTable(Archive *fout, TableInfo *tbinfo,\n> \t\t\t\t\t\t TableInfo *g_tblinfo);\n> static void dumpOneSequence(Archive *fout, TableInfo *tbinfo,\n> ***************\n> *** 1171,1176 ****\n> --- 1172,1181 ----\n> \t\tif (tblinfo[i].relkind == RELKIND_VIEW)\n> \t\t\tcontinue;\n> \n> + \t\t/* Skip TYPE relations */\n> + \t\tif (tblinfo[i].relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\t\tcontinue;\n> + \n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE)\t\t/* already dumped */\n> \t\t\tcontinue;\n> \n> ***************\n> *** 1575,1580 ****\n> --- 1580,1586 ----\n> \tint\t\t\ti_usename;\n> \tint\t\t\ti_typelem;\n> \tint\t\t\ti_typrelid;\n> + \tint\t\t\ti_typrelkind;\n> \tint\t\t\ti_typtype;\n> \tint\t\t\ti_typisdefined;\n> \n> ***************\n> *** 1595,1601 ****\n> \t\tappendPQExpBuffer(query, \"SELECT pg_type.oid, typname, \"\n> \t\t\t\t\t\t \"typnamespace, \"\n> \t\t\t\t\t\t \"(select usename from pg_user where typowner = usesysid) as usename, \"\n> ! 
\t\t\t\t\t\t \"typelem, typrelid, typtype, typisdefined \"\n> \t\t\t\t\t\t \"FROM pg_type\");\n> \t}\n> \telse\n> --- 1601,1609 ----\n> \t\tappendPQExpBuffer(query, \"SELECT pg_type.oid, typname, \"\n> \t\t\t\t\t\t \"typnamespace, \"\n> \t\t\t\t\t\t \"(select usename from pg_user where typowner = usesysid) as usename, \"\n> ! \t\t\t\t\t\t \"typelem, typrelid, \"\n> ! \t\t\t\t\t\t \"(select relkind from pg_class where oid = typrelid) as typrelkind, \"\n> ! \t\t\t\t\t\t \"typtype, typisdefined \"\n> \t\t\t\t\t\t \"FROM pg_type\");\n> \t}\n> \telse\n> ***************\n> *** 1603,1609 ****\n> \t\tappendPQExpBuffer(query, \"SELECT pg_type.oid, typname, \"\n> \t\t\t\t\t\t \"0::oid as typnamespace, \"\n> \t\t\t\t\t\t \"(select usename from pg_user where typowner = usesysid) as usename, \"\n> ! \t\t\t\t\t\t \"typelem, typrelid, typtype, typisdefined \"\n> \t\t\t\t\t\t \"FROM pg_type\");\n> \t}\n> \n> --- 1611,1619 ----\n> \t\tappendPQExpBuffer(query, \"SELECT pg_type.oid, typname, \"\n> \t\t\t\t\t\t \"0::oid as typnamespace, \"\n> \t\t\t\t\t\t \"(select usename from pg_user where typowner = usesysid) as usename, \"\n> ! \t\t\t\t\t\t \"typelem, typrelid, \"\n> ! \t\t\t\t\t\t \"''::char as typrelkind, \"\n> ! 
\t\t\t\t\t\t \"typtype, typisdefined \"\n> \t\t\t\t\t\t \"FROM pg_type\");\n> \t}\n> \n> ***************\n> *** 1625,1630 ****\n> --- 1635,1641 ----\n> \ti_usename = PQfnumber(res, \"usename\");\n> \ti_typelem = PQfnumber(res, \"typelem\");\n> \ti_typrelid = PQfnumber(res, \"typrelid\");\n> + \ti_typrelkind = PQfnumber(res, \"typrelkind\");\n> \ti_typtype = PQfnumber(res, \"typtype\");\n> \ti_typisdefined = PQfnumber(res, \"typisdefined\");\n> \n> ***************\n> *** 1637,1642 ****\n> --- 1648,1654 ----\n> \t\ttinfo[i].usename = strdup(PQgetvalue(res, i, i_usename));\n> \t\ttinfo[i].typelem = strdup(PQgetvalue(res, i, i_typelem));\n> \t\ttinfo[i].typrelid = strdup(PQgetvalue(res, i, i_typrelid));\n> + \t\ttinfo[i].typrelkind = *PQgetvalue(res, i, i_typrelkind);\n> \t\ttinfo[i].typtype = *PQgetvalue(res, i, i_typtype);\n> \n> \t\t/*\n> ***************\n> *** 2102,2108 ****\n> \t\tappendPQExpBuffer(query,\n> \t\t\t\t\t\t \"SELECT pg_class.oid, relname, relacl, relkind, \"\n> \t\t\t\t\t\t \"relnamespace, \"\n> - \n> \t\t\t\t\t\t \"(select usename from pg_user where relowner = usesysid) as usename, \"\n> \t\t\t\t\t\t \"relchecks, reltriggers, \"\n> \t\t\t\t\t\t \"relhasindex, relhasrules, relhasoids \"\n> --- 2114,2119 ----\n> ***************\n> *** 2113,2118 ****\n> --- 2124,2130 ----\n> \t}\n> \telse if (g_fout->remoteVersion >= 70200)\n> \t{\n> + \t\t/* before 7.3 there were no type relations with relkind 'c' */\n> \t\tappendPQExpBuffer(query,\n> \t\t\t\t\t\t \"SELECT pg_class.oid, relname, relacl, relkind, \"\n> \t\t\t\t\t\t \"0::oid as relnamespace, \"\n> ***************\n> *** 2356,2361 ****\n> --- 2368,2377 ----\n> \t\tif (tblinfo[i].relkind == RELKIND_SEQUENCE)\n> \t\t\tcontinue;\n> \n> + \t\t/* Don't bother to collect info for type relations */\n> + \t\tif (tblinfo[i].relkind == RELKIND_COMPOSITE_TYPE)\n> + \t\t\tcontinue;\n> + \n> \t\t/* Don't bother with uninteresting tables, either */\n> \t\tif (!tblinfo[i].interesting)\n> \t\t\tcontinue;\n> 
***************\n> *** 3173,3178 ****\n> --- 3189,3293 ----\n> }\n> \n> /*\n> + * dumpOneCompositeType\n> + * writes out to fout the queries to recreate a user-defined stand-alone\n> + * composite type as requested by dumpTypes\n> + */\n> + static void\n> + dumpOneCompositeType(Archive *fout, TypeInfo *tinfo)\n> + {\n> + \tPQExpBuffer q = createPQExpBuffer();\n> + \tPQExpBuffer delq = createPQExpBuffer();\n> + \tPQExpBuffer query = createPQExpBuffer();\n> + \tPGresult *res;\n> + \tint\t\t\tntups;\n> + \tchar\t *attname;\n> + \tchar\t *atttypdefn;\n> + \tchar\t *attbasetype;\n> + \tconst char *((*deps)[]);\n> + \tint\t\t\tdepIdx = 0;\n> + \tint\t\t\ti;\n> + \n> + \tdeps = malloc(sizeof(char *) * 10);\n> + \n> + \t/* Set proper schema search path so type references list correctly */\n> + \tselectSourceSchema(tinfo->typnamespace->nspname);\n> + \n> + \t/* Fetch type specific details */\n> + \t/* We assume here that remoteVersion must be at least 70300 */\n> + \n> + \tappendPQExpBuffer(query, \"SELECT a.attname, \"\n> + \t\t\t\t\t \"pg_catalog.format_type(a.atttypid, a.atttypmod) as atttypdefn, \"\n> + \t\t\t\t\t \"a.atttypid as attbasetype \"\n> + \t\t\t\t\t \"FROM pg_catalog.pg_type t, pg_catalog.pg_attribute a \"\n> + \t\t\t\t\t \"WHERE t.oid = '%s'::pg_catalog.oid \"\n> + \t\t\t\t\t \"AND a.attrelid = t.typrelid\",\n> + \t\t\t\t\t tinfo->oid);\n> + \n> + \tres = PQexec(g_conn, query->data);\n> + \tif (!res ||\n> + \t\tPQresultStatus(res) != PGRES_TUPLES_OK)\n> + \t{\n> + \t\twrite_msg(NULL, \"query to obtain type information failed: %s\", PQerrorMessage(g_conn));\n> + \t\texit_nicely();\n> + \t}\n> + \n> + \t/* Expecting at least a single result */\n> + \tntups = PQntuples(res);\n> + \tif (ntups < 1)\n> + \t{\n> + \t\twrite_msg(NULL, \"Got no rows from: %s\", query->data);\n> + \t\texit_nicely();\n> + \t}\n> + \n> + \t/* DROP must be fully qualified in case same name appears in pg_catalog */\n> + \tappendPQExpBuffer(delq, \"DROP TYPE %s.\",\n> + \t\t\t\t\t 
fmtId(tinfo->typnamespace->nspname, force_quotes));\n> + \tappendPQExpBuffer(delq, \"%s RESTRICT;\\n\",\n> + \t\t\t\t\t fmtId(tinfo->typname, force_quotes));\n> + \n> + \tappendPQExpBuffer(q,\n> + \t\t\t\t\t \"CREATE TYPE %s AS (\",\n> + \t\t\t\t\t fmtId(tinfo->typname, force_quotes));\n> + \n> + \tfor (i = 0; i < ntups; i++)\n> + \t{\n> + \t\tattname = PQgetvalue(res, i, PQfnumber(res, \"attname\"));\n> + \t\tatttypdefn = PQgetvalue(res, i, PQfnumber(res, \"atttypdefn\"));\n> + \t\tattbasetype = PQgetvalue(res, i, PQfnumber(res, \"attbasetype\"));\n> + \n> + \t\tif (i > 0)\n> + \t\t\tappendPQExpBuffer(q, \",\\n\\t %s %s\", attname, atttypdefn);\n> + \t\telse\n> + \t\t\tappendPQExpBuffer(q, \"%s %s\", attname, atttypdefn);\n> + \n> + \t\t/* Depends on the base type */\n> + \t\t(*deps)[depIdx++] = strdup(attbasetype);\n> + \t}\n> + \tappendPQExpBuffer(q, \");\\n\");\n> + \n> + \t(*deps)[depIdx++] = NULL;\t\t/* End of List */\n> + \n> + \tArchiveEntry(fout, tinfo->oid, tinfo->typname,\n> + \t\t\t\t tinfo->typnamespace->nspname,\n> + \t\t\t\t tinfo->usename, \"TYPE\", deps,\n> + \t\t\t\t q->data, delq->data, NULL, NULL, NULL);\n> + \n> + \t/*** Dump Type Comments ***/\n> + \tresetPQExpBuffer(q);\n> + \n> + \tappendPQExpBuffer(q, \"TYPE %s\", fmtId(tinfo->typname, force_quotes));\n> + \tdumpComment(fout, q->data,\n> + \t\t\t\ttinfo->typnamespace->nspname, tinfo->usename,\n> + \t\t\t\ttinfo->oid, \"pg_type\", 0, NULL);\n> + \n> + \tPQclear(res);\n> + \tdestroyPQExpBuffer(q);\n> + \tdestroyPQExpBuffer(delq);\n> + \tdestroyPQExpBuffer(query);\n> + }\n> + \n> + /*\n> * dumpTypes\n> *\t writes out to fout the queries to recreate all the user-defined types\n> */\n> ***************\n> *** 3188,3195 ****\n> \t\tif (!tinfo[i].typnamespace->dump)\n> \t\t\tcontinue;\n> \n> ! \t\t/* skip relation types */\n> ! 
\t\tif (atooid(tinfo[i].typrelid) != 0)\n> \t\t\tcontinue;\n> \n> \t\t/* skip undefined placeholder types */\n> --- 3303,3310 ----\n> \t\tif (!tinfo[i].typnamespace->dump)\n> \t\t\tcontinue;\n> \n> ! \t\t/* skip relation types for non-stand-alone type relations*/\n> ! \t\tif (atooid(tinfo[i].typrelid) != 0 && tinfo[i].typrelkind != 'c')\n> \t\t\tcontinue;\n> \n> \t\t/* skip undefined placeholder types */\n> ***************\n> *** 3207,3212 ****\n> --- 3322,3329 ----\n> \t\t\t\t\t\t\tfinfo, numFuncs, tinfo, numTypes);\n> \t\telse if (tinfo[i].typtype == 'd')\n> \t\t\tdumpOneDomain(fout, &tinfo[i]);\n> + \t\telse if (tinfo[i].typtype == 'c')\n> + \t\t\tdumpOneCompositeType(fout, &tinfo[i]);\n> \t}\n> }\n> \n> ***************\n> *** 4832,4837 ****\n> --- 4949,4955 ----\n> \n> \t\tif (tbinfo->relkind != RELKIND_SEQUENCE)\n> \t\t\tcontinue;\n> + \n> \t\tif (tbinfo->dump)\n> \t\t{\n> \t\t\tdumpOneSequence(fout, tbinfo, schemaOnly, dataOnly);\n> ***************\n> *** 4847,4852 ****\n> --- 4965,4972 ----\n> \t\t\tTableInfo\t *tbinfo = &tblinfo[i];\n> \n> \t\t\tif (tbinfo->relkind == RELKIND_SEQUENCE) /* already dumped */\n> + \t\t\t\tcontinue;\n> + \t\t\tif (tbinfo->relkind == RELKIND_COMPOSITE_TYPE) /* dumped as a type */\n> \t\t\t\tcontinue;\n> \n> \t\t\tif (tbinfo->dump)\n> Index: src/bin/pg_dump/pg_dump.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_dump/pg_dump.h,v\n> retrieving revision 1.94\n> diff -c -r1.94 pg_dump.h\n> *** src/bin/pg_dump/pg_dump.h\t2 Aug 2002 18:15:08 -0000\t1.94\n> --- src/bin/pg_dump/pg_dump.h\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 47,52 ****\n> --- 47,53 ----\n> \tchar\t *usename;\t\t/* name of owner, or empty string */\n> \tchar\t *typelem;\t\t/* OID */\n> \tchar\t *typrelid;\t\t/* OID */\n> + \tchar\t\ttyprelkind;\t\t/* 'r', 'v', 'c', etc */\n> \tchar\t\ttyptype;\t\t/* 'b', 'c', etc */\n> \tbool\t\tisArray;\t\t/* true if user-defined array type */\n> 
\tbool\t\tisDefined;\t\t/* true if typisdefined */\n> Index: src/bin/psql/describe.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/describe.c,v\n> retrieving revision 1.60\n> diff -c -r1.60 describe.c\n> *** src/bin/psql/describe.c\t10 Aug 2002 16:01:16 -0000\t1.60\n> --- src/bin/psql/describe.c\t15 Aug 2002 04:14:09 -0000\n> ***************\n> *** 210,218 ****\n> \n> \t/*\n> \t * do not include array types (start with underscore), do not include\n> ! \t * user relations (typrelid!=0)\n> \t */\n> ! \tappendPQExpBuffer(&buf, \"WHERE t.typrelid = 0 AND t.typname !~ '^_'\\n\");\n> \n> \t/* Match name pattern against either internal or external name */\n> \tprocessNamePattern(&buf, pattern, true, false,\n> --- 210,221 ----\n> \n> \t/*\n> \t * do not include array types (start with underscore), do not include\n> ! \t * user relations (typrelid!=0) unless they are type relations\n> \t */\n> ! \tappendPQExpBuffer(&buf, \"WHERE (t.typrelid = 0 \");\n> ! \tappendPQExpBuffer(&buf, \"OR (SELECT c.relkind = 'c' FROM pg_class c \"\n> ! \t\t\t\t\t\t\t \"where c.oid = t.typrelid)) \");\n> ! 
\tappendPQExpBuffer(&buf, \"AND t.typname !~ '^_'\\n\");\n> \n> \t/* Match name pattern against either internal or external name */\n> \tprocessNamePattern(&buf, pattern, true, false,\n> Index: src/include/catalog/pg_class.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/catalog/pg_class.h,v\n> retrieving revision 1.70\n> diff -c -r1.70 pg_class.h\n> *** src/include/catalog/pg_class.h\t2 Aug 2002 18:15:09 -0000\t1.70\n> --- src/include/catalog/pg_class.h\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 169,173 ****\n> --- 169,174 ----\n> #define\t\t RELKIND_UNCATALOGED\t 'u'\t\t/* temporary heap */\n> #define\t\t RELKIND_TOASTVALUE\t 't'\t\t/* moved off huge values */\n> #define\t\t RELKIND_VIEW\t\t\t 'v'\t\t/* view */\n> + #define\t\t RELKIND_COMPOSITE_TYPE 'c'\t\t/* composite type */\n> \n> #endif /* PG_CLASS_H */\n> Index: src/include/commands/defrem.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/commands/defrem.h,v\n> retrieving revision 1.43\n> diff -c -r1.43 defrem.h\n> *** src/include/commands/defrem.h\t29 Jul 2002 22:14:11 -0000\t1.43\n> --- src/include/commands/defrem.h\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 58,63 ****\n> --- 58,64 ----\n> extern void RemoveTypeById(Oid typeOid);\n> extern void DefineDomain(CreateDomainStmt *stmt);\n> extern void RemoveDomain(List *names, DropBehavior behavior);\n> + extern Oid DefineCompositeType(const RangeVar *typevar, List *coldeflist);\n> \n> extern void DefineOpClass(CreateOpClassStmt *stmt);\n> extern void RemoveOpClass(RemoveOpClassStmt *stmt);\n> Index: src/include/nodes/nodes.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/nodes/nodes.h,v\n> retrieving revision 1.114\n> diff -c -r1.114 nodes.h\n> *** src/include/nodes/nodes.h\t29 Jul 2002 22:14:11 -0000\t1.114\n> --- 
src/include/nodes/nodes.h\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 238,243 ****\n> --- 238,244 ----\n> \tT_PrivTarget,\n> \tT_InsertDefault,\n> \tT_CreateOpClassItem,\n> + \tT_CompositeTypeStmt,\n> \n> \t/*\n> \t * TAGS FOR FUNCTION-CALL CONTEXT AND RESULTINFO NODES (see fmgr.h)\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.198\n> diff -c -r1.198 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t4 Aug 2002 19:48:10 -0000\t1.198\n> --- src/include/nodes/parsenodes.h\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 1402,1407 ****\n> --- 1402,1419 ----\n> } TransactionStmt;\n> \n> /* ----------------------\n> + *\t\tCreate Type Statement, composite types\n> + * ----------------------\n> + */\n> + typedef struct CompositeTypeStmt\n> + {\n> + \tNodeTag\t\ttype;\n> + \tRangeVar *typevar;\t\t/* the composite type to be created */\n> + \tList\t *coldeflist;\t\t/* list of ColumnDef nodes */\n> + } CompositeTypeStmt;\n> + \n> + \n> + /* ----------------------\n> *\t\tCreate View Statement\n> * ----------------------\n> */\n> Index: src/pl/plpgsql/src/pl_comp.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/pl_comp.c,v\n> retrieving revision 1.45\n> diff -c -r1.45 pl_comp.c\n> *** src/pl/plpgsql/src/pl_comp.c\t12 Aug 2002 14:25:07 -0000\t1.45\n> --- src/pl/plpgsql/src/pl_comp.c\t15 Aug 2002 04:02:53 -0000\n> ***************\n> *** 1028,1039 ****\n> \t}\n> \n> \t/*\n> ! \t * It must be a relation, sequence or view\n> \t */\n> \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> \tif (classStruct->relkind != RELKIND_RELATION &&\n> \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> ! 
\t\tclassStruct->relkind != RELKIND_VIEW)\n> \t{\n> \t\tReleaseSysCache(classtup);\n> \t\tpfree(cp[0]);\n> --- 1028,1040 ----\n> \t}\n> \n> \t/*\n> ! \t * It must be a relation, sequence, view, or type\n> \t */\n> \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> \tif (classStruct->relkind != RELKIND_RELATION &&\n> \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> ! \t\tclassStruct->relkind != RELKIND_VIEW &&\n> ! \t\tclassStruct->relkind != RELKIND_COMPOSITE_TYPE)\n> \t{\n> \t\tReleaseSysCache(classtup);\n> \t\tpfree(cp[0]);\n> ***************\n> *** 1145,1154 ****\n> \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> \trelname = NameStr(classStruct->relname);\n> \n> ! \t/* accept relation, sequence, or view pg_class entries */\n> \tif (classStruct->relkind != RELKIND_RELATION &&\n> \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> ! \t\tclassStruct->relkind != RELKIND_VIEW)\n> \t\telog(ERROR, \"%s isn't a table\", relname);\n> \n> \t/*\n> --- 1146,1156 ----\n> \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> \trelname = NameStr(classStruct->relname);\n> \n> ! \t/* accept relation, sequence, view, or type pg_class entries */\n> \tif (classStruct->relkind != RELKIND_RELATION &&\n> \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> ! \t\tclassStruct->relkind != RELKIND_VIEW &&\n> ! 
\t\tclassStruct->relkind != RELKIND_COMPOSITE_TYPE)\n> \t\telog(ERROR, \"%s isn't a table\", relname);\n> \n> \t/*\n> Index: src/test/regress/expected/create_type.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/create_type.out,v\n> retrieving revision 1.4\n> diff -c -r1.4 create_type.out\n> *** src/test/regress/expected/create_type.out\t6 Sep 2001 02:07:42 -0000\t1.4\n> --- src/test/regress/expected/create_type.out\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 37,40 ****\n> --- 37,53 ----\n> zippo | 42\n> (1 row)\n> \n> + -- Test stand-alone composite type\n> + CREATE TYPE default_test_row AS (f1 text_w_default, f2 int42);\n> + CREATE FUNCTION get_default_test() RETURNS SETOF default_test_row AS '\n> + SELECT * FROM default_test;\n> + ' LANGUAGE SQL;\n> + SELECT * FROM get_default_test();\n> + f1 | f2 \n> + -------+----\n> + zippo | 42\n> + (1 row)\n> + \n> + DROP TYPE default_test_row CASCADE;\n> + NOTICE: Drop cascades to function get_default_test()\n> DROP TABLE default_test;\n> Index: src/test/regress/sql/create_type.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/sql/create_type.sql,v\n> retrieving revision 1.4\n> diff -c -r1.4 create_type.sql\n> *** src/test/regress/sql/create_type.sql\t6 Sep 2001 02:07:42 -0000\t1.4\n> --- src/test/regress/sql/create_type.sql\t15 Aug 2002 03:06:23 -0000\n> ***************\n> *** 41,44 ****\n> --- 41,56 ----\n> \n> SELECT * FROM default_test;\n> \n> + -- Test stand-alone composite type\n> + \n> + CREATE TYPE default_test_row AS (f1 text_w_default, f2 int42);\n> + \n> + CREATE FUNCTION get_default_test() RETURNS SETOF default_test_row AS '\n> + SELECT * FROM default_test;\n> + ' LANGUAGE SQL;\n> + \n> + SELECT * FROM get_default_test();\n> + \n> + DROP TYPE default_test_row CASCADE;\n> + \n> DROP TABLE default_test;\n\n-- \n Bruce Momjian | 
http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 15 Aug 2002 12:35:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: stand-alone composite types patch (was [HACKERS] Proposal:" } ]
[ { "msg_contents": "Some weeks ago i wrote about one problem(called as \n\"Bug of PL/pgSQL parser\"):\n\n\"eutm\" <eutm@yandex.ru> writes:\n> Dear Sirs!:)I encounted one small problem,working with\n> PostgreSQL 7.3devel.It can look a\n> bit strange,but i have to use whitespaces in names of\ndatabases,tables,fields\n> and so on(like \"roomno jk\").It's possible to create them all and work\nwith them\n> (INSERT,DELETE,UPDATE),but PL/pgSQL parser(compiler ?) can't execute such\n> statements...\n\nToday i send a simple patch(my first:)).\n Regards,Eugene.", "msg_date": "Tue, 30 Jul 2002 10:41:04 +0400 (MSD)", "msg_from": "\"eutm\" <eutm@yandex.ru>", "msg_from_op": true, "msg_subject": "Patch for \"Bug of PL/pgSQL parser\"" }, { "msg_contents": "\nPatch rejected. Tom Lane pointed out some mistakes in this patch, and\nthe patch does not show any corrections.\n\n---------------------------------------------------------------------------\n\neutm wrote:\n> Some weeks ago i wrote about one problem(called as \n> \"Bug of PL/pgSQL parser\"):\n> \n> \"eutm\" <eutm@yandex.ru> writes:\n> > Dear Sirs!:)I encounted one small problem,working with\n> > PostgreSQL 7.3devel.It can look a\n> > bit strange,but i have to use whitespaces in names of\n> databases,tables,fields\n> > and so on(like \"roomno jk\").It's possible to create them all and work\n> with them\n> > (INSERT,DELETE,UPDATE),but PL/pgSQL parser(compiler ?) can't execute such\n> > statements...\n> \n> Today i send a simple patch(my first:)).\n> Regards,Eugene.\n> \n> \n> \n> \n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Aug 2002 00:23:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for \"Bug of PL/pgSQL parser\"" }, { "msg_contents": "\nThis has been fixed in the current sources. You can test it if you\ndownload the snapshot from our ftp site.\n\n---------------------------------------------------------------------------\n\neutm wrote:\n> Some weeks ago I wrote about one problem (called\n> \"Bug of PL/pgSQL parser\"):\n> \n> \"eutm\" <eutm@yandex.ru> writes:\n> > Dear Sirs! :) I encountered one small problem, working with\n> > PostgreSQL 7.3devel. It can look a\n> > bit strange, but I have to use whitespace in names of\n> > databases, tables, fields\n> > and so on (like \"roomno jk\"). It's possible to create them all and work\n> > with them\n> > (INSERT, DELETE, UPDATE), but the PL/pgSQL parser (compiler?) can't execute such\n> > statements...\n> \n> Today I'm sending a simple patch (my first :)).\n> Regards, Eugene.\n> \n> \n> \n> \n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 13 Aug 2002 22:54:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for \"Bug of PL/pgSQL parser\"" } ]
[ { "msg_contents": "Hi All,\n\nThis is another cut of the DROP COLUMN patch. I have split the drop\nfunction into two parts - and now handle dependencies.\n\nProblems:\n\n1. It cascade deletes objects, but it _always_ cascades, no matter what\nbehaviour I specify. Also, it doesn't give me indications that it's cascade\ndeleted an object.\n\n2. I get this in my regression tests:\n\n+ -- test inheritance\n+ create table parent (a int, b int, c int);\n+ insert into parent values (1, 2, 3);\n+ alter table parent drop a;\n+ create table child (d varchar(255)) inherits (parent);\n+ insert into child values (12, 13, 'testing');\n+ select * from parent;\n+ b | c\n+ ----+----\n+ 2 | 3\n+ 12 | 13\n+ (2 rows)\n+\n+ select * from child;\n+ b | c | d\n+ ----+----+---------\n+ 12 | 13 | testing\n+ (1 row)\n+\n+ alter table parent drop c;\n+ select * from parent;\n+ b\n+ ----\n+ 2\n+ 12\n+ (2 rows)\n+\n+ select * from child;\n+ b | d\n+ ----+---------\n+ 12 | testing\n+ (1 row)\n+\n+ drop table child;\n+ ERROR: RelationForgetRelation: relation 143905 is still open\n+ drop table parent;\n+ NOTICE: table child depends on table parent\n+ ERROR: Cannot drop table parent because other objects depend on it\n+ Use DROP ... CASCADE to drop the dependent objects too\n\n\nWhat's with the RelationForgetRelation error??? Am I not closing some\nhandle somewhere?\n\nLastly, do we want new SearchSysCache functions or not? I'd be more than\nhappy for Tom or someone to take this patch off my hands and polish it off -\nespecially since it's required for proper dependency handling and the beta\ndate is approaching...\n\nChris", "msg_date": "Tue, 30 Jul 2002 14:55:35 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "DROP COLUMN round 4" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> ... 
I'd be more than\n> happy for Tom or someone to take this patch off my hands and polish it off -\n> especially since it's required for proper dependency handling and the beta\n> date is approaching...\n\nI'm out from under CREATE/DROP OPERATOR CLASS now, so I'll take a\nlook...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 18:08:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN round 4 " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> 1. It cascade deletes objects, but it _always_ cascades, no matter what\n> behaviour I specify. Also, it doesn't give me indications that it's cascade\n> deleted an object.\n\nWould you give a specific example?\n\n> + drop table child;\n> + ERROR: RelationForgetRelation: relation 143905 is still open\n\n> What's with the RelationForgetRelation error??? Am I not closing some\n> handle somewhere?\n\nAlterTableDropColumn neglects to heap_close the relation, but I'm\nsurprised that error isn't reported sooner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Jul 2002 18:51:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN round 4 " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > 1. It cascade deletes objects, but it _always_ cascades, no matter what\n> > behaviour I specify. 
Also, it doesn't give me indications that\n> it's cascade\n> > deleted an object.\n>\n> Would you give a specific example?\n\ntest=# create table test (a int4, b int4);\nCREATE TABLE\ntest=# create index temp on test (a);\nCREATE INDEX\ntest=# \\dt\n List of relations\n Name | Schema | Type | Owner\n------+--------+-------+---------\n test | public | table | chriskl\n(1 row)\n\ntest=# \\di\n List of relations\n Name | Schema | Type | Owner | Table\n------+--------+-------+---------+-------\n temp | public | index | chriskl | test\n(1 row)\n\ntest=# alter table test drop a restrict;\nALTER TABLE\ntest=# \\di\nNo relations found.\ntest=#\n\n> > + drop table child;\n> > + ERROR: RelationForgetRelation: relation 143905 is still open\n>\n> > What's with the RelationForgetRelation error??? Am I not closing some\n> > handle somewhere?\n>\n> AlterTableDropColumn neglects to heap_close the relation, but I'm\n> surprised that error isn't reported sooner.\n\nFixed. New diff attached - fixes regression tests as well, plus re-merged\nagainst HEAD.\n\nNote that the check against the parent attribute when adding a foreign key\nprobably should be improved. ie. It relies on the fact that the parent\ncolumn(s) should not have a unique index on them (thanks to dependencies),\nrather than actually checking the attisdropped attribute.\n\nChris", "msg_date": "Wed, 31 Jul 2002 13:29:24 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] DROP COLUMN round 4 " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> 1. It cascade deletes objects, but it _always_ cascades, no matter what\n> behaviour I specify. Also, it doesn't give me indications that\n> it's cascade deleted an object.\n>> \n>> Would you give a specific example?\n\n> [ dropping a column silently drops an index on the column ]\n\nThis happens because indexes are marked DEPENDENCY_AUTO on their\ncolumns. 
The drop will cascade to the index even under RESTRICT;\nif you have message level set to DEBUG1 or higher you'll be told\nabout it, but otherwise the behavior is to zap the index quietly.\n\nAn example of a case where RESTRICT should make a difference is\nwhere you have a foreign key reference from another table, or\na view that uses the column you're trying to delete.\n\nWe can debate whether indexes ought to be AUTO dependencies or not.\nIt seems reasonable to me though. Indexes aren't really independent\nobjects to my mind, only auxiliary thingummies...\n\n>> AlterTableDropColumn neglects to heap_close the relation, but I'm\n>> surprised that error isn't reported sooner.\n\n> Fixed. New diff attached - fixes regression tests as well, plus re-merged\n> against HEAD.\n\nThanks, will work from this.\n\n> Note that the check against the parent attribute when adding a foreign key\n> probably should be improved. ie. It relies on the fact that the parent\n> column(s) should not have a unique index on them (thanks to dependencies),\n> rather than actually checking the attisdropped attribute.\n\nUh ... maybe it's too late at night here, but I didn't follow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Jul 2002 01:44:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROP COLUMN round 4 " }, { "msg_contents": "> This happens because indexes are marked DEPENDENCY_AUTO on their\n> columns. The drop will cascade to the index even under RESTRICT;\n> if you have message level set to DEBUG1 or higher you'll be told\n> about it, but otherwise the behavior is to zap the index quietly.\n\nAh doh. I knew that as well. I'll try a view or something then...\n\n> > Note that the check against the parent attribute when adding a\n> foreign key\n> > probably should be improved. ie. 
It relies on the fact that the parent\n> > column(s) should not have a unique index on them (thanks to\n> dependencies),\n> > rather than actually checking the attisdropped attribute.\n\nOK, in this statement:\n\nALTER TABLE child ADD FOREIGN KEY (a) REFERENCES parent(b);\n\nIt does not check that column b is not dropped (it does check \"a\"). It just\nrelies on the UNIQUE constraint not being present on b. This should\nprobably be fixed...\n\nChris\n\n", "msg_date": "Wed, 31 Jul 2002 14:27:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] DROP COLUMN round 4 " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Fixed. New diff attached - fixes regression tests as well, plus re-merged\n> against HEAD.\n\nI've applied this patch after some editorializing. Notably, I went\nahead and created syscache lookup convenience routines\n\nextern HeapTuple SearchSysCacheAttName(Oid relid, const char *attname);\nextern HeapTuple SearchSysCacheCopyAttName(Oid relid, const char *attname);\nextern bool SearchSysCacheExistsAttName(Oid relid, const char *attname);\n\nThese are equivalent to the corresponding basic syscache routines for\nthe ATTNAME cache, except that they will release the cache entry and\nreturn NULL (or false, etc) if the entry exists but has attisdropped =\ntrue.\n\nEssentially all uses of the ATTNAME cache should now go through these\nroutines instead. 
This cleans up a number of cases in which the patch\nas-submitted produced rather bogus error messages.\n\nI did not adopt the approach of inserting NULLs in eref lists, because\nI'm afraid that will break code elsewhere that relies on eref lists.\nInstead I added a post-facto check to scanRTEforColumn, which is simple\nand reliable.\n\nAlso, I'm still unconvinced that there's any percentage in modifying the\nname of a deleted column to avoid collisions, since it does nothing for\nafter-the-fact collisions; AFAICT the net result is just to *enlarge*\nthe set of column names that users have to avoid. So I took that code\nout. A possible solution that would actually work is to replace\npg_attribute's unique index on (attrelid, attname) with one on\n(attrelid, attname, attisdropped). However, I doubt that the problem is\nworth the overhead that that would add. I'm inclined to leave the code\nas it stands and just document that column names like\n\"........pg.dropped.N........\" are best avoided.\n\n\nThere are still issues about functions that accept or return tuple\ndatatypes --- a DROP COLUMN is likely to break any functions that use\nthe table's tuple datatype. Related problems occur already with ADD\nCOLUMN, though, so I did not think this was a reason to reject the\npatch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Aug 2002 14:39:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROP COLUMN round 4 " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> OK, in this statement:\n\n> ALTER TABLE child ADD FOREIGN KEY (a) REFERENCES parent(b);\n\n> It does not check that column b is not dropped (it does check \"a\"). It just\n> relies on the UNIQUE constraint not being present on b. 
This should\n> probably be fixed...\n\nThis is not really a case of it \"relies on\" that, it's just that the\ncheck for a key is done before trying to resolve the column names to\nnumbers, so you get that error first. I'm not sure it's worth trying to\nrearrange the code in analyze.c to avoid that. In the self-referential\ncase,\n\n\tcreate table foo (id int primary key,\n\t\t\t parent int references foo);\n\nyou more or less have to work with column names not numbers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Aug 2002 14:45:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROP COLUMN round 4 " } ]
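[Editor's sketch] The DROP COLUMN thread above describes a logical-delete scheme: a dropped column keeps its pg_attribute entry but is flagged attisdropped and renamed to the reserved placeholder, and the new SearchSysCacheAttName routines make such entries invisible to name lookups. A toy Python model of that discipline follows; the class and helper names are invented for illustration (this is not the backend's C code), and only the placeholder name format is quoted from Tom's message.

```python
# Toy model of logical column deletion: drop marks the entry and
# renames it; name-based lookup refuses to see dropped entries.

class Attribute:
    def __init__(self, attnum, attname):
        self.attnum = attnum
        self.attname = attname
        self.attisdropped = False

def drop_column(attrs, name):
    for a in attrs:
        if not a.attisdropped and a.attname == name:
            a.attisdropped = True
            # placeholder format quoted from the message above
            a.attname = "........pg.dropped.%d........" % a.attnum
            return
    raise KeyError(name)

def search_attname(attrs, name):
    # Analogue of SearchSysCacheAttName: a dropped entry behaves as "not found".
    for a in attrs:
        if not a.attisdropped and a.attname == name:
            return a
    return None

attrs = [Attribute(1, "id"), Attribute(2, "junk")]
drop_column(attrs, "junk")
```

Note how this also motivates Tom's point about the unique index: since the renamed placeholder still occupies a (attrelid, attname) slot, a user column with that exact name would collide, hence the advice to avoid "........pg.dropped.N........" names.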
[ { "msg_contents": "On Tue, 30 Jul 2002, Andrew Sullivan wrote:\n\n> On Tue, Jul 30, 2002 at 12:43:52AM -0300, Marc G. Fournier wrote:\n>\n> > since as soon as there are two 'bruce' users, only one can have a password\n>\n> I guess I don't understand why that's a problem. I mean, if you're\n> authenticating users, how can you have two with the same name? It's\n> just like UNIX usernames, to my mind: they have to be unique on the\n> system, no?\n\nI think that is the problem with everyone's \"thinking\" ... they are only\ndealing with 'small servers', where it only has a couple of databases ...\nI'm currently running a server with >100 domains on it, each one with *at\nleast* one database ... each one of those domains, in reality, *could*\nhave a user 'bruce' ...\n\nnote that I run virtual machines ... so each one of those 'domains' has\ntheir own password files, so I can't say to 'client A' that 'client B'\nalready has user 'bruce', so you can't use it, even though it's unique to\nyour system ...\n\nAnd, I don't want to run 100 pgsql instances on the server, since either\nI'd have to have one helluva lot of RAM dedicated to PgSQL, or have little\ntiny shared memory segments available to each ...\n\nactually, let's add onto that ... let's say every one of those 100 pgsql\ndatabases is accessed by PHPPgAdmin, through the web ... so, with a\n'common password' amongst all the various 'bruce's, I could, in theory, go\nto any other domain's PHPPgAdmin, login and see their databases (major\nsecurity problem) ... the way it was before, I could setup a password file\nthat contained a different password for each of those domains, so that\nbruce on domain 1 couldn't access domain 2's databases ... or vice versa\n...\n\nI've CC'd this back into the list, mainly because I think others might be\n'thinking within the box' on this :(\n\n", "msg_date": "Tue, 30 Jul 2002 11:55:55 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "> ... amongst all the various 'bruce's...\n\nHmm. The \"Monty Python scenario\"? :)\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 08:19:29 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "Marc G. Fournier wrote:\n> I think that is the problem with everyone's \"thinking\" ... they are only\n> dealing with 'small servers', where it only has a couple of databases ...\n> I'm currently running a server with >100 domains on it, each one with *at\n> least* one database ... each one of those domains, in reality, *could*\n> have a user 'bruce' ...\n> \n> note that I run virtual machines ... so each one fo those 'domains' has\n> their own password files, so I can't say to 'client A' that 'client B'\n> already has user 'bruce', so you can't use it, even though its unique to\n> your system ...\n> \n> And, I don't want to run 100 pgsql instances on the server, since either\n> I'd have to have one helluva lot of RAM dedicated to PgSQL, or have little\n> tiny shared memory segments available to each ...\n> \n> actually, let's add onto that ... let's say every one of those 100 pgsql\n> databases is accessed by PHPPgAdmin, through the web ... so, with a\n> 'common password' amongst all the various 'bruce's, I could, in theory, go\n> to any other domain's PHPPgAdmin, login and see their databases (major\n> security problem) ... the way it was before, I could setup a password file\n> that contained a different password for each of those domains, so that\n> bruce on domain 1 couldn't access domain 2's databases ... or vice versa\n> ...\n> \n> I've CC'd this back into the list, mainly because I think others might be\n> 'thinking within the box' on this :(\n\nHow hard would it be to do something like this:\n\n1. Add a column called usedatid to pg_shadow. 
This would contain an \narray of database oids to which a user is bound. Use the value 0 to mean \n\"all databases\".\n\n2. Remove unique index on usename (we always know which database a user \nis logging in to, don't we?). Change unique index on usesysid to be over \nboth usesysid and usedatid.\n\n3. Add sufficient grammar to support specifying a specific database when \ncreating a user. Default to all databases for BC. Add ability to bind to \nadditional databases in ALTER USER.\n\nJust trying to think outside the box ;-)\n\nJoe\n\n", "msg_date": "Tue, 30 Jul 2002 08:23:15 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Tue, Jul 30, 2002 at 11:55:55AM -0300, Marc G. Fournier wrote:\n> I think that is the problem with everyone's \"thinking\" ... they are only\n> dealing with 'small servers', where it only has a couple of databases ...\n> I'm currently running a server with >100 domains on it, each one with *at\n> least* one database ... each one of those domains, in reality, *could*\n> have a user 'bruce' ...\n\nFirst off, I think the implementation of this functionality present in 7.2\nwas a big hack, and I'd rather not see it resurrected.\n\nHowever, it would be useful to be able to do something like this -- how\nabout something like the following:\n\n - the auth system contains a list of 'auth domains' -- an identifier\n   similar to a schema name\n\n - the combination of (domain, username) must be unique -- i.e. 
a\n username is unique within a domain\n\n - each database exists within a single domain; a domain can have 0,\n 1, or many databases\n\n - by default, the system ships with a single auth domain; when a\n user is created, the admin can specify the domain in which the\n user exists, otherwise it defaults to the default domain\n\nAnyway, just thinking out loud -- that may or may not make any sense...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Tue, 30 Jul 2002 11:23:17 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." }, { "msg_contents": "On Tue, 2002-07-30 at 16:55, Marc G. Fournier wrote:\n> On Tue, 30 Jul 2002, Andrew Sullivan wrote:\n> \n> > On Tue, Jul 30, 2002 at 12:43:52AM -0300, Marc G. Fournier wrote:\n> >\n> > > since as soon as there are two 'bruce' users, only one can have a password\n> >\n> > I guess I don't understand why that's a problem. I mean, if you're\n> > authenticating users, how can you have two with the same name? It's\n> > just like UNIX usernames, to my mind: they have to be unique on the\n> > system, no?\n> \n> I think that is the problem with everyone's \"thinking\" ... they are only\n> dealing with 'small servers', where it only has a couple of databases ...\n> I'm currently running a server with >100 domains on it, each one with *at\n> least* one database ... each one of those domains, in reality, *could*\n> have a user 'bruce' ...\n> \n> note that I run virtual machines ... 
so each one of those 'domains' has\n> their own password files, so I can't say to 'client A' that 'client B'\n> already has user 'bruce', so you can't use it, even though it's unique to\n> your system ...\n\nBut if they are _really_ virtual machines then you can probably\ndistinguish them by IP as was discussed earlier.\n\nOr you can declare each virtual machine to be its own \"domain\" and name\ndb users user@domain (or //domain/user if you are inclined that way ;).\nboth of these names are accepted by postgres as valid usernames.\n\nI guess you must be doing something like that with their e-mail addresses\nalready ;)\n\n> And, I don't want to run 100 pgsql instances on the server, since either\n> I'd have to have one helluva lot of RAM dedicated to PgSQL, or have little\n> tiny shared memory segments available to each ...\n> \n> actually, let's add onto that ... let's say every one of those 100 pgsql\n> databases is accessed by PHPPgAdmin, through the web ... so, with a\n> 'common password' amongst all the various 'bruce's, I could, in theory, go\n> to any other domain's PHPPgAdmin, login and see their databases (major\n> security problem) ...\n\nBugzilla resolves the problem of \"many bruces\" by having e-mail address\nas a globally unique username.\n\n> the way it was before, I could setup a password file\n> that contained a different password for each of those domains, so that\n> bruce on domain 1 couldn't access domain 2's databases ... or vice versa\n> ...\n> \n> I've CC'd this back into the list, mainly because I think others might be\n> 'thinking within the box' on this :(\n\nOtoh, thinking that distinguishing users by password is a good idea can\nalso be considered 'thinking within the box' by some ;)\n\n--------------------\nHannu\n\n\n\n\n\n\n\n\n", "msg_date": "30 Jul 2002 18:31:14 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." 
}, { "msg_contents": "Neil Conway writes:\n\n> However, it would be useful to be able to do something like this -- how\n> about something like the following:\n>\n> - the auth system contains a list of 'auth domains' -- an identifier\n> similar to a schema name\n>\n> - the combination of (domain, username) must be unique -- i.e. a\n> username is unique within a domain\n>\n> - each database exists within a single domain; a domain can have 0,\n> 1, or many databases\n>\n> - by default, the system ships with a single auth domain; when a\n> user is created, the admin can specify the domain in which the\n> user exists, otherwise it defaults to the default domain\n>\n> Anyway, just thinking out loud -- that may or may not make any sense...\n\nActually, I was thinking just about the same thing. Essentially you're\nproposing virtual hosting, where \"domain\" is the same thing as a virtual\nhost URI. Somewhere you'd need a configuration file that maps request\nparameters (host and port, basically) to a domain (not sure if I'd use\nthat name, though). I like it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 30 Jul 2002 23:42:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Password sub-process ..." } ]
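[Editor's sketch] The 'auth domain' idea Neil and Peter discuss above was only a proposal, but its core invariant, usernames unique per (domain, username) pair rather than per cluster, is easy to sketch. Everything below (class name, methods, the default-domain convention) is invented for illustration and does not correspond to any actual server code.

```python
# Sketch of per-domain user uniqueness: two tenants can each have
# their own 'bruce' with independent passwords, addressing Marc's
# PHPPgAdmin cross-tenant login concern.

class AuthCatalog:
    DEFAULT_DOMAIN = "default"

    def __init__(self):
        self.users = {}  # (domain, username) -> password

    def create_user(self, username, password, domain=DEFAULT_DOMAIN):
        key = (domain, username)
        if key in self.users:
            raise ValueError("user %r already exists in domain %r" % (username, domain))
        self.users[key] = password

    def authenticate(self, username, password, domain=DEFAULT_DOMAIN):
        # A valid password for one domain is useless in another.
        return self.users.get((domain, username)) == password

cat = AuthCatalog()
cat.create_user("bruce", "pw1", domain="clientA")
cat.create_user("bruce", "pw2", domain="clientB")
```

With a mapping from connection parameters (host/port, as Peter suggests) to a domain, bruce@clientA could never log in to clientB's databases with his clientA password.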
[ { "msg_contents": "I have implemented the ROW keyword, but am not sure that I've gotten\nwhat the spec intends to be the full scope of functionality. It may be\nthat I've missed the main point completely :)\n\nIf someone has the time and interest I'd appreciate it if they would go\nthrough the SQL99 spec and see what they deduce about the ROW keyword\nand the underlying functionality which goes along with it. It seemed to\nenable (at least) having an explicit keyword to introduce row clauses\n(thus eliminating the ambiguity between a single-column row and a\nparenthesized expression) and that is what I implemented. But I get\nhints from reading the spec that there may be more involved, including\n(perhaps) something like name/value pairs in a row expression.\n\nafaict the spec is not at all verbose about this, and is very dense and\nobtuse where it does discuss it. So more pairs of eyes would be greatly\nappreciated...\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 08:19:02 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "ROW features" }, { "msg_contents": "Thomas Lockhart wrote:\n> I have implemented the ROW keyword, but am not sure that I've gotten\n> what the spec intends to be the full scope of functionality. It may be\n> that I've missed the main point completely :)\n> \n> If someone has the time and interest I'd appreciate it if they would go\n> through the SQL99 spec and see what they deduce about the ROW keyword\n> and the underlying functionality which goes along with it. It seemed to\n> enable (at least) having an explicit keyword to introduce row clauses\n> (thus eliminating the ambiguity between a single-column row and a\n> parenthesized expression) and that is what I implemented. 
But I get\n> hints from reading the spec that there may be more involved, including\n> (perhaps) something like name/value pairs in a row expression.\n> \n> afaict the spec is not at all verbose about this, and is very dense and\n> obtuse where it does discuss it. So more pairs of eyes would be greatly\n> appreciated...\n\nObtuse indeed!\n\nUnder 6 Scalar expressions -> 6.1 <data type> I see (my take):\n ROW ( column_name data_type [,...] )\nand under 7 Query expressions -> 7.1 <row value constructor>:\n ROW ( value_expression [,...] )\n\nCan you send examples of how these would be used?\n\nIt seems this is related to the RECORD pseudo type patch I just \nsubmitted (see:\n http://archives.postgresql.org/pgsql-patches/2002-07/msg00286.php\n) and the CREATE composite type proposal I sent in last night (no link \nin the archives yet), but it isn't clear to if or how anything would \nneed to be changed.\n\nThanks,\n\nJoe\n\n", "msg_date": "Tue, 30 Jul 2002 09:52:17 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: ROW features" }, { "msg_contents": "> > I have implemented the ROW keyword, but am not sure that I've gotten\n> > what the spec intends to be the full scope of functionality. It may be\n> > that I've missed the main point completely :)\n...\n> > afaict the spec is not at all verbose about this, and is very dense and\n> > obtuse where it does discuss it. So more pairs of eyes would be greatly\n> > appreciated...\n> Obtuse indeed!\n> Under 6 Scalar expressions -> 6.1 <data type> I see (my take):\n> ROW ( column_name data_type [,...] )\n> and under 7 Query expressions -> 7.1 <row value constructor>:\n> ROW ( value_expression [,...] )\n> Can you send examples of how these would be used?\n\nNo! :)\n\nOK, \"section 7\" <row value constructor> is what I implemented. 
afaict\nthat allows things like\n\nSELECT ROW (1) IN (SELECT A FROM T);\n\nwhich would otherwise be disallowed in our grammar because\n\nSELECT (1) IN ...\n\nis ambiguous as to whether it is a single-column row or a parenthesized\nexpression.\n\nThe other usage for \"section 6\" <data type> is pretty obscure to me, but\nI *think* it has to do with RECORD types or UDTs built using SQL99\ncommands and consisting of structures of other types (I forget the\nterminology; is that a RECORD type?).\n\nI didn't implement the latter stuff, but it looks like we can get to it\nlater. When I was looking recently that's all I noticed and was worried\nthat I'd completely missed the boat...\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 16:28:29 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: ROW features" } ]
[ { "msg_contents": "I've got patches to adjust the interpretation of hex literals from an\ninteger type (which is how I implemented it years ago to support the\n*syntax*) to a bit string type. I've mentioned this in a previous\nthread, and am following up now.\n\nOne point raised previously is that the spec may not be clear about the\ncorrect type assignment for a hex constant. I believe that the spec is\nclear on this (well, not really, but as clear as SQL99 manages to get ;)\nand that the correct assignment is to bit string (as opposed to a large\nobject or some other alternative).\n\nI base this on at least one part of the standard, which is a clause in\nthe restrictions on the BIT feature (which we already support):\n\n 31) Specifications for Feature F511, \"BIT data type\":\n a) Subclause 5.3, \"<literal>\":\n i) Without Feature F511, \"BIT data type\", a <general literal>\n shall not be a <bit string literal> or a <hex string\n literal>.\n\nThis seems to be a hard linkage of hex strings with the BIT type.\n\nComments or concerns?\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 08:33:14 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Hex literals" }, { "msg_contents": "Oh, I've also implemented int8 to/from bit conversions, which was a\ntrivial addition/modification to the int4 support already there...\n\n - Thomas\n", "msg_date": "Tue, 30 Jul 2002 08:43:26 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Hex literals" }, { "msg_contents": "Thomas Lockhart wrote:\n> I've got patches to adjust the interpretation of hex literals from an\n> integer type (which is how I implemented it years ago to support the\n> *syntax*) to a bit string type. I've mentioned this in a previous\n> thread, and am following up now.\n> \n> One point raised previously is that the spec may not be clear about the\n> correct type assignment for a hex constant. 
I believe that the spec is\n> clear on this (well, not really, but as clear as SQL99 manages to get ;)\n> and that the correct assignment is to bit string (as opposed to a large\n> object or some other alternative).\n> \n> I base this on at least one part of the standard, which is a clause in\n> the restrictions on the BIT feature (which we already support):\n> \n>    31) Specifications for Feature F511, \"BIT data type\":\n>         a) Subclause 5.3, \"<literal>\":\n>            i) Without Feature F511, \"BIT data type\", a <general literal>\n>            shall not be a <bit string literal> or a <hex string\n>            literal>.\n> \n> This seems to be a hard linkage of hex strings with the BIT type.\n> \n> Comments or concerns?\n> \n\nMy reading of this was that if there are pairs of <hexit>s, then \nassignment can be to <hex string literal> *or* <binary string literal>, \nbut if there are not pairs (i.e. an odd number of <hexit>s) the \ninterpretation must be <hex string literal>. I base this on section 5.3 \n<literal>. Peter was the one who pointed this out earlier.\n\nCan BIT be the default but BYTEA be allowed by explicit cast?\n\nJoe\n\n", "msg_date": "Tue, 30 Jul 2002 09:59:09 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Hex literals" }, { "msg_contents": "Thomas Lockhart writes:\n\n>    31) Specifications for Feature F511, \"BIT data type\":\n>         a) Subclause 5.3, \"<literal>\":\n>            i) Without Feature F511, \"BIT data type\", a <general literal>\n>            shall not be a <bit string literal> or a <hex string\n>            literal>.\n>\n> This seems to be a hard linkage of hex strings with the BIT type.\n\nYou'll also find in 5.3 Conformance Rule 9)\n\n         9) Without Feature T041, \"Basic LOB data type support\", conforming\n            Core SQL language shall not contain any <binary string literal>.\n\nwhich is an equally solid linkage.\n\nI might also add that the rules concerning the absence of a feature do not\ndetermine what happens in presence of a feature. 
;-)\n\nLet's think: We could send a formal interpretation request to the\nstandards committee. (They might argue that there is no ambiguity,\nbecause the target type is always known.) Or we could check what other\ndatabase systems do.\n\nIn any case, I'd rather create a readable syntax for blob'ish types (which\nthe current bytea input format does not qualify for) rather than mapping\nhexadecimal input to bit types, which is idiosyncratic.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 30 Jul 2002 23:42:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Hex literals" } ]
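[Editor's sketch] For reference, the SQL99 reading Thomas argues for maps each &lt;hexit&gt; to four bits, so X'ABCD' denotes the same bit string as B'1010101111001101'. A small sketch of that expansion follows; it is illustrative only, since whether PostgreSQL should type such literals as BIT or as a blob'ish type is exactly what is under debate in this thread.

```python
# Expand a hex literal body into its SQL99 bit-string equivalent:
# each hexit contributes exactly four bits, most significant first.

def hex_literal_to_bits(hexits):
    return "".join(format(int(h, 16), "04b") for h in hexits)
```

An odd number of hexits (Joe's pairing point) still yields a well-defined bit string, just one whose length is not a multiple of eight, e.g. X'F' gives the four bits 1111.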