[ { "msg_contents": "Hi,\n\nIs it me (who hasn't read some FAQ or a doc/man page) or\nit's a bug in the psql interactive terminal?\n\nA sample session is provided at the bottom. I just typed\na simple CREATE TABLE command and did not put closing\nparenthesis (I was typing too fast); I did put a semicolon, however.\npsql gave me no error message whatsoever and accepted whatever\ninput afterwards and ignored it with the exception of \\commands.\n\nWas this reported? Do you need some other info?\nLogs?\n\no I have a RHL7.1 and the tarball of 7.2b3 downloaded\n from the website.\n\no # ./configure --enable-nls --enable-multibyte --enable-locale --enable-debug --enable-cassert\n\nHere is the session:\n\n[regress72b3@gunn regress72b3]$ /usr/local/pgsql/bin/psql test\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntest=# create table test2(id serial;\ntest(# select version();\ntest(# ?\ntest(# \\?\n \\a toggle between unaligned and aligned output mode\n \\c[onnect] [DBNAME|- [USER]]\n connect to new database (currently \"test\")\n \\C TITLE set table title\n \\cd [DIRNAME] change the current working directory\n \\copy ... perform SQL COPY with data stream to the client host\n \\copyright show PostgreSQL usage and distribution terms\n \\d TABLE describe table (or view, index, sequence)\n \\d{t|i|s|v}... 
list tables/indexes/sequences/views\n \\d{p|S|l} list access privileges, system tables, or large objects\n \\da list aggregate functions\n \\dd NAME show comment for table, type, function, or operator\n \\df list functions\n \\do list operators\n \\dT list data types\n \\e FILENAME edit the current query buffer or file with external editor\n \\echo TEXT write text to standard output\n \\encoding ENCODING set client encoding\n \\f STRING set field separator\n \\g FILENAME send SQL command to server (and write results to file or |pipe)\n \\h NAME help on syntax of SQL commands, * for all commands\n \\H toggle HTML output mode (currently off)\n \\i FILENAME execute commands from file\n \\l list all databases\n \\lo_export, \\lo_import, \\lo_list, \\lo_unlink\n large object operations\n \\o FILENAME send all query results to file or |pipe\n \\p show the content of the current query buffer\n \\pset VAR set table output option (VAR := {format|border|expanded|\n fieldsep|null|recordsep|tuples_only|title|tableattr|pager})\n \\q quit psql\n \\qecho TEXT write text to query output stream (see \\o)\n \\r reset (clear) the query buffer\n \\s FILENAME print history or save it to file\n \\set NAME VALUE set internal variable\n \\t show only rows (currently off)\n \\T TEXT set HTML table tag attributes\n \\unset NAME unset (delete) internal variable\n \\w FILENAME write current query buffer to file\n \\x toggle expanded output (currently off)\n \\z list table access privileges\n \\! [COMMAND] execute command in shell or start interactive shell\ntest-# select version();\nERROR: parser: parse error at or near \";\"\ntest=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.2b3 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n \ntest=#\n\n--\nSerguei A. 
Mokhov\n \n\n", "msg_date": "Sun, 2 Dec 2001 01:58:10 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "psql misbehaves because of a simple typo" }, { "msg_contents": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> Is it me (who hasn't read some FAQ or a doc/man page) or\n> it's a bug in the psql interactive terminal?\n\nBoth. There's a bug there, but it's not the one you think.\npsql seems to forget that it's got unmatched parentheses in the\nbuffer after executing a \\? command. Watch the prompt:\n\nregression=# (select\nregression(# \\?\n ... yadda yadda ...\nregression-# 2;\nERROR: parser: parse error at or near \";\"\nregression=# (select\nregression(# \\?\n ... yadda yadda ...\nregression-# 2);\n ?column?\n----------\n 2\n(1 row)\n\nIn the first example, it should not have thought that it had\na complete command after \"2;\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Dec 2001 10:16:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo " }, { "msg_contents": "----- Original Message ----- \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Sunday, December 02, 2001 10:16 AM\n\n> \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> > Is it me (who hasn't read some FAQ or a doc/man page) or\n> > it's a bug in the psql interactive terminal?\n> \n> Both. \n\nOops.\n\n> There's a bug there, but it's not the one you think.\n\nNo, that's what I thought.\n\n> psql seems to forget that it's got unmatched parentheses in the\n> buffer after executing a \\? command. Watch the prompt:\n\n... and that's what I've witnessed.\n\nSorry for the 'false' alarm. 
This feature just bumped\ninto me unexpectedly.\n\n-s\n\n", "msg_date": "Tue, 4 Dec 2001 01:02:46 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Re: psql misbehaves because of a simple typo " }, { "msg_contents": "> \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> > Is it me (who hasn't read some FAQ or a doc/man page) or\n> > it's a bug in the psql interactive terminal?\n> \n> Both. There's a bug there, but it's not the one you think.\n> psql seems to forget that it's got unmatched parentheses in the\n> buffer after executing a \\? command. Watch the prompt:\n> \n> regression=# (select\n> regression(# \\?\n> ... yadda yadda ...\n> regression-# 2;\n> ERROR: parser: parse error at or near \";\"\n> regression=# (select\n> regression(# \\?\n> ... yadda yadda ...\n> regression-# 2);\n> ?column?\n> ----------\n> 2\n> (1 row)\n> \n> In the first example, it should not have thought that it had\n> a complete command after \"2;\".\n\nThe actual code that resets the paren level is:\n\n\t/* backslash command */\n\telse if (was_bslash)\n\t{\n\t\tconst char *end_of_cmd = NULL;\n\n\t\tparen_level = 0;\n\t\tline[i - prevlen] = '\\0'; /* overwrites backslash */\n\nI believe this code is good. If someone issues a backslash command,\nthey probably want to get out of they paren nesting, or may not even\nknow they are in paren nesting, as Serguei did not in the example shown.\n\nWhile it doesn't seem logical, it does help prevent people from getting\nstuck in psql. The fact that backslash commands inside parens clear the\ncounter is a minor anoyance but resonable behavior.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Dec 2001 23:00:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The fact that backslash commands inside parens clear the\n> counter is a minor anoyance but resonable behavior.\n\nIf they cleared the command buffer too, it might be construed\nas novice-friendly behavior (though I'd still call it broken).\nHowever, letting the counter get out of sync with the buffer\ncontents cannot be called anything but a bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Dec 2001 23:11:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The fact that backslash commands inside parens clear the\n> > counter is a minor anoyance but resonable behavior.\n> \n> If they cleared the command buffer too, it might be construed\n> as novice-friendly behavior (though I'd still call it broken).\n> However, letting the counter get out of sync with the buffer\n> contents cannot be called anything but a bug.\n\nOK, so what do we want to do? Clearing the buffer on a any backslash\ncommand is clearly not what we want to do. Should we clear the buffer\non a backslash command _only_ if the number of paren's is not even? If\nwe don't clear the counter on a backslash command with uneven parens, do\nwe risk trapping people in psql?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Dec 2001 23:20:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The fact that backslash commands inside parens clear the\n> > counter is a minor anoyance but resonable behavior.\n> \n> If they cleared the command buffer too, it might be construed\n> as novice-friendly behavior (though I'd still call it broken).\n> However, letting the counter get out of sync with the buffer\n> contents cannot be called anything but a bug.\n\nMaybe we should clear the counter _only_ if the user clears the buffer,\nor sends the query to the backend with \\g.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Dec 2001 23:22:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, so what do we want to do? Clearing the buffer on a any backslash\n> command is clearly not what we want to do. Should we clear the buffer\n> on a backslash command _only_ if the number of paren's is not even? If\n> we don't clear the counter on a backslash command with uneven parens, do\n> we risk trapping people in psql?\n\n\"Trap\"? AFAIK \\q works in any case.\n\n\\r should reset both the buffer and the counter, and seems to do so,\nthough I'm not quite seeing where it manages to accomplish the latter\n(command.c only resets query_buf). \\e should probably provoke a\nrecomputation of paren_level after the edit occurs. 
Offhand I do not\nthink that any other backslash commands should reset either the buffer\nor the counter. Peter, your thoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Dec 2001 23:33:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, so what do we want to do? Clearing the buffer on a any backslash\n> > command is clearly not what we want to do. Should we clear the buffer\n> > on a backslash command _only_ if the number of paren's is not even? If\n> > we don't clear the counter on a backslash command with uneven parens, do\n> > we risk trapping people in psql?\n> \n> \"Trap\"? AFAIK \\q works in any case.\n> \n> \\r should reset both the buffer and the counter, and seems to do so,\n> though I'm not quite seeing where it manages to accomplish the latter\n> (command.c only resets query_buf). \\e should probably provoke a\n\nSee mainloop.c, line 450. Any backshash command resets the counter.\n\n> recomputation of paren_level after the edit occurs. Offhand I do not\n> think that any other backslash commands should reset either the buffer\n> or the counter. Peter, your thoughts?\n\nRe-doing it for the editor is interesting. The other items like quotes\nand stuff should also probably be recomputed too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Dec 2001 23:35:13 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, so what do we want to do? Clearing the buffer on a any backslash\n> > command is clearly not what we want to do. 
Should we clear the buffer\n> > on a backslash command _only_ if the number of paren's is not even? If\n> > we don't clear the counter on a backslash command with uneven parens, do\n> > we risk trapping people in psql?\n> \n> \"Trap\"? AFAIK \\q works in any case.\n> \n> \\r should reset both the buffer and the counter, and seems to do so,\n> though I'm not quite seeing where it manages to accomplish the latter\n> (command.c only resets query_buf). \\e should probably provoke a\n> recomputation of paren_level after the edit occurs. Offhand I do not\n> think that any other backslash commands should reset either the buffer\n> or the counter. Peter, your thoughts?\n\nOK, here is a patch for 7.3. It clears the paren counter only when the\nbuffer is cleared. Forget what I said about recomputing quotes. You\ncan't use backslash commands while you are in quotes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/bin/psql/mainloop.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/psql/mainloop.c,v\nretrieving revision 1.43\ndiff -c -r1.43 mainloop.c\n*** src/bin/psql/mainloop.c\t2001/11/05 17:46:31\t1.43\n--- src/bin/psql/mainloop.c\t2001/12/28 04:41:48\n***************\n*** 447,453 ****\n \t\t\t{\n \t\t\t\tconst char *end_of_cmd = NULL;\n \n- \t\t\t\tparen_level = 0;\n \t\t\t\tline[i - prevlen] = '\\0';\t\t/* overwrites backslash */\n \n \t\t\t\t/* is there anything else on the line for the command? 
*/\n--- 447,452 ----\n***************\n*** 469,474 ****\n--- 468,476 ----\n \t\t\t\t\t\t\t\t\t\t\t\t &end_of_cmd);\n \n \t\t\t\tsuccess = slashCmdStatus != CMD_ERROR;\n+ \n+ \t\t\t\tif (query_buf->len == 0)\n+ \t\t\t\t\tparen_level = 0;\n \n \t\t\t\tif ((slashCmdStatus == CMD_SEND || slashCmdStatus == CMD_NEWEDIT) &&\n \t\t\t\t\tquery_buf->len == 0)", "msg_date": "Thu, 27 Dec 2001 23:42:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" }, { "msg_contents": "\n--- Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n\n[...]\n> See mainloop.c, line 450. Any backshash command resets the counter.\n> \n> > recomputation of paren_level after the edit occurs. Offhand I do not\n> > think that any other backslash commands should reset either the buffer\n> > or the counter. Peter, your thoughts?\n> \n> Re-doing it for the editor is interesting. The other items like quotes\n> and stuff should also probably be recomputed too.\n\nWhooof! Looks like psql's sitting on a quite big can of worms.\nYou don't plan to take them all out before 7.2, do you?\n\n\n\n=====\n--\nSerguei A. Mokhov <mailto:mokhov@cs.concordia.ca>\n\n__________________________________________________\nDo You Yahoo!?\nSend your FREE holiday greetings online!\nhttp://greetings.yahoo.com\n", "msg_date": "Thu, 27 Dec 2001 20:52:58 -0800 (PST)", "msg_from": "Serguei Mokhov <serguei_mokhov@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" }, { "msg_contents": "> \n> --- Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> \n> [...]\n> > See mainloop.c, line 450. Any backshash command resets the counter.\n> > \n> > > recomputation of paren_level after the edit occurs. Offhand I do not\n> > > think that any other backslash commands should reset either the buffer\n> > > or the counter. Peter, your thoughts?\n> > \n> > Re-doing it for the editor is interesting. 
The other items like quotes\n> > and stuff should also probably be recomputed too.\n> \n> Whooof! Looks like psql's sitting on a quite big can of worms.\n> You don't plan to take them all out before 7.2, do you?\n\nNo, this is all 7.3 discussion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Dec 2001 23:53:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, here is a patch for 7.3. It clears the paren counter only when the\n> buffer is cleared. Forget what I said about recomputing quotes. You\n> can't use backslash commands while you are in quotes.\n\nI don't think this works when the command is \\g because you're testing for\nthe cleared buffer too early. Look at what happens under \"if\n(slashCmdStatus == CMD_SEND)\". The test should be after that (around line\n489 in original copy).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 28 Dec 2001 19:44:53 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > OK, here is a patch for 7.3. It clears the paren counter only when the\n> > buffer is cleared. Forget what I said about recomputing quotes. You\n> > can't use backslash commands while you are in quotes.\n> \n> I don't think this works when the command is \\g because you're testing for\n> the cleared buffer too early. Look at what happens under \"if\n> (slashCmdStatus == CMD_SEND)\". 
The test should be after that (around line\n> 489 in original copy).\n\nAre you sure?\n\t\n\ttest=> (select\n\ttest(> \\g\n\tERROR: parser: parse error at or near \"\"\n\ttest(> \n\n\nI now see the next \\p kills me:\n\n\ttest(> \\p\n\t( select\n\ttest=> \n\nOops, line 489 doesn't work either:\n\n\ttest=> (select\n\ttest(> \\g\n\tERROR: parser: parse error at or near \"\"\n\ttest=> \n\nI now remember how confusing the previous_buffer handling is. It is\nthis line that is tricky:\n\n\t/* handle backslash command */\n\tslashCmdStatus = HandleSlashCmds(&line[i],\n\t\t query_buf->len > 0 ? query_buf : previous_buf, \n ^^^^^^^^^^^^^^^^^^^^^^^^\n &end_of_cmd); \nIt works now:\n\n\ttest=> (select\n\ttest(> \\g\n\tERROR: parser: parse error at or near \"\"\n\ttest(> \\p\n\t(select\n\nPatch attached.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/bin/psql/mainloop.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/psql/mainloop.c,v\nretrieving revision 1.45\ndiff -c -r1.45 mainloop.c\n*** src/bin/psql/mainloop.c\t2001/12/28 05:01:05\t1.45\n--- src/bin/psql/mainloop.c\t2001/12/28 19:12:37\n***************\n*** 447,453 ****\n \t\t\t{\n \t\t\t\tconst char *end_of_cmd = NULL;\n \n- \t\t\t\tparen_level = 0;\n \t\t\t\tline[i - prevlen] = '\\0';\t\t/* overwrites backslash */\n \n \t\t\t\t/* is there anything else on the line for the command? */\n--- 447,452 ----\n***************\n*** 473,479 ****\n \t\t\t\tif ((slashCmdStatus == CMD_SEND || slashCmdStatus == CMD_NEWEDIT) &&\n \t\t\t\t\tquery_buf->len == 0)\n \t\t\t\t{\n! 
\t\t\t\t\t/* copy previous buffer to current for for handling */\n \t\t\t\t\tappendPQExpBufferStr(query_buf, previous_buf->data);\n \t\t\t\t}\n \n--- 472,478 ----\n \t\t\t\tif ((slashCmdStatus == CMD_SEND || slashCmdStatus == CMD_NEWEDIT) &&\n \t\t\t\t\tquery_buf->len == 0)\n \t\t\t\t{\n! \t\t\t\t\t/* copy previous buffer to current for handling */\n \t\t\t\t\tappendPQExpBufferStr(query_buf, previous_buf->data);\n \t\t\t\t}\n \n***************\n*** 486,491 ****\n--- 485,493 ----\n \t\t\t\t\tappendPQExpBufferStr(previous_buf, query_buf->data);\n \t\t\t\t\tresetPQExpBuffer(query_buf);\n \t\t\t\t}\n+ \n+ \t\t\t\tif (query_buf->len == 0 && previous_buf->len == 0)\n+ \t\t\t\t\tparen_level = 0;\n \n \t\t\t\t/* process anything left after the backslash command */\n \t\t\t\ti += end_of_cmd - &line[i];", "msg_date": "Fri, 28 Dec 2001 14:33:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql misbehaves because of a simple typo" } ]
[ { "msg_contents": "I started to think about postgresql.conf.\n\nDo you think it is a good idea, and how hard would it be, if postmaster and the\nvarious utilities were able to read their whole setup from a postgresql.conf\nfile? Many UNIX utilities do this.\n\nThink of this:\n\nsu pgsql -c \"postmaster -C /etc/pgsql/mydb1.conf\"\n\nFrom this, postmaster can find all its settings in the config file. That way\nyou don't have to mess with startup scripts, environment variables, or anything;\nit just comes from the configuration file. Right now it is too hard to run two\ndifferent versions/instances of postgresql on the same machine.\n\nAlso, is there a real need to have a separate pg_hba.conf file? Couldn't\npostgresql.conf also contain these settings?\n\nI know this isn't of great concern to this group, but one of the things that\nPostgreSQL does that kind of bugs me, is that it keeps its configuration and\ndata in the same place.\n", "msg_date": "Sun, 02 Dec 2001 11:04:36 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "configuration files and PGDATA variable" },
{ "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Think of this:\n> su pgsql -c \"postmaster -C /etc/pgsql/mydb1.conf\"\n\n> From this, postmaster can find all its settings in the config\n> file. That way you don't have to mess with startup scripts,\n> environment variables, or anything; it just comes from the\n> configuration file.\n\nThis is largely possible now, with the exception of locale\nenvironment variables (not sure that we should ignore those) and\ndatabase-path environment variables (an ugly hack whose days are\nnumbered anyway). However, we spell the thing slightly differently:\n\n> su pgsql -c \"postmaster -D /path/to/data/dir\"\n\nand expect to find the config files therein.\n\n> Right now it is too hard to run two\n> different versions/instances of postgresql on the same machine.\n\nAu contraire, it's easy; I do it all the time. (I've got 7.0, 7.1,\nand 7.2 postmasters alive right now on this machine.) Moving the\nconfig files out of the data directory would make it harder, IMHO.\n\n> I know this isn't of great concern to this group, but one of the things that\n> PostgreSQL does that kind of bugs me, is that it keeps its configuration and\n> data in the same place.\n\nSome of us think that's a feature, not a bug. I realize that it's open\nto argument; but you'd better give practical arguments, not just assert\nthat it's the wrong approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Dec 2001 20:01:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: configuration files and PGDATA variable " } ]
[ { "msg_contents": "\nI have been trying to update a table -- the update statement returns 1\nindicating that one row has been updated, but no changes were made. \n\nThis is happening on two servers: \n\n 1.my development server, running Postgres 7.0.3, RedHat 7.1 \n 2.soon-to-be production server running Postgres 7.1.3, RedHat 6.2 \n\nCould I have a deadlock? Something else? \n\nWhat should I be looking for?\n\nThanks.\n\n JT\n________________________________________\nJames Thornton, http://jamesthornton.com\n", "msg_date": "Sun, 02 Dec 2001 12:31:48 -0600", "msg_from": "James Thornton <thornton@cs.baylor.edu>", "msg_from_op": true, "msg_subject": "update returns 1, but no changes have been made" } ]
[ { "msg_contents": " I'd like to contrast an error that I get when using like in text\n fields.\n\n I have a table A an a view v_A of A. Name is a text field (person\n names). Look at these queries:\n\n 1)\n\n \tselect * from A where name like '%DAVID%'\n\n It works pretty well and fast. Result:\n\n \tDAVID\n\tDAVID FOO\n\tDAVID\n\t.../...\n\n 2)\n\n \tselect * from v_A where name like '%DA%'\n\n It works too, with a result bigger (obviously) than first query.\n Result:\n\n \tDAVID\n\tDANIEL\n\tDAVID FOO\n\t.../...\n\n 3)\n\n \tselect * from v_A where name like '%DAVID%'\n\n It freezes psql. Why?. It seems a bug, doesn't it?. Thanks for any\n info.\n\n \t\t\t\t\tDavid\n", "msg_date": "Mon, 3 Dec 2001 08:37:36 +0100", "msg_from": "bombadil@wanadoo.es", "msg_from_op": true, "msg_subject": "Problem (bug?) wih like" }, { "msg_contents": "El lunes 03 de diciembre, bombadil@wanadoo.es escribi�:\n> I'd like to contrast an error that I get when using like in text\n> fields.\n\n Sorry. Info about my system:\n\n \tPostgresql 7.1.3\n\tDebian Sid GNU/Linux\n\ti386\n\n Greets.\n\n \t\t\t\t\tDavid\n", "msg_date": "Mon, 3 Dec 2001 09:13:02 +0100", "msg_from": "DaVinci <davinci@escomposlinux.org>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) wih like" }, { "msg_contents": "El lunes 03 de diciembre, bombadil@wanadoo.es escribi�:\n> I'd like to contrast an error that I get when using like in text\n> fields.\n\n Sorry. Info about my system:\n\n \tPostgresql 7.1.3\n\tDebian Sid GNU/Linux\n\ti386\n\n Greets.\n\n \t\t\t\t\tDavid\n", "msg_date": "Mon, 3 Dec 2001 09:33:25 +0100", "msg_from": "DaVinci <bombadil@wanadoo.es>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) wih like" }, { "msg_contents": "bombadil@wanadoo.es writes:\n> \tselect * from v_A where name like '%DAVID%'\n\n> It freezes psql.\n\nI don't believe that it's really \"frozen\". Taking a long time, maybe.\n\n> Why?\n\nYou tell us. 
What's the EXPLAIN query plan for these three queries?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 09:43:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) wih like " }, { "msg_contents": "El lunes 03 de diciembre, Tom Lane escribi�:\n> bombadil@wanadoo.es writes:\n> > \tselect * from v_A where name like '%DAVID%'\n> >\n> > It freezes psql.\n> \n> I don't believe that it's really \"frozen\". Taking a long time, maybe.\n\n Perhaps. But a veeeeeery long time, in any way ;)\n\n I have been waiting more than 3 minutes and... �e voila!, here it is\n :)\n\n> > Why?\n> \n> You tell us. What's the EXPLAIN query plan for these three queries?\n\n Ops. Sorry for laziness.\n\n Here are my queries:\n\n--------------------------------------------------------------------------\n 1) # explain SELECT * from cliente where nombre like '%DAVID%';\n\n Result:\n\nNOTICE: QUERY PLAN:\n\nSeq Scan on cliente (cost=0.00..16139.44 rows=1 width=131)\n\n-------------------------------------------------------------------------- \n 2) # explain SELECT * from v_cliente where nombre like '%DA%';\n\n Result:\n\n\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=54763.50..62874.36 rows=413980 width=183)\n -> Sort (cost=16238.44..16238.44 rows=54 width=131)\n -> Seq Scan on cliente c (cost=0.00..16236.88 rows=54 width=131)\n -> Sort (cost=38525.06..38525.06 rows=20097 width=74)\n -> Subquery Scan d (cost=891.91..37088.66 rows=20097 width=74)\n -> Hash Join (cost=891.91..37088.66 rows=20097 width=74)\n -> Hash Join (cost=100.89..26377.49 rows=20097 width=58)\n -> Merge Join (cost=78.96..17190.49 rows=20097 width=42)\n -> Index Scan using dir_via_ndx on direcci�n d (cost=0.00..8951.65 rows=20097 width=26)\n -> Sort (cost=78.96..78.96 rows=176 width=16)\n -> Seq Scan on v�a v (cost=0.00..72.40 rows=176 width=16)\n -> Hash (cost=21.80..21.80 rows=52 width=16)\n -> Seq Scan on provincia p (cost=0.00..21.80 rows=52 width=16)\n -> Hash 
(cost=786.20..786.20 rows=1928 width=16)\n -> Seq Scan on localidad l (cost=0.00..786.20 rows=1928 width=16)\n\n------------------------------------------------------------------------------\n 3) # explain SELECT * from v_cliente where nombre like '%DAVID%';\n\n Result:\n\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=54763.50..62874.36 rows=413980 width=183)\n -> Sort (cost=16238.44..16238.44 rows=54 width=131)\n -> Seq Scan on cliente c (cost=0.00..16236.88 rows=54 width=131)\n -> Sort (cost=38525.06..38525.06 rows=20097 width=74)\n -> Subquery Scan d (cost=891.91..37088.66 rows=20097 width=74)\n -> Hash Join (cost=891.91..37088.66 rows=20097 width=74)\n -> Hash Join (cost=100.89..26377.49 rows=20097 width=58)\n -> Merge Join (cost=78.96..17190.49 rows=20097 width=42)\n -> Index Scan using dir_via_ndx on direcci�n d (cost=0.00..8951.65 rows=20097 width=26)\n -> Sort (cost=78.96..78.96 rows=176 width=16)\n -> Seq Scan on v�a v (cost=0.00..72.40 rows=176 width=16)\n -> Hash (cost=21.80..21.80 rows=52 width=16)\n -> Seq Scan on provincia p (cost=0.00..21.80 rows=52 width=16)\n -> Hash (cost=786.20..786.20 rows=1928 width=16)\n -> Seq Scan on localidad l (cost=0.00..786.20 rows=1928 width=16)\n\n--------------------------------------------------------------------------------\n\n Greets.\n\n \t\t\t\t\tDavid\n\n", "msg_date": "Mon, 3 Dec 2001 16:08:59 +0100", "msg_from": "bombadil@wanadoo.es", "msg_from_op": true, "msg_subject": "Re: Problem (bug?) with like" }, { "msg_contents": "bombadil@wanadoo.es writes:\n> Here are my queries:\n\nYou sure you didn't paste in the same result for #2 and #3? 
They're\nthe same plan with the same rows estimates --- but I'd expect the rows\nestimates, at least, to change given the more-selective LIKE pattern.\n\nAlso, how many rows are there really that match '%DA%' and '%DAVID%'?\n\nI suspect the planner is being overoptimistic about the selectivity of\n'%DAVID%', and is choosing a plan that doesn't work well when there are\nlots of DAVIDs :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 10:40:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) with like " }, { "msg_contents": "El lunes 03 de diciembre, Tom Lane escribi�:\n> > Here are my queries:\n> \n> You sure you didn't paste in the same result for #2 and #3? They're\n> the same plan with the same rows estimates --- but I'd expect the rows\n> estimates, at least, to change given the more-selective LIKE pattern.\n\n I don't know for sure if I have send wrong data, but I think not.\n Tomorrow I'll get more info.\n\n> Also, how many rows are there really that match '%DA%' and '%DAVID%'?\n\n Very few for that incredible difference in time:\n\n \t'%DA%' -> 2 sec.\n\t'%DAVID%' -> 3 min.\n\n> I suspect the planner is being overoptimistic about the selectivity of\n> '%DAVID%', and is choosing a plan that doesn't work well when there are\n> lots of DAVIDs :-(\n\n I have thought that it only occurs when item to find is present\n completely in any number of registers, but this is only a burde\n hypothesis :?\n\n Thanks for all.\n\n \t\t\t\t\t\t\tDavid\n\n", "msg_date": "Mon, 3 Dec 2001 19:30:33 +0100", "msg_from": "bombadil@wanadoo.es", "msg_from_op": true, "msg_subject": "Re: Problem (bug?) with like" }, { "msg_contents": "bombadil@wanadoo.es writes:\n> Here are my queries:\n\nYou sure you didn't paste in the same result for #2 and #3? 
They're\nthe same plan with the same rows estimates --- but I'd expect the rows\nestimates, at least, to change given the more-selective LIKE pattern.\n\nAlso, how many rows are there really that match '%DA%' and '%DAVID%'?\n\nI suspect the planner is being overoptimistic about the selectivity of\n'%DAVID%', and is choosing a plan that doesn't work well when there are\nlots of DAVIDs :-(\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 3 Dec 2001 19:31:23 +0100", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) with like" }, { "msg_contents": "El lunes 03 de diciembre, Tom Lane escribi�:\n> You sure you didn't paste in the same result for #2 and #3? They're\n> the same plan with the same rows estimates --- but I'd expect the rows\n> estimates, at least, to change given the more-selective LIKE pattern.\n\n Ahem... You are right. It seems I have a problem with Vim and insertion of\n command result :(\n\n Here are the correct results:\n\n-----------------------------------------------------------------------\n1) # explain SELECT * from v_cliente where nombre like '%DA%';\n\nResult:\n\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=54763.50..62874.36 rows=413980 width=183)\n -> Sort (cost=16238.44..16238.44 rows=54 width=131)\n -> Seq Scan on cliente c (cost=0.00..16236.88 rows=54 width=131)\n -> Sort (cost=38525.06..38525.06 rows=20097 width=74)\n -> Subquery Scan d (cost=891.91..37088.66 rows=20097 width=74)\n -> Hash Join (cost=891.91..37088.66 rows=20097 width=74)\n -> Hash Join (cost=100.89..26377.49 rows=20097 width=58)\n -> Merge Join (cost=78.96..17190.49 rows=20097 width=42)\n -> Index Scan using dir_via_ndx on direcci�n d (cost=0.00..8951.65 rows=20097 width=26)\n -> Sort (cost=78.96..78.96 rows=176 width=16)\n -> Seq Scan on v�a v (cost=0.00..72.40 rows=176 width=16)\n -> Hash (cost=21.80..21.80 rows=52 width=16)\n -> Seq Scan on provincia p (cost=0.00..21.80 rows=52 width=16)\n -> Hash (cost=786.20..786.20 
rows=1928 width=16)\n -> Seq Scan on localidad l (cost=0.00..786.20 rows=1928 width=16)\n\nEXPLAIN\n---------------------------------------------------------------------------\n2) explain SELECT * from v_cliente where nombre like '%DAVID%';\n\nResult:\n\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=891.91..61414.58 rows=7638 width=183)\n -> Seq Scan on cliente c (cost=0.00..16236.88 rows=1 width=131)\n -> Subquery Scan d (cost=891.91..37088.66 rows=20097 width=74)\n -> Hash Join (cost=891.91..37088.66 rows=20097 width=74)\n -> Hash Join (cost=100.89..26377.49 rows=20097 width=58)\n -> Merge Join (cost=78.96..17190.49 rows=20097 width=42)\n -> Index Scan using dir_via_ndx on dirección d (cost=0.00..8951.65 rows=20097 width=26)\n -> Sort (cost=78.96..78.96 rows=176 width=16)\n -> Seq Scan on vía v (cost=0.00..72.40 rows=176 width=16)\n -> Hash (cost=21.80..21.80 rows=52 width=16)\n -> Seq Scan on provincia p (cost=0.00..21.80 rows=52 width=16)\n -> Hash (cost=786.20..786.20 rows=1928 width=16)\n -> Seq Scan on localidad l (cost=0.00..786.20 rows=1928 width=16)\nEXPLAIN\n----------------------------------------------------------------------------\n\n> Also, how many rows are there really that match '%DA%' and '%DAVID%'?\n\n 1) 2672 rows -> 3.59 sec.\n 2) 257 rows -> 364.69 sec.\n\n v_cliente and cliente have the same number of rows: 38975\n\n I hope this is enough. Greets.\n\n \t\t\t\t\t\t\tDavid\n\n", "msg_date": "Tue, 4 Dec 2001 09:15:57 +0100", "msg_from": "bombadil@wanadoo.es", "msg_from_op": true, "msg_subject": "Re: Problem (bug?)
with like" }, { "msg_contents": "bombadil@wanadoo.es writes:\n> 1) # explain SELECT * from v_cliente where nombre like '%DA%';\n\n> Merge Join (cost=54763.50..62874.36 rows=413980 width=183)\n> -> Sort (cost=16238.44..16238.44 rows=54 width=131)\n> -> Seq Scan on cliente c (cost=0.00..16236.88 rows=54 width=131)\n> -> Sort (cost=38525.06..38525.06 rows=20097 width=74)\n> -> Subquery Scan d (cost=891.91..37088.66 rows=20097 width=74)\n> -> Hash Join (cost=891.91..37088.66 rows=20097 width=74)\n> ...\n\n> 2) explain SELECT * from v_cliente where nombre like '%DAVID%';\n\n> Nested Loop (cost=891.91..61414.58 rows=7638 width=183)\n> -> Seq Scan on cliente c (cost=0.00..16236.88 rows=1 width=131)\n> -> Subquery Scan d (cost=891.91..37088.66 rows=20097 width=74)\n> -> Hash Join (cost=891.91..37088.66 rows=20097 width=74)\n> ... [same subplan as above]\n\nThe problem here is that the planner is being way too optimistic about\nthe selectivity of LIKE '%DAVID%' --- notice the estimate that only\none matching row will be found in cliente, rather than 54 as with '%DA%'.\nSo it chooses a plan that avoids the sort overhead needed for an\nefficient merge join with the other tables. That would be a win if\nthere were only one matching row, but as soon as there are lots, it's\na big loss, because the subquery to join the other tables gets redone\nfor every matching row :-(\n\n>> Also, how many rows are there really that match '%DA%' and '%DAVID%'?\n\n> 1) 2672 rows -> 3.59 sec.\n> 2) 257 rows -> 364.69 sec.\n\nI am thinking that the rules for selectivity of LIKE patterns probably\nneed to be modified. Presently the code assumes that a long constant\nstring has probability of occurrence proportional to the product of the\nprobabilities of the individual letters. That might be true in a random\nworld, but people don't search for random strings. 
I think we need to\nback off the selectivity estimate by some large factor to account for\nthe fact that the pattern being searched for is probably not random.\nAnyone have ideas how to do that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 10:21:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) with like " }, { "msg_contents": "\n\nOn Tue, 4 Dec 2001, Tom Lane wrote:\n\n> bombadil@wanadoo.es writes:\n> > 1) # explain SELECT * from v_cliente where nombre like '%DA%';\n> \n> > Merge Join (cost=54763.50..62874.36 rows=413980 width=183)\n> > -> Sort (cost=16238.44..16238.44 rows=54 width=131)\n\n> The problem here is that the planner is being way too optimistic about\n> the selectivity of LIKE '%DAVID%' --- notice the estimate that only\n> one matching row will be found in cliente, rather than 54 as with '%DA%'.\n> So it chooses a plan that avoids the sort overhead needed for an\n> efficient merge join with the other tables. That would be a win if\n> there were only one matching row, but as soon as there are lots, it's\n> a big loss, because the subquery to join the other tables gets redone\n> for every matching row :-(\n> \n> >> Also, how many rows are there really that match '%DA%' and '%DAVID%'?\n> \n> > 1) 2672 rows -> 3.59 sec.\n> > 2) 257 rows -> 364.69 sec.\n> \n> I am thinking that the rules for selectivity of LIKE patterns probably\n> need to be modified. Presently the code assumes that a long constant\n> string has probability of occurrence proportional to the product of the\n> probabilities of the individual letters. That might be true in a random\n> world, but people don't search for random strings. 
I think we need to\n> back off the selectivity estimate by some large factor to account for\n> the fact that the pattern being searched for is probably not random.\n> Anyone have ideas how to do that?\n> \n\nIs there any statistic kept by a partial index?\n\nfor instance\n\n\t#occurrences of A\n\t B\n\t\t\t\nA fairly small table/index could track these, couldn't it?\n\nIf it were a btree itself, then statistics could be split appropriately \ninto sub-branches when the # of occurrences exceeds some threshold.\n\ndavid\n\n\n", "msg_date": "Tue, 4 Dec 2001 11:10:30 -0500 (EST)", "msg_from": "David Walter <dwalter@ecs.syr.edu>", "msg_from_op": false, "msg_subject": "New planner for like was -- Problem (bug?) with like " }, { "msg_contents": "\n\nTom Lane wrote:\n\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>\n>>But what about '%A%' vs. '%AC%'. Seems the second is reasonably\n>>different from the first the our optimizer may be fine with that. Is it\n>>only when the strings get longer that we lose specificity?\n>>\n>\n>Yeah, I don't think that the estimates are bad for one or two\n>characters.
But the estimate gets real small real fast as you\n>increase the number of match characters in the LIKE pattern.\n>We need to slow that down some.\n>\nCould we just assign weights to the first few characters and then consider \nonly these\nfirst few characters when determining the probability of finding it?\n\nIf someone searches for '%New York City%' we have quite good reason to \nbelieve\nthat there are some of those in there, so we should use said weights to \nfactor in the fact that one usually searches\nfor strings that do exist.\n\nAnother option would be to gather statistics not only on individual \nletters but on bi- or\ntrigraphs. Then as a next step we could implement proper trigraph indexes ;)\n\n----------------\nHannu\n\n\n", "msg_date": "Tue, 04 Dec 2001 22:30:33 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem (bug?) with like" }, { "msg_contents": "> I am thinking that the rules for selectivity of LIKE patterns probably\n> need to be modified. Presently the code assumes that a long constant\n> string has probability of occurrence proportional to the product of the\n> probabilities of the individual letters. That might be true in a random\n> world, but people don't search for random strings. I think we need to\n> back off the selectivity estimate by some large factor to account for\n> the fact that the pattern being searched for is probably not random.\n> Anyone have ideas how to do that?\n\nBut what about '%A%' vs. '%AC%'. Seems the second is reasonably\ndifferent from the first, so our optimizer may be fine with that. Is it\nonly when the strings get longer that we lose specificity?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Dec 2001 14:00:32 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) with like" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> But what about '%A%' vs. '%AC%'. Seems the second is reasonably\n> different from the first the our optimizer may be fine with that. Is it\n> only when the strings get longer that we lose specificity?\n\nYeah, I don't think that the estimates are bad for one or two\ncharacters. But the estimate gets real small real fast as you\nincrease the number of match characters in the LIKE pattern.\nWe need to slow that down some.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 14:52:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) with like " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > But what about '%A%' vs. '%AC%'. Seems the second is reasonably\n> > different from the first the our optimizer may be fine with that. Is it\n> > only when the strings get longer that we lose specificity?\n> \n> Yeah, I don't think that the estimates are bad for one or two\n> characters. But the estimate gets real small real fast as you\n> increase the number of match characters in the LIKE pattern.\n> We need to slow that down some.\n\nYea, maybe a log base 2 decrease:\n\t\n\t1 char\t1x\n\t2 char\t2x\n\t4 char\t3x\n\t8 char\t4x\n\t16 char 5x\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Dec 2001 14:55:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) 
with like" }, { "msg_contents": "> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > But what about '%A%' vs. '%AC%'. Seems the second is reasonably\n> > > different from the first the our optimizer may be fine with that. Is it\n> > > only when the strings get longer that we lose specificity?\n> > \n> > Yeah, I don't think that the estimates are bad for one or two\n> > characters. But the estimate gets real small real fast as you\n> > increase the number of match characters in the LIKE pattern.\n> > We need to slow that down some.\n> \n> Yea, maybe a log base 2 decrease:\n> \t\n> \t1 char\t1x\n> \t2 char\t2x\n> \t4 char\t3x\n> \t8 char\t4x\n> \t16 char 5x\n\nI did a little research on this. I think the problem is that ordinary\ncharacters are assumed to randomly appear in a character string, while\nin practice, if the string has already been specified like 'DAV', there\nare very few additional characters that can follow it and make sense.\n\nLooking at backend/utils/adt/selfuncs.c, I see this:\n\n\t#define FIXED_CHAR_SEL\t0.04\t/* about 1/25 */\n...\n\tsel *= FIXED_CHAR_SEL;\n\nwhich means every additional character reduces the selectivity by 96%. \nThis seems much too restrictive to me. Because of the new optimizer\nbuckets, we do have good statistics on the leading character, but\nadditional characters drastically reduce selectivity. I think perhaps a\nnumber like 0.50 or 50% may be correct.\n\nThat would be a table like this:\n\n \t1 char\t2x\n \t2 char\t4x\n \t4 char\t8x\n \t8 char\t16x\n \t16 char 32x\n\nwhich is more restrictive than I initially suggested above but less\nrestrictive than we have now.\n\nShould we assume additional characters are indeed randomly appearing in\nthe string?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Dec 2001 13:27:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem (bug?) with like" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > But what about '%A%' vs. '%AC%'. Seems the second is reasonably\n> > different from the first the our optimizer may be fine with that. Is it\n> > only when the strings get longer that we lose specificity?\n> \n> Yeah, I don't think that the estimates are bad for one or two\n> characters. But the estimate gets real small real fast as you\n> increase the number of match characters in the LIKE pattern.\n> We need to slow that down some.\n\nSee my earlier email about the 50% idea for LIKE. Do ordinary string\ncomparisons also have this problem?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Dec 2001 13:35:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) with like" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do ordinary string\n> comparisons also have this problem?\n\nNo, only LIKE and regex matching.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Dec 2001 13:54:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Problem (bug?) with like " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > But what about '%A%' vs. '%AC%'. Seems the second is reasonably\n> > different from the first the our optimizer may be fine with that. Is it\n> > only when the strings get longer that we lose specificity?\n> \n> Yeah, I don't think that the estimates are bad for one or two\n> characters. 
But the estimate gets real small real fast as you\n> increase the number of match characters in the LIKE pattern.\n> We need to slow that down some.\n\nOK, I think I have the proper value for FIXED_CHAR_SEL. It is currently\n0.04 or 1/26, meaning the letters are random, though this is not usually\nthe case.\n\nIf we assume our optimizer buckets have given us a reasonable value for\nthe first character, suppose it is an 'F', there are only a few valid\ncharacters after that, at least in English. There are vowels, and a few\nconsonants, and given that character, there are only a few characters\nthat can be valid after that. To my thinking, it is two characters that\nrepresent the same distribution as one random character, leaving 0.20 as\nthe proper value for FIXED_CHAR_SEL because 0.20 * 0.20 is the same as\n0.04.\n\nAdded to TODO:\n\n * Change FIXED_CHAR_SEL to 0.20 from 0.04 to give better selectivity (Bruce)\n\nIf people think there is a better value for this, please chime in.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Dec 2001 23:55:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) with like" }, { "msg_contents": "> The problem here is that the planner is being way too optimistic about\n> the selectivity of LIKE '%DAVID%' --- notice the estimate that only\n> one matching row will be found in cliente, rather than 54 as with '%DA%'.\n> So it chooses a plan that avoids the sort overhead needed for an\n> efficient merge join with the other tables. 
That would be a win if\n> there were only one matching row, but as soon as there are lots, it's\n> a big loss, because the subquery to join the other tables gets redone\n> for every matching row :-(\n> \n> >> Also, how many rows are there really that match '%DA%' and '%DAVID%'?\n> \n> > 1) 2672 rows -> 3.59 sec.\n> > 2) 257 rows -> 364.69 sec.\n> \n> I am thinking that the rules for selectivity of LIKE patterns probably\n> need to be modified. Presently the code assumes that a long constant\n> string has probability of occurrence proportional to the product of the\n> probabilities of the individual letters. That might be true in a random\n> world, but people don't search for random strings. I think we need to\n> back off the selectivity estimate by some large factor to account for\n> the fact that the pattern being searched for is probably not random.\n> Anyone have ideas how to do that?\n\nLet's use the above example with the new FIXED_CHAR_SEL values:\n\nWith the new 0.20 value for FIXED_CHAR_SEL, we see for DA and DAVID\nabove:\n\t\nDA\t1) 0.20 ^ 2\n\t .04\n\t\nDAVID\t2) 0.20 ^ 5\n\t .00032\n\nIf we divide these two, we get:\n\n\t> 0.04 / 0.00032\n\t 125\n\nwhile looking at the total counts reported above, we get:\n\t\n\t> 2672 / 257\n\t ~10.39688715953307392996\n\nThe 0.04 value gives a value of:\n\t\n\t> 0.04 ^ 2 \n\t .0016\n\t> 0.04 ^ 5 \n\t .0000001024\n\t> .0016 / .0000001024\n\t 15625\n\nClearly the 0.20 value is 10x too large, while the 0.04 value is 1000x\ntoo large. Because this was a contrived example, and because some have\nmore random text than DAVID in their field, I think 0.20 is the proper\nvalue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Dec 2001 00:08:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem (bug?) with like" } ]
[ { "msg_contents": "Hi,\n\tI threw this to cygwin but it doesn't seem to have elicited any\ninterest over the weekend so I'm sending it here as a beta problem (not\nentirely sure if this is correct or if it should go to bugs).\n- Stuart\n\n-----Original Message-----\nFrom: Henshall, Stuart - WCP\n[mailto:SHenshall@westcountrypublications.co.uk]\nSent: 30 November 2001 19:13\nTo: 'pgsql-cygwin@postgresql.org'\nSubject: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 \n\n\nWhen trying to start the postmaster on win98se with cygwin\nI get told that the data directory must be 0700, but when I try to chmod to\n700, it apparently succeeds, but the permissions stay at 755. I suspect\nthis to be because win98 has no real file protection (just a read only\nattribute)\n(uname -a: \nCYGWIN_98-4.10 BX3551 1.3.5(0.47/3/2) 2001-11-13 23:16 i686 unknown)\n- Stuart\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Mon, 3 Dec 2001 10:05:06 -0000 ", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " } ]
[ { "msg_contents": "\n> > > What modification should be made to configure.in to make it\ninclude\n> > > SupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n\n> \n> OK, I bit the bullet and made up a whole new macro for type testing.\n> Those who can't get at a new snapshot can try the attached patch.\n> \n> I have included <stdio.h> because that appears to effect the\ndefinition of\n> the types in question on AIX; apparently it pulls in <inttypes.h>\nsomehow.\n\nCorrect. stdio.h pulls sys/types.h which then pulls sys/inttypes.h ==\ninttypes.h.\n\n> Please check this on AIX and BeOS.\n\nWorks like a charm on AIX :-) (Tested snapshot of 3 Dec 4:01)\nI think it is also a gain, since it does not confuse people looking at \nconfigure output.\n\nThank you Peter\nAndreas\n", "msg_date": "Mon, 3 Dec 2001 11:05:42 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": ">> > > What modification should be made to configure.in to make it\n>include\n>> > > SupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n>\n>> \n>> OK, I bit the bullet and made up a whole new macro for type testing.\n>> Those who can't get at a new snapshot can try the attached patch.\n>> \n>> I have included <stdio.h> because that appears to effect the\n>definition of\n>> the types in question on AIX; apparently it pulls in <inttypes.h>\n>somehow.\n>\n>Correct. stdio.h pulls sys/types.h which then pulls sys/inttypes.h ==\n>inttypes.h.\n>\n>> Please check this on AIX and BeOS.\n\n The detection now works on BEOS\n\n thanks\n\n cyril\n\n", "msg_date": "Tue, 4 Dec 2001 10:34:35 +0100", "msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " } ]
[ { "msg_contents": "> \tI threw this to cygwin but it doesn't seem to have elicited any\n> interest over the weekend so I'm sending it here as a beta \n> problem (not\n> entirely sure if this is correct or if it should go to bugs).\n\n> \n> When trying to start the postmaster on win98se with cygwin\n> I get told that the data directory must be 0700, but when I \n> try to chmod to\n> 700, it apparently succeeds, but the permissions stay at \n> 755. I suspect\n> this to be because win98 has no real file protection (just a read only\n> attribute)\n> (uname -a: \n> CYGWIN_98-4.10 BX3551 1.3.5(0.47/3/2) 2001-11-13 23:16 i686 unknown)\n> - Stuart\n\nIt works on WinNT, Win2K, ... because full file security is implemented\nonly in these systems. There could be a dirty hack that disables the\ncheck (for 0700 permissions on $DATADIR) in\nsrc/backend/postmaster/postmaster.c.
I don't know if it is possible to\n> do it during runtime for only Win9x systems.\n\nUgh...\n\nUnless someone can think of a reasonable runtime check to distinguish\nwin98 from newer systems, I think we have little choice but to make the\ndata directory permissions check be #ifndef __CYGWIN__. I don't like\nthis much, but (a) I don't want to hold up 7.2 while we look for better\nideas, and (b) no one should consider a Windoze box secure anyway ;-).\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 11:05:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I'll write and test something with cygwin this week if that would help. (If\n> someone can get to it first it is something stupid like \"GetWindowsVersion()\"\n> or something like that.\n\nWell, the non-stupid part is to know which return values correspond to\nWindows versions that have proper file permissions and which values to\nversions that don't. Given that NT and the other versions are two\nseparate code streams (no?), I'm not sure that distinguishing this is\ntrivial, and even less sure that we should assume all future Windows\nreleases will have it. I'd be more comfortable with an autoconf-like\napproach: actually probe the desired feature and see if it works.\n\nI was thinking this morning about trying to chmod the directory and,\nif that doesn't report an error, assuming that all is well. On Windows\nit'd presumably claim success despite not being able to do what is asked\nfor. 
But this would definitely require testing.\n\nI'm really not happy about the idea of holding up the release for this...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 21:52:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I'll write and test something with cygwin this week if that would help. (If\n> > someone can get to it first it is something stupid like \"GetWindowsVersion()\"\n> > or something like that.\n> \n> Well, the non-stupid part is to know which return values correspond to\n> Windows versions that have proper file permissions and which values to\n> versions that don't. Given that NT and the other versions are two\n> separate code streams (no?), I'm not sure that distinguishing this is\n> trivial, and even less sure that we should assume all future Windows\n> releases will have it. I'd be more comfortable with an autoconf-like\n> approach: actually probe the desired feature and see if it works.\n> \n> I was thinking this morning about trying to chmod the directory and,\n> if that doesn't report an error, assuming that all is well. On Windows\n> it'd presumably claim success despite not being able to do what is asked\n> for. But this would definitely require testing.\n> \n> I'm really not happy about the idea of holding up the release for this...\n\nIt is a trivial piece of code to write; there is a bit mask that indicates the\ntechnology, be it DOS or NT. I will be able to get to it over the week.\n\nThe proper test would be to test for \"known\" DOS legacy because all future\nWindows versions will be at least capable of file permissions.\n\nThe function call is GetVersionEx(...);
it accepts a structure:\n\nPlatform SDK: Windows System Information\n\n OSVERSIONINFO\n\n The OSVERSIONINFO data structure contains operating system version\n information. The information includes major and minor version numbers, a\n build number, a platform identifier, and descriptive text about the\n operating system. This structure is used with the GetVersionEx function.\n\n typedef struct _OSVERSIONINFO{ \n DWORD dwOSVersionInfoSize; \n DWORD dwMajorVersion; \n DWORD dwMinorVersion; \n DWORD dwBuildNumber; \n DWORD dwPlatformId; \n TCHAR szCSDVersion[ 128 ]; \n } OSVERSIONINFO; \n\n Members\n\n dwOSVersionInfoSize \n Specifies the size, in bytes, of this data structure. Set this member to\n sizeof(OSVERSIONINFO) before calling the GetVersionEx function. \n dwMajorVersion \n Identifies the major version number of the operating system as follows. \n Windows 95: 4\n Windows 98: 4\n Windows Me: 4\n Windows NT 3.51: 3\n Windows NT 4.0: 4\n Windows 2000: 5\n Windows XP: 5\n Windows .NET Server: 5\n\n dwMinorVersion \n Identifies the minor version number of the operating system as follows. \n Windows 95: 0\n Windows 98: 10\n Windows Me: 90\n Windows NT 3.51: 51\n Windows NT 4.0: 0\n Windows 2000: 0\n Windows XP: 1\n Windows .NET Server: 1\n\n dwBuildNumber \n Windows NT/2000/XP: Identifies the build number of the operating system. \n Windows 95/98/Me: Identifies the build number of the operating system in\n the low-order word. The high-order word contains the major and minor\n version numbers. \n dwPlatformId \n Identifies the operating system platform. This member can be one of the\n following values. \n VER_PLATFORM_WIN32s: Win32s on Windows 3.1. \n VER_PLATFORM_WIN32_WINDOWS: Windows 95, Windows 98, or Windows Me. \n VER_PLATFORM_WIN32_NT: Windows NT 3.51, Windows NT 4.0, Windows 2000,\n Windows XP, or Windows .NET Server.\n\n szCSDVersion \n Windows NT/2000/XP: Contains a null-terminated string, such as \"Service\n Pack 3\", that indicates the latest Service Pack installed on the system.\n If no Service Pack has been installed, the string is empty. \n Windows 95/98/Me: Contains a null-terminated string that indicates\n additional version information. For example, \" C\" indicates Windows 95\n OSR2 and \" A\" indicates Windows 98 Second Edition.\n", "msg_date": "Mon, 03 Dec 2001 23:07:38 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98" }, { "msg_contents": "----- Original Message ----- \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Monday, December 03, 2001 9:52 PM\n\n> mlw <markw@mohawksoft.com> writes:\n> > I'll write and test something with cygwin this week if that would help. (If\n> > someone can get to it first it is something stupid like \"GetWindowsVersion()\"\n> > or something like that.\n> \n> Well, the non-stupid part is to know which return values correspond to\n> Windows versions that have proper file permissions and which values to\n> versions that don't. Given that NT and the other versions are two\n> separate code streams (no?), I'm not sure that distinguishing this is\n> trivial, and even less sure that we should assume all future Windows\n> releases will have it. I'd be more comfortable with an autoconf-like\n> approach: actually probe the desired feature and see if it works.\n> \n> I was thinking this morning about trying to chmod the directory and,\n> if that doesn't report an error, assuming that all is well. On Windows\n> it'd presumably claim success despite not being able to do what is asked\n> for. 
But this would definitely require testing.\n> \n> I'm really not happy about the idea of holding up the release for this...\n\nWhy so much fuss about an officially unsupported platform\nat this point in time? Interested individuals can work on a more or less\nworkable W98 \"port\" for 7.3 right after the 7.2 release, no?\n(They can actually work now, but why should this hold up the release?)\n\n\n\n", "msg_date": "Tue, 4 Dec 2001 00:54:21 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " }, { "msg_contents": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> Why so much fuss about an officially unsupported platform\n> at this point in time? Interested individuals can work on a more or less\n> workable W98 \"port\" for 7.3 right after the 7.2 release, no?\n\nMy point exactly --- let's work on this for 7.3, not 7.2. However,\nit'd be nice if 7.2 didn't fail entirely on Win98. Thus the thought\nthat a simple #ifdef is the right solution for this release.\n\nWe did not have any permissions checks in releases before 7.2, so\nthis approach doesn't mean any regression for newer Windows versions.\nIt'd be better to have the check in the newer Windows versions, but\nI'm satisfied to let that happen in 7.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 01:18:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " }, { "msg_contents": "Tom Lane wrote:\n> \n> =?iso-8859-1?Q?Hor=E1k_Daniel?= <horak@sit.plzen-city.cz> writes:\n> >> When trying to start the postmaster on win98se with cygwin\n> >> I get told that the data directory must be 0700, but when I\n> >> try to chmod to\n> >> 700, it apparently succeds, but nothing permissions stay at\n> >> 755.
I suspect\n> >> this to be because win98 has no real file protection (just a read only\n> >> attribute)\n> \n> > It works on WinNT, Win2K, ... because full file security is implemented\n> > only in this systems. There could be a dirty hack that disables the\n> > check (for 0700 permissions on $DATADIR) in\n> > src/backend/postmaster/postmaster.c. I don't know if it is possible to\n> > do it during runtime for only Win9x systems.\n> \n> Ugh...\n> \n> Unless someone can think of a reasonable runtime check to distinguish\n> win98 from newer systems, I think we have little choice but to make the\n> data directory permissions check be #ifndef __CYGWIN__. I don't like\n> this much, but (a) I don't want to hold up 7.2 while we look for better\n> ideas, and (b) no one should consider a Windoze box secure anyway ;-).\n> \n> Comments?\n\nI have an idea which my side step the whole question about Windows.\n\nWhy not have a postgres option which allows the admin to specify that Postgres\ndoes not check file permissions? Then it becomes a documentation issue.\n", "msg_date": "Tue, 04 Dec 2001 07:38:56 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98" }, { "msg_contents": "Tom Lane writes:\n\n> Unless someone can think of a reasonable runtime check to distinguish\n> win98 from newer systems, I think we have little choice but to make the\n> data directory permissions check be #ifndef __CYGWIN__.\n\nI don't think so. 
We've never claimed to support Cygwin on Windows\n95/98/ME, but we've reasonably supported Cygwin on Windows NT/2000 and we\nshouldn't break that to support some other system.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 4 Dec 2001 17:12:55 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Unless someone can think of a reasonable runtime check to distinguish\n>> win98 from newer systems, I think we have little choice but to make the\n>> data directory permissions check be #ifndef __CYGWIN__.\n\n> I don't think so. We've never claimed to support Cygwin on Windows\n> 95/98/ME, but we've reasonably supported Cygwin on Windows NT/2000 and we\n> shouldn't break that to support some other system.\n\nNot applying a data directory permissions check doesn't quite rise to\nthe level of \"breaking it\", I think, unless you want to argue that 7.1\nand before were all broken because they didn't have the check. Note\nalso Dave Page's complaint about ntsec, and the observation that\navailability of Unixy file permissions depends on the filesystem type\neven under more recent Windowsen. So we've got some issues here even\non the recent ones.\n\nIt's my feeling that the better part of engineering judgment is to\ndisable the check for the moment. 
It seems too risky to leave it in,\nand I don't want to postpone the release while we think of a better\nanswer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 11:59:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " }, { "msg_contents": "Tom Lane writes:\n\n> We did not have any permissions checks in releases before 7.2, so\n> this approach doesn't mean any regression for newer Windows versions.\n\nWe did have the same permission check in 7.1, only it was on\n$PGDATA/postgresql.conf instead. Nothing has changed materially.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 4 Dec 2001 22:21:16 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> We did not have any permissions checks in releases before 7.2, so\n>> this approach doesn't mean any regression for newer Windows versions.\n\n> We did have the same permission check in 7.1, only it was on\n> $PGDATA/postgresql.conf instead. Nothing has changed materially.\n\nHmmm ... why weren't Windows users complaining before, then?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 16:24:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " }, { "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> We did not have any permissions checks in releases before 7.2, so\n> >> this approach doesn't mean any regression for newer Windows versions.\n> \n> > We did have the same permission check in 7.1, only it was on\n> > $PGDATA/postgresql.conf instead. Nothing has changed materially.\n> \n> Hmmm ... 
why weren't Windows users complaining before, then?\n\nIt was not the same permission check.\nThe check allowed 0744 whereas the current check only allows\n0700.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 05 Dec 2001 08:59:54 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98" } ]
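The runtime probe Tom Lane proposes in the thread above, chmod() the data directory and then re-stat() it to see whether the mode actually took effect, can be sketched in portable C. This is only an illustrative sketch, not code from the PostgreSQL tree; the function name and the choice to treat any failure as "permissions not honored" are assumptions made here:

```c
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <assert.h>

/*
 * Probe whether the filesystem honors Unix permission bits, instead of
 * asking the OS which Windows flavor it is.  On a filesystem with no
 * real permissions (Windows 9x / FAT) chmod() reports success but a
 * subsequent stat() still shows the old mode.
 *
 * Returns 1 if chmod(dir, 0700) really results in mode 0700,
 * 0 otherwise (or on any error).
 */
static int
dir_permissions_work(const char *dir)
{
	struct stat st;

	if (chmod(dir, S_IRWXU) != 0)
		return 0;				/* chmod itself failed outright */
	if (stat(dir, &st) != 0)
		return 0;
	/* compare only the permission bits */
	return (st.st_mode & (S_IRWXU | S_IRWXG | S_IRWXO)) == S_IRWXU;
}
```

A postmaster could call such a probe once at startup on $PGDATA and skip the 0700 check when it returns 0, which would cover Win98 without an #ifdef and without trusting future Windows versions to keep behaving one way or the other.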
[ { "msg_contents": "The following ECPG TODO item is already closed\n\no Allow SELECT of array of strings into a auto-sized variable\n\nDecent lines for HISTORY would be (ECPG enhancements)\n\n auto allocation for indicator variable arrays (int *ind_p=NULL)\n auto allocation for string arrays (char **foo_pp=NULL)\n ECPGfree_auto_mem fixed\n all function names with external linkage are now prefixed by ECPG\n\nYours\n Christof\n\nPS:\nIn HISTORY:\n\nEXECUTE ... INTO ... implemented\nmultiple row descriptor support (e.g. CARDINALITY)\n\nyou might mark me as the person responsible for any mistakes made ;-)\n\n\n", "msg_date": "Mon, 03 Dec 2001 15:46:56 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "TODO" }, { "msg_contents": "> The following ECPG TODO item is already closed\n> \n> o Allow SELECT of array of strings into a auto-sized variable\n\nTODO updated.\n\n> \n> Decent lines for HISTORY would be (ECPG enhancements)\n> \n> auto allocation for indicator variable arrays (int *ind_p=NULL)\n> auto allocation for string arrays (char **foo_pp=NULL)\n> ECPGfree_auto_mem fixed\n> all function names with external linkage are now prefixed by ECPG\n\nHISTORY/release.sgml updated.\n\n> In HISTORY:\n> \n> EXECUTE ... INTO ... implemented\n> multiple row descriptor support (e.g. CARDINALITY)\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Dec 2001 00:03:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: TODO" } ]
[ { "msg_contents": "Hello,\n\nAfter fresh install of PostgreSQL 7.1.3 I was having one particular query \n(JOIN), running for several hours. Upon closer investigation, it was returning \nweird EXPLAIN 'optimisations' (in essence, doing 'index' searches on fields \nthat were not constrained in the query etc). The same query has reasonable \nEXPLAIN and executes fine under 7.0.2.\n\nI tried to re-create the table by table, starting with the following:\n\nCREATE TABLE r (\n\ta integer,\n\tb integer,\n\tc integer,\n\td integer\n);\n\nCREATE INDEX r_d_idx on r(d);\n\nCOPY r FROM stdin;\n1\t4234\t4324\t4\n1\t4342\t886\t8\n[...]\n\\.\n\n(table has ~30k rows)\n\nEXPLAIN SELECT * FROM r where d = 8;\n\nThe result is \n\nNOTICE: QUERY PLAN:\n\nSeq Scan on r (cost=0.00...3041.13 rows=7191 width=4)\n\nDoes not matter if I VACUUM ANALYZE the table or the whole database.\n\nAny ideas why this happens?\n\nPostgreSQL is compiled with \n\n./configure --enable-locale --with-perl --with-python --with-tcl --enable-odbc \n--with-krb4 --with-openssl --enable-syslog --with-includes=/usr/include/kerbero\nsIV:/usr/contrib/include\n\nthis, under BSD/OS 4.2\n\nThanks in advance for any ideas,\nDaniel\n\n", "msg_date": "Mon, 03 Dec 2001 18:59:45 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "7.1.3 not using index" }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> (table has ~30k rows)\n> EXPLAIN SELECT * FROM r where d = 8;\n> The result is \n> NOTICE: QUERY PLAN:\n> Seq Scan on r (cost=0.00...3041.13 rows=7191 width=4)\n\nSeqscan is the right plan to retrieve 7k rows out of a 30k table.\nSo the question is whether that estimate is in the right ballpark\nor not. How many rows are there really with d=8? 
If it's way off,\nwhat do you get from\n\nselect attname,attdispersion,s.*\nfrom pg_statistic s, pg_attribute a, pg_class c\nwhere starelid = c.oid and attrelid = c.oid and staattnum = attnum\nand relname = 'r';\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 12:34:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": "Tom,\n\nYou may be correct that sequential scan is preferable, but I can never get \nversion 7.1.3 to use index scan on almost any table. Here is the output of \nyour query:\n\n attname | attdispersion | starelid | staattnum | staop | stanullfrac \n| stacommonfrac | stacommonval | staloval | stahival\n-----------------+---------------+----------+-----------+-------+-------------+\n---------------+--------------+----------+----------\n a | 0.978655 | 8160023 | 1 | 97 | 0 \n| 0.988079 | 1 | 1 | 52\n b | 2.86564e-05 | 8160023 | 2 | 97 | 0 \n| 0.0001432 | 4971 | 1 | 12857\n c | 0.000520834 | 8160023 | 3 | 97 | 0 \n| 0.0025776 | 1 | 1 | 11309\n d | 0.104507 | 8160023 | 4 | 97 | 0 \n| 0.257437 | 8 | 1 | 32\n\nIn fact, field 'd' has only few values - usually powers of 2 (history). Values \nare respectively 1,2,4,8. 
16 and 32 and are spread like:\n\n\n person_type | count \n-------------+-------\n 1 | 8572\n 2 | 3464\n 4 | 8607\n 8 | 7191\n 16 | 3\n 32 | 96\n(6 rows)\n\nSome estimates are weird, such as:\n\ndb=# explain select * from r where d = 16;\nNOTICE: QUERY PLAN:\n\nSeq Scan on r (cost=0.00..527.16 rows=719 width=16)\n\nI also tried the same query where the value exists only once in the table - \none would expect this is the perfect use of index...\n\nI also note very slow response to any queries that access systems tables, such \nas \\d in psql.\n\nDaniel\n\n\n>>>Tom Lane said:\n > Daniel Kalchev <daniel@digsys.bg> writes:\n > > (table has ~30k rows)\n > > EXPLAIN SELECT * FROM r where d = 8;\n > > The result is \n > > NOTICE: QUERY PLAN:\n > > Seq Scan on r (cost=0.00...3041.13 rows=7191 width=4)\n > \n > Seqscan is the right plan to retrieve 7k rows out of a 30k table.\n > So the question is whether that estimate is in the right ballpark\n > or not. How many rows are there really with d=8? 
If it's way off,\n > what do you get from\n > \n > select attname,attdispersion,s.*\n > from pg_statistic s, pg_attribute a, pg_class c\n > where starelid = c.oid and attrelid = c.oid and staattnum = attnum\n > and relname = 'r';\n > \n > \t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Dec 2001 20:06:25 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> You may be correct that sequential scan is preferable, but I can never get \n> version 7.1.3 to use index scan on almost any table.\n\nThat's a fairly large claim to make, especially on the evidence of this\none table.\n\n\n> attname | attdispersion | starelid | staattnum | staop | stanullfrac \n> | stacommonfrac | stacommonval | staloval | stahival\n> d | 0.104507 | 8160023 | 4 | 97 | 0 \n> | 0.257437 | 8 | 1 | 32\n\n> In fact, field 'd' has only few values - usually powers of 2\n(history).\n\nWhat you've got here is that 8 is recorded as the most common value in\ncolumn d, with a frequency of 0.25 or about 1/4th of the table. So\nsearches for d = 8 will correctly estimate the selectivity at about 0.25\nand will (correctly) decide not to use the index.\n\n7.1 does not have any info about column values other than the most\ncommon, and will arbitrarily estimate their frequencies at (IIRC)\none-tenth of the most common value's. That's probably still too much\nto trigger an indexscan; the crossover point is usually 1% or even\nless selectivity.\n\n> Values are respectively 1,2,4,8. 
16 and 32 and are spread like:\n\n> person_type | count \n> -------------+-------\n> 1 | 8572\n> 2 | 3464\n> 4 | 8607\n> 8 | 7191\n> 16 | 3\n> 32 | 96\n> (6 rows)\n\n7.2 will do better on this sort of example: it should correctly select\nan indexscan when looking for 16 or 32, otherwise a seqscan.\n\n> I also note very slow response to any queries that access systems\n> tables, such as \\d in psql.\n\nThere might indeed be something broken in your installation, but you've\nshown me no concrete evidence of it so far. On this query, 7.1 is\nbehaving as designed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 13:19:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": ">>>Tom Lane said:\n > Daniel Kalchev <daniel@digsys.bg> writes:\n > > You may be correct that sequential scan is preferable, but I can never get\n \n > > version 7.1.3 to use index scan on almost any table.\n > \n > That's a fairly large claim to make, especially on the evidence of this\n > one table.\n\nI tend to make it after waiting for almost two calendar days for an join query \nto complete (which takes at most under 10 seconds on 7.0). :-) (and of course, \nafter spending few more days to understand what is going on)\n\n > > attname | attdispersion | starelid | staattnum | staop | stanullf\n rac \n > > | stacommonfrac | stacommonval | staloval | stahival\n > > d | 0.104507 | 8160023 | 4 | 97 | \n 0 \n > > | 0.257437 | 8 | 1 | 32\n > \n > > In fact, field 'd' has only few values - usually powers of 2\n > (history).\n > \n > What you've got here is that 8 is recorded as the most common value in\n > column d, with a frequency of 0.25 or about 1/4th of the table. So\n > searches for d = 8 will correctly estimate the selectivity at about 0.25\n > and will (correctly) decide not to use the index.\n\nThis I understand and this is why I gave the other examples... 
Your \nexplanation on how 7.1 would handle this situation sort of explains the \nunfortunate siguation...\n\nAm I correct in assuming that it will be better to delete the index on such \nfields? (for 7.1)\n\n > > I also note very slow response to any queries that access systems\n > > tables, such as \\d in psql.\n > \n > There might indeed be something broken in your installation, but you've\n > shown me no concrete evidence of it so far. On this query, 7.1 is\n > behaving as designed.\n\nIf you are going to tell me 7.1 will only use index scan on PRIMARY KEY \ncolumns, I will spend some more time with the 7.2 betas (who knows, this may \nbe the secret plan <grin>)\n\nHere is another table:\n\nCREATE TABLE \"persons\" (\n \"personid\" integer DEFAULT nextval('personid_seq'::text),\n \"name\" text,\n \"title\" text,\n[...]\n);\n\nCREATE INDEX \"persons_personid_idx\" on \"persons\" using btree ( \"personid\" \n\"int4_ops\" );\n\ndb=# select count(*) from persons;\n\n count \n-------\n 14530\n(1 row)\n\n(part of the statistics for this row)\n attname | attdispersion | starelid | staattnum | staop | stanullfrac | \nstacommonfrac | stacommonval | staloval | \nstahival\n-------------+---------------+----------+-----------+-------+-------------+----\n-----------+------------------------+------------------------+-----------------\n---------\n personid | 4.1328e-05 | 19795 | 1 | 97 | 0 | \n0.000206469 | 2089 | 1 | 12857\n\nnow, EXPLAIN again gives me:\n\ndb=# explain select * from persons where personid = 1;\nNOTICE: QUERY PLAN:\n\nSeq Scan on persons (cost=0.00..490.62 rows=1 width=177)\n\n(note, personid is not unique - there are some 'duplicate' rows that mark \narchived records - but there are no more than 4-5 occurrences of the same \npersonid and this is rare)\n\nIf this is problem with my installation (I especially installed new BSD/OS 4.2 \nto test on clean 7.1.3 with my production database). 
It has locale eanbled, \nbut nowhere in the queries there is text involved...\n\nHow about this query (using my previous table r, that has poiner to the \npersonid on persons):\n\ndb=# explain select * from persons, r where r.d = 1 and r.a = persons.personid;\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=0.00..nan rows=299 width=193)\n -> Index Scan using persons_personid_idx on persons (cost=0.00..nan \nrows=14530 width=177)\n -> Index Scan using r_a_idx on representatives (cost=0.00..nan rows=719 \nwidth=16)\n\nWhy would it do index scans on r.a? \n\nDaniel\n\n", "msg_date": "Mon, 03 Dec 2001 20:38:19 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> Here is another table:\n\n> CREATE TABLE \"persons\" (\n> \"personid\" integer DEFAULT nextval('personid_seq'::text),\n> \"name\" text,\n> \"title\" text,\n> [...]\n> );\n\n> CREATE INDEX \"persons_personid_idx\" on \"persons\" using btree ( \"personid\" \n> \"int4_ops\" );\n\n> (part of the statistics for this row)\n> attname | attdispersion | starelid | staattnum | staop | stanullfrac | \n> stacommonfrac | stacommonval | staloval | \n> stahival\n> personid | 4.1328e-05 | 19795 | 1 | 97 | 0 | \n> 0.000206469 | 2089 | 1 | 12857\n\n> now, EXPLAIN again gives me:\n\n> db=# explain select * from persons where personid = 1;\n> NOTICE: QUERY PLAN:\n\n> Seq Scan on persons (cost=0.00..490.62 rows=1 width=177)\n\nThat does seem pretty broken; the thing is well aware that the query is\nselective (note the rows estimate), so why is it not using the index?\n\nDo you get the same plan if you try to force an indexscan by doing\n\tset enable_seqscan to off;\n\nAlso, I'd like to see the EXPLAIN VERBOSE result not just EXPLAIN.\n\n> db=# explain select * from persons, r where r.d = 1 and r.a = persons.personid;\n> NOTICE: QUERY PLAN:\n\n> Merge Join (cost=0.00..nan rows=299 width=193)\n> -> Index 
Scan using persons_personid_idx on persons (cost=0.00..nan \n> rows=14530 width=177)\n> -> Index Scan using r_a_idx on representatives (cost=0.00..nan rows=719 \n> width=16)\n\n> Why would it do index scans on r.a? \n\nTo get the data in the right order for a merge join. However, I think\nthe really interesting part of this is the \"cost=0.00..nan\" bit.\nApparently you're getting some NaN results during computation of the\ncost estimates, which will completely screw up all the planner's\nestimates of which plan is cheapest. That needs to be looked at.\nWe've seen previous reports of 7.1 getting confused that way when there\nwere column min or max values of +/-infinity in timestamp columns ...\nbut it looks like these are plain integer columns, so there's something\nelse going on.\n\nOne thing that should be eliminated at the outset is the possibility of\na bad build of Postgres. How did you configure and build, *exactly*?\nDid you make any midcourse corrections (like building some of the files\nwith different compiler switches than others)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 14:57:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": ">>>Tom Lane said:\n > Daniel Kalchev <daniel@digsys.bg> writes:\n > > Here is another table:\n > \n > > CREATE TABLE \"persons\" (\n > > \"personid\" integer DEFAULT nextval('personid_seq'::text),\n > > \"name\" text,\n > > \"title\" text,\n > > [...]\n > > );\n > \n > > CREATE INDEX \"persons_personid_idx\" on \"persons\" using btree ( \"personid\"\n \n > > \"int4_ops\" );\n > \n > > (part of the statistics for this row)\n > > attname | attdispersion | starelid | staattnum | staop | stanullfrac \n | \n > > stacommonfrac | stacommonval | staloval | \n > > stahival\n > > personid | 4.1328e-05 | 19795 | 1 | 97 | 0 \n | \n > > 0.000206469 | 2089 | 1 | 12857\n > \n > > now, EXPLAIN again gives me:\n > \n > > db=# explain 
select * from persons where personid = 1;\n > > NOTICE: QUERY PLAN:\n > \n > > Seq Scan on persons (cost=0.00..490.62 rows=1 width=177)\n > \n > That does seem pretty broken; the thing is well aware that the query is\n > selective (note the rows estimate), so why is it not using the index?\n > \n > Do you get the same plan if you try to force an indexscan by doing\n > \tset enable_seqscan to off;\n\nHere is what it gives:\n\ndb=# set enable_seqscan to off;\nSET VARIABLE\ndb=# explain select * from persons where personid = 1;\nNOTICE: QUERY PLAN:\n\nIndex Scan using persons_personid_idx on persons (cost=0.00..nan rows=1 \nwidth=177)\n\n\n > \n > Also, I'd like to see the EXPLAIN VERBOSE result not just EXPLAIN.\n\nHere it is (after turning enable_seqscan back on)\n\n\ndb=# explain verbose select * from persons where personid = 1;\nNOTICE: QUERY DUMP:\n\n{ SEQSCAN :startup_cost 0.00 :total_cost 490.62 :rows 1 :width 177 \n:qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod \n-1 :resname personid :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } \n:expr { VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 1}} { TARGETENTRY :resdom { RESDOM :resno 2 :restype 25 \n:restypmod -1 :resname name :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk \nfalse } :expr { VAR :varno 1 :varattno 2 :vartype 25 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 2}} { TARGETENTRY :resdom { RESDOM \n:resno 3 :restype 25 :restypmod -1 :resname title :reskey 0 :reskeyop 0 \n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 3 :vartype \n25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 3}} { TARGETENTRY \n:resdom { RESDOM :resno 4 :restype 25 :restypmod -1 :resname occupation \n:reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 \n:varattno 4 :vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno \n4}} { TARGETENTRY :resdom { RESDOM :resno 5 :restype 23 
:restypmod -1 :resname \nperson_type :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { \nVAR :varno 1 :varattno 5 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 5}} { TARGETENTRY :resdom { RESDOM :resno 6 :restype 25 :restypmod \n-1 :resname street :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } \n:expr { VAR :varno 1 :varattno 6 :vartype 25 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 6}} { TARGETENTRY :resdom { RESDOM :resno 7 :restype 25 \n:restypmod -1 :resname town :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk \nfalse } :expr { VAR :varno 1 :varattno 7 :vartype 25 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 7}} { TARGETENTRY :resdom { RESDOM \n:resno 8 :restype 25 :restypmod -1 :resname zipcode :reskey 0 :reskeyop 0 \n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 8 :vartype \n25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 8}} { TARGETENTRY \n:resdom { RESDOM :resno 9 :restype 25 :restypmod -1 :resname phone :reskey 0 \n:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno \n9 :vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 9}} { \nTARGETENTRY :resdom { RESDOM :resno 10 :restype 25 :restypmod -1 :resname fax \n:reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 \n:varattno 10 :vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno \n10}} { TARGETENTRY :resdom { RESDOM :resno 11 :restype 25 :restypmod -1 \n:resname email :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr \n{ VAR :varno 1 :varattno 11 :vartype 25 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 11}} { TARGETENTRY :resdom { RESDOM :resno 12 :restype \n16 :restypmod -1 :resname archived :reskey 0 :reskeyop 0 :ressortgroupref 0 \n:resjunk false } :expr { VAR :varno 1 :varattno 12 :vartype 16 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 12}} { TARGETENTRY :resdom { RESDOM \n:resno 13 :restype 1043 
:restypmod 20 :resname archived_by :reskey 0 :reskeyop \n0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 13 \n:vartype 1043 :vartypmod 20 :varlevelsup 0 :varnoold 1 :varoattno 13}} { \nTARGETENTRY :resdom { RESDOM :resno 14 :restype 1184 :restypmod -1 :resname \narchived_at :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { \nVAR :varno 1 :varattno 14 :vartype 1184 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 14}} { TARGETENTRY :resdom { RESDOM :resno 15 :restype \n1184 :restypmod -1 :resname created_at :reskey 0 :reskeyop 0 :ressortgroupref \n0 :resjunk false } :expr { VAR :varno 1 :varattno 15 :vartype 1184 :vartypmod \n-1 :varlevelsup 0 :varnoold 1 :varoattno 15}} { TARGETENTRY :resdom { RESDOM \n:resno 16 :restype 1043 :restypmod 20 :resname created_by :reskey 0 :reskeyop \n0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 16 \n:vartype 1043 :vartypmod 20 :varlevelsup 0 :varnoold 1 :varoattno 16}} { \nTARGETENTRY :resdom { RESDOM :resno 17 :restype 1184 :restypmod -1 :resname \nupdated_at :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { \nVAR :varno 1 :varattno 17 :vartype 1184 :vartypmod -1 :varlevelsup 0 \n:varnoold 1 :varoattno 17}} { TARGETENTRY :resdom { RESDOM :resno 18 :restype \n1043 :restypmod 20 :resname updated_by :reskey 0 :reskeyop 0 :ressortgroupref \n0 :resjunk false } :expr { VAR :varno 1 :varattno 18 :vartype 1043 :vartypmod \n20 :varlevelsup 0 :varnoold 1 :varoattno 18}}) :qpqual ({ EXPR :typeOid 16 \n:opType op :oper { OPER :opno 96 :opid 65 :opresulttype 16 } :args ({ VAR \n:varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 \n:varoattno 1} { CONST :consttype 23 :constlen 4 :constbyval true :constisnull \nfalse :constvalue 4 [ 1 0 0 0 ] })}) :lefttree <> :righttree <> :extprm () \n:locprm () :initplan <> :nprm 0 :scanrelid 1 }\nNOTICE: QUERY PLAN:\n\nSeq Scan on persons (cost=0.00..490.62 rows=1 width=177)\n\n> One thing that should be 
eliminated at the outset is the possibility of\n > a bad build of Postgres. How did you configure and build, *exactly*?\n > Did you make any midcourse corrections (like building some of the files\n > with different compiler switches than others)?\n\n\nI will rebuild it again, re-initdb and reload the whole database, but this \nbuild was on vanilla BSD/OS 4.2 with the only modifications to add larger \nshared memory support in the kernel (I need to run many backends). My current \nfavorite (which I copy from server to server :) is\n\n# support for larger processes and number of childs\noptions \"DFLDSIZ=\\(128*1024*1024\\)\"\noptions \"MAXDSIZ=\\(256*1024*1024\\)\" \noptions \"CHILD_MAX=256\"\noptions \"OPEN_MAX=256\"\noptions \"KMAPENTRIES=4000\" # Prevents kmem malloc errors !\noptions \"KMEMSIZE=\\(32*1024*1024\\)\"\n\noptions \"SHMMAXPGS=32768\"\noptions \"SHMMNI=400\"\noptions \"SHMSEG=204\"\n# More semaphores for Postgres\noptions \"SEMMNS=600\"\n\nPostgreSQL was build with these options\n\n./configure --enable-locale --with-perl --with-pythos --with-tcl \n--enable-obdc --with-krb4 --with-openssl --enable-syslog \n--with-includes=/usr/include/kerberosIV:/usr/contrib/include\n\nI have the habbit to always clean before every build.\n\nWhat I will do try to do now is to clean/rebuild/install everything again. \nThen try to build with --enable-locale only. 
Then try to build without any \noptions at all..\n\nHope you find some useful information to track this down.\n\nDaniel\n\n", "msg_date": "Mon, 03 Dec 2001 22:57:59 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n>>> Do you get the same plan if you try to force an indexscan by doing\n>>> set enable_seqscan to off;\n\n> db=# set enable_seqscan to off;\n> SET VARIABLE\n> db=# explain select * from persons where personid = 1;\n> NOTICE: QUERY PLAN:\n\n> Index Scan using persons_personid_idx on persons (cost=0.00..nan rows=1 \n> width=177)\n\nHmph. The evidence so far suggests that you're getting a NaN cost\nestimate for *any* indexscan, ie, the problem is somewhere in cost_index\nor its subroutines. That's a bit of a leap but it's consistent both\nwith your general complaint and these specific examples.\n\n> I will rebuild it again, re-initdb and reload the whole database, but this \n> build was on vanilla BSD/OS 4.2 with the only modifications to add larger \n> shared memory support in the kernel (I need to run many backends).\n\nI'd wonder more about your compiler than the kernel. Keep in mind that\nno one but you has reported anything like this ... so there's got to be\nsome fairly specific cause.\n\n> PostgreSQL was build with these options\n\n> ./configure --enable-locale --with-perl --with-pythos --with-tcl \n> --enable-obdc --with-krb4 --with-openssl --enable-syslog \n> --with-includes=/usr/include/kerberosIV:/usr/contrib/include\n\n> What I will do try to do now is to clean/rebuild/install everything again. 
\n> Then try to build with --enable-locale only.\n\nOffhand I would not expect any of those options to affect anything\nhappening in the planner, at least not for integer column types.\n\nWild guess: what is configure producing for the ALIGN_xxx macros?\n(look in src/include/config.h) Does it match what you believe about\nyour hardware?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 16:05:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": ">>>Tom Lane said:\n > > I will rebuild it again, re-initdb and reload the whole database, but this\n \n > > build was on vanilla BSD/OS 4.2 with the only modifications to add larger \n > > shared memory support in the kernel (I need to run many backends).\n > \n > I'd wonder more about your compiler than the kernel. Keep in mind that\n > no one but you has reported anything like this ... so there's got to be\n > some fairly specific cause.\n\nWell... the same compiler (machine) happily runs 7.0.3. But see below.\n\n > > PostgreSQL was build with these options\n > \n > > ./configure --enable-locale --with-perl --with-pythos --with-tcl \n > > --enable-obdc --with-krb4 --with-openssl --enable-syslog \n > > --with-includes=/usr/include/kerberosIV:/usr/contrib/include\n > \n > > What I will do try to do now is to clean/rebuild/install everything again.\n \n > > Then try to build with --enable-locale only.\n > \n > Offhand I would not expect any of those options to affect anything\n > happening in the planner, at least not for integer column types.\n\nIt did not change anything, when I just \n\nmake clean\nmake\nmake install\n\nIt WORKED however after\n\nmake clean\nrm config.cache\n./configure --enable-locale --with-perl\nmake\nmake install\n\nYou will say, that configure had mangled things up... 
But I then tried:\n\nmake clean\nrm config.cache\n./configure (with the all options)\nmake\nmake install\n\nand got the same junk!\n\nThen, I discovered that my configure options had two typos.... corrected these \nand now all works!?!?!?!\n\nHow is this possible? Why would not configure complain for incorrect options?\n\n > Wild guess: what is configure producing for the ALIGN_xxx macros?\n > (look in src/include/config.h) Does it match what you believe about\n > your hardware?\n\nLooks very reasonable, as far as I can tell:\n\n#define ALIGNOF_SHORT 2\n#define ALIGNOF_INT 4\n#define ALIGNOF_LONG 4\n#define ALIGNOF_LONG_LONG_INT 4\n#define ALIGNOF_DOUBLE 4\n#define MAXIMUM_ALIGNOF 4\n\n\nIt may turn to be some library trouble...\n\nDaniel\n\n", "msg_date": "Mon, 03 Dec 2001 23:47:06 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": "By the way, now that it works, I am glad to prove you wrong on the optimizer \nbehavior on 7.1.3 :-)\n\nMy query \n\nselect * from r where d = 8; \n\nstill results in sequential scan:\n\nSeq Scan on r (cost=0.00..527.16 rows=7191 width=16)\n\nHowever, the query\n\nselect * from r where d = 1; \n\nnow results in index scan.\n\nIndex Scan using r_d_idx on r (cost=0.00..308.45 rows=719 width=16)\n\nNot to say I am sufficiently confused - now to go on with some more testing...\n\nDaniel\n\n", "msg_date": "Mon, 03 Dec 2001 23:51:21 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> Then, I discovered that my configure options had two\n> typos.... corrected these and now all works!?!?!?!\n\nMy, *that's* interesting. configure is supposed to ignore unrecognized\n--with and --enable options.\n\n> How is this possible? 
Why would not configure complain for incorrect options?\n\nThe autoconf people claim that's a feature. I think it's a bug, too,\nbut our opinion doesn't count.\n\n> It may turn to be some library trouble...\n\nI'm wondering the same. Try saving the make log for both ways of\nconfiguring, and comparing to see if there's any difference in what\nlibraries get linked into the backend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 17:12:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.3 not using index " }, { "msg_contents": ">> It may turn to be some library trouble...\n\nAfter examining the code a little, I wonder whether the problem might\nbe due to some library messing up the behavior of log(). You could\nexperiment at the SQL level with \"SELECT ln(x)\" to see if there's\nanything obviously wrong.\n\nBTW, do the regression tests show any difference in behavior between\nthe good and bad builds?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 17:53:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.3 not using index " } ]
[ { "msg_contents": "Guys, attached is a patch against the 7.2b3 source tree which improves\nthe i18n of the date formatting functions, using the nl_langinfo(3)\nfunction, together with an autoconf macro to test it's availability,\nalso an small change to the tab-complete feature of psql which allows\nto use tables and views interchangeably. Please consider the\nincorporation of those, small but useful changes, in the upcoming 7.2\nrelease.\n\nKind regards,\nManuel.", "msg_date": "03 Dec 2001 13:41:25 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "date formatting and tab-complete patch" }, { "msg_contents": "> Guys, attached is a patch against the 7.2b3 source tree which improves\n> the i18n of the date formatting functions, using the nl_langinfo(3)\n> function, together with an autoconf macro to test it's availability,\n> also an small change to the tab-complete feature of psql which allows\n> to use tables and views interchangeably. Please consider the\n> incorporation of those, small but useful changes, in the upcoming 7.2\n> release.\n\nSorry, too late for 7.2. Added to queue for 7.3:\n\n\thttp://216.55.132.35/cgi-bin/pgpatches2\n\t\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Dec 2001 15:23:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > Guys, attached is a patch against the 7.2b3 source tree which improves\n> > the i18n of the date formatting functions, using the nl_langinfo(3)\n> > function, together with an autoconf macro to test it's availability,\n> > also an small change to the tab-complete feature of psql which allows\n> > to use tables and views interchangeably. Please consider the\n> > incorporation of those, small but useful changes, in the upcoming 7.2\n> > release.\n> \n> Sorry, too late for 7.2. Added to queue for 7.3:\n\nOh I see, - maybe for 7.2.1? ;-) - .Any way I miss the pg_config.h.in\npatch. Attached.\n\nRegards,\nManuel.", "msg_date": "03 Dec 2001 14:29:11 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "> > Sorry, too late for 7.2. Added to queue for 7.3:\n> \n> Oh I see, - maybe for 7.2.1? ;-) - .Any way I miss the pg_config.h.in\n> patch. Attached.\n\nOK, got it; now at same location:\n\n\thttp://216.55.132.35/cgi-bin/pgpatches2\n\nWe don't add features in 7.X.X releases, so it will have to wait for\n7.3. Sorry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Dec 2001 15:42:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> We don't add features in 7.X.X releases, so it will have to wait for\n> 7.3. 
Sorry.\n\nThat's OK for me, since I already maintain my own version of postgres\nwith \"features\" that may not be useful for every one, or not\nacceptable due \"backwards compatibility\" constrains. However waiting\nabout a year for those small changes to appear in the official\nrelease is, IMHO, too much waiting :-(.\n\nRegards,\nManuel.\n", "msg_date": "03 Dec 2001 15:04:36 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > We don't add features in 7.X.X releases, so it will have to wait for\n> > 7.3. Sorry.\n> \n> That's OK for me, since I already maintain my own version of postgres\n> with \"features\" that may not be useful for every one, or not\n> acceptable due \"backwards compatibility\" constrains. However waiting\n> about a year for those small changes to appear in the official\n> release is, IMHO, too much waiting :-(.\n\nYes, I understand.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Dec 2001 16:17:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Manuel Sugawara <masm@fciencias.unam.mx> writes:\n> That's OK for me, since I already maintain my own version of postgres\n> with \"features\" that may not be useful for every one, or not\n> acceptable due \"backwards compatibility\" constrains. However waiting\n> about a year for those small changes to appear in the official\n> release is, IMHO, too much waiting :-(.\n\nThe interval between major versions is supposed to be 4 months or so,\nnot a year. 
I admit we've had pretty awful luck at keeping to schedule\nthe past couple of cycles --- but that's not a reason to change the\nschedule target, and definitely not a reason to cause the schedule to\nslip more by ignoring the feature-freeze rule after beta starts.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 17:09:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch " }, { "msg_contents": "Hi there,\n\n I tried using the binary large object support of Postgresql but it \nseems to be different from the one supported by Interbase. Is there a way \nby which I can get rid of the lo_export and lo_import way of doing blob \nin PostgreSQL?\n\n Thanks a lot...\n\nmanny\n\n", "msg_date": "Tue, 4 Dec 2001 10:18:45 +0800 (PHT)", "msg_from": "Manuel Cabido <manny@tinago.msuiit.edu.ph>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Manuel Sugawara writes:\n\n> Guys, attached is a patch against the 7.2b3 source tree which improves\n> the i18n of the date formatting functions, using the nl_langinfo(3)\n> function,\n\nWhat's the effect of that?\n\n> together with an autoconf macro to test it's availability,\n\nPlease put macros into config/*.m4 (probably c-library.m4).\n\n> also an small change to the tab-complete feature of psql which allows\n> to use tables and views interchangeably.\n\nISTM that in the situation that tab completion covers tables and views are\nnot interchangeable. 
Do you have an example?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 4 Dec 2001 22:21:31 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Manuel Sugawara writes:\n> \n> > Guys, attached is a patch against the 7.2b3 source tree which improves\n> > the i18n of the date formatting functions, using the nl_langinfo(3)\n> > function,\n> \n> What's the effect of that?\n\nCurrently none since I miss another part of the patch :-( -attached-,\nbut if you add:\n\nsetlocale(LC_TIME, \"\");\n\nin src/backend/main/main.c you will see month and day names printed\naccording to the current locale.\n\nregress=# select to_char(now(),'fmday dd/month/yyyy');\n to_char\n--------------------------\n martes 04/diciembre/2001\n(1 row)\n\n> ISTM that in the situation that tab completion covers tables and views are\n> not interchangeable. Do you have an example?\n\nselect * from <TAB>\n\\d <TAB>\n\nneither works for views; that's annoying, since most access to many\ndatabases are done through views. Actually the only counterexamples I\nfound are DROP and ALTER TABLE but, may be, that's enough.\n\nRegards,\nManuel.", "msg_date": "04 Dec 2001 17:57:11 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "On Tue, Dec 04, 2001 at 05:57:11PM -0600, Manuel Sugawara wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n> > Manuel Sugawara writes:\n> > \n> > > Guys, attached is a patch against the 7.2b3 source tree which improves\n> > > the i18n of the date formatting functions, using the nl_langinfo(3)\n> > > function,\n\n We don't directly call locale stuff in PostgreSQL code. 
It's\n encapsulated in PGLC_ (pg_locale.c) API and all is cached, for \n this we use localeconv(3) that returns all in one struct.\n\n (What portability of nl_langinfo()? The localeconv() is ANSI C and\n POSIX functions.)\n\n> > What's the effect of that?\n> \n> Currently none since I miss another part of the patch :-( -attached-,\n> but if you add:\n> \n> setlocale(LC_TIME, \"\");\n\n I mean is here more reason why not use LC_TIME and LC_NUMERIC in\n current sources. A correct solution will \"per-columns\" locales.\n\n _May be_ all in PostgreSQL is ready (intependent) to LC_TIME, but I\n not sure with it. Thomas?\n\n> in src/backend/main/main.c you will see month and day names printed\n> according to the current locale.\n> \n> regress=# select to_char(now(),'fmday dd/month/yyyy');\n> to_char\n> --------------------------\n> martes 04/diciembre/2001\n> (1 row)\n\n Sorry didn't see your original patch (I overlook and delete it in my\n IMBOX:-(). But I have a question -- do you solve vice versa\n conversion from string to timestamp? The basic feature of to_char()\n is that all outputs must be possible parse by to_timestamp() with\n same format definition:\n\ntest=# select to_char('2001-12-05 00:00:00+01'::timestamp,\n 'fmday dd/month/yyyy');\n to_char\n-----------------------------\n wednesday 05/december /2001\n(1 row)\n\ntest=# select to_timestamp('wednesday 05/december /2001', \n 'fmday dd/month/yyyy');\n to_timestamp\n------------------------\n 2001-12-05 00:00:00+01\n(1 row)\n\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 5 Dec 2001 10:20:12 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n\n> We don't directly call locale stuff in PostgreSQL code. 
It's\n> encapsulated in PGLC_ (pg_locale.c) API and all is cached, for \n> this we use localeconv(3) that returns all in one struct.\n> \n> (What portability of nl_langinfo()? The localeconv() is ANSI C and\n> POSIX functions.)\n\nlocalenconv is posix and ANSI C but it doesn't provide such\nfunctionality (localized month and day names). nl_langinfo conforms to\n\"The Single UNIX Specification, Version 2\", according to it's manual\npage. The portability isn't an issue as long as you provide means to\ntest and avoid it's use in systems that doesn't provide it. I know\nthat, at least, Linux and Solaris does, but FreeBSD does not.\n\n[...]\n> Sorry didn't see your original patch (I overlook and delete it in my\n> IMBOX:-(). But I have a question -- do you solve vice versa\n> conversion from string to timestamp? The basic feature of to_char()\n> is that all outputs must be possible parse by to_timestamp() with\n> same format definition:\n\nNo, however the work seems pretty easy.\n\n> test=# select to_char('2001-12-05 00:00:00+01'::timestamp,\n> 'fmday dd/month/yyyy');\n> to_char\n> -----------------------------\n> wednesday 05/december /2001\n> (1 row)\n\nThis example shows another issue. With localized month and day names\nthe hardcoded paddings doesn't make sense any more since you may have\na month name longer than 9 chars -septiembre- as instance.\n\nIf people is interested I may spend some time working with this.\n\nRegards,\nManuel.\n", "msg_date": "05 Dec 2001 10:15:37 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "On Wed, Dec 05, 2001 at 10:15:37AM -0600, Manuel Sugawara wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> \n> > We don't directly call locale stuff in PostgreSQL code. 
It's\n> > encapsulated in PGLC_ (pg_locale.c) API and all is cached, for \n> > this we use localeconv(3) that returns all in one struct.\n> > \n> > (What portability of nl_langinfo()? The localeconv() is ANSI C and\n> > POSIX functions.)\n> \n> localenconv is posix and ANSI C but it doesn't provide such\n> functionality (localized month and day names). nl_langinfo conforms to\n> \"The Single UNIX Specification, Version 2\", according to it's manual\n> page. The portability isn't an issue as long as you provide means to\n> test and avoid it's use in systems that doesn't provide it. I know\n> that, at least, Linux and Solaris does, but FreeBSD does not.\n\n But we want FreeBSD and others systems too.... \n\n> [...]\n> > Sorry didn't see your original patch (I overlook and delete it in my\n> > IMBOX:-(). But I have a question -- do you solve vice versa\n> > conversion from string to timestamp? The basic feature of to_char()\n> > is that all outputs must be possible parse by to_timestamp() with\n> > same format definition:\n> \n> No, however the work seems pretty easy.\n> \n> > test=# select to_char('2001-12-05 00:00:00+01'::timestamp,\n> > 'fmday dd/month/yyyy');\n> > to_char\n> > -----------------------------\n> > wednesday 05/december /2001\n> > (1 row)\n> \n> This example shows another issue. With localized month and day names\n> the hardcoded paddings doesn't make sense any more since you may have\n> a month name longer than 9 chars -septiembre- as instance.\n\n Magic \"9 chars\" is nice and popular Oracle feature :-)\n\n> If people is interested I may spend some time working with this.\n\n I already thought about it. You are right it's possible implement,\n but I mean not is good if some feature anticipate other matter of\n project ... but current to_char(float, '9G999D99') output string \n that depend on locale. Why don't allow it for \n to_chat(timenstamp, ...) too? 
\n \n But don't forget it must be fast, portable and locale stuff must\n cached and encapsulated in pg_locale.c.\n \n The \"9 chars\" can be used for English locales only.\n\n Comments?\n \n Karel\n \n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 5 Dec 2001 17:48:04 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n\n> > This example shows another issue. With localized month and day names\n> > the hardcoded paddings doesn't make sense any more since you may have\n> > a month name longer than 9 chars -septiembre- as instance.\n> \n> Magic \"9 chars\" is nice and popular Oracle feature :-)\n\nYou think so?, May be it was in the teletype or monospaced display\ntimes. But now? I don't think so.\n\n> The \"9 chars\" can be used for English locales only.\n\nMay be add a GUC, something like useless_oracle_compatibility turned\non by default (for the backwards compatibility). Believe me, the\npaddings are mostly useless those days where aplications tends to use\nmore the web or graphical interfaces than terminals or teletypes.\n\nRegards,\nManuel.\n", "msg_date": "05 Dec 2001 11:33:26 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "On Wed, Dec 05, 2001 at 11:33:26AM -0600, Manuel Sugawara wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> \n> > > This example shows another issue. 
With localized month and day names\n> > > the hardcoded paddings doesn't make sense any more since you may have\n> > > a month name longer than 9 chars -septiembre- as instance.\n> > \n> > Magic \"9 chars\" is nice and popular Oracle feature :-)\n\n It was irony :-) I don't know why Oracle has this stupid feature, but\n if we want be compatible, we must use it too....\n\n> You think so?, May be it was in the teletype or monospaced display\n> times. But now? I don't think so.\n> \n> > The \"9 chars\" can be used for English locales only.\n> \n> May be add a GUC, something like useless_oracle_compatibility turned\n> on by default (for the backwards compatibility). Believe me, the\n\n I not sure with some global switch like 'useless_oracle_compatibility',\n here must be way how use old and new features together without\n restriction.\n\n> paddings are mostly useless those days where aplications tends to use\n> more the web or graphical interfaces than terminals or teletypes.\n\n Yesterday I forgot important note: if to_char() will support locales\n for datetime output it must be implemented as _new_ feature. It means\n this new implementation must be backward compatible for all current\n format definition. For example:\n\n to_char(now(), 'Month') _must_forever_output_: 'December '\n\n and this output must be independent on locales setting. It's \n important, because a lot of application depend of current output\n format. If someone wants use locale depend output must be possible\n set it by some format suffix, for example:\n\n to_char(now(), 'LCMonth')\n\n (to_char() already knows work with suffixes, for example FM). 
And must\n be possible mix it in one format definition:\n \n to_char(now(), 'LCMonth Month') \n\n -output-> 'Xxxxxx December '\n\n where 'Xxxxxx' is the Month from locales and 'December ' is output\n compatible with current (Oracle) to_char() and is locale independen.\n\n This solve a problem with \"9 Chars\", because all non-locales output\n or compilation without locales support will use it like now.\n\n It's very simular to number formatting by to_char(). There is two \n patterns for decimal point, 'D' -- for locale depend deciaml point\n and '.' -- for \"standard\" locales independend. With Months/Days it \n must be same.\n\n Right? \n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 6 Dec 2001 10:03:22 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Karel Zak writes:\n\n> (What portability of nl_langinfo()? The localeconv() is ANSI C and\n> POSIX functions.)\n\nTo use the table of contents of the GNU libc manual:\n\n* The Lame Way to Locale Data:: ISO C's `localeconv'.\n* The Elegant and Fast Way:: X/Open's `nl_langinfo'.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 6 Dec 2001 17:39:11 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n\n[...]\n> and this output must be independent on locales setting. It's \n> important, because a lot of application depend of current output\n> format. If someone wants use locale depend output must be possible\n> set it by some format suffix, for example:\n> \n> to_char(now(), 'LCMonth')\n\nI was thinking it and I do not like much the idea to add a new\nprefix. 
I would like more a new set of functions:\n\n * lto_char (date, format [,locale])\n * lto_date (date,format [,locale])\n * lto_timestamp (timestamp,format [,locale])\n\nAlso if we are filling with spaces in the present code we would have\nto also do it with the locale aware code. Seems to me that this is a\nbetter way to keep backwards compatibility, so\n\nlto_char('1974/10/22'::date,'dd/month/yyyy','es_MX') \n\nwould lead to '22/octubre /1974', filled to 10, since 'septiembre'\nwhich has 10 chars is the longest month name in the 'es' localization,\nso people can still use 'FM' if they want '22/octubre/1974'. Another\nissue is that capilalization is also locale depenent, as instance in\nspanish we don't capitalize month names, but my crystal ball sees\npeople complaining because 'Month' doesn't capitalize, so in any case\nwe need something like a prefix to indicate that we do not want it in\ncapital letters, nor in small letters, but in the form that the\nlocalization in course prefers, maybe something like 'lmonth'.\n\nComments?\n\nRegards,\nManuel.\n\nPS. By the way, reviewing the code I found a bug that causes the\nfollowing test:\n\nto_date(to_char('1979/22/10'::date,'dd/fmrm/yyyy'),'dd/fmrm/yyyy') = '1979/22/10'::date;\n\ngive an error instead of a true value. Attached is the patch to fix it\nagainst the 7.2b3 tree.", "msg_date": "08 Dec 2001 19:41:01 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "On Sat, Dec 08, 2001 at 07:41:01PM -0600, Manuel Sugawara wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> \n> [...]\n> > and this output must be independent on locales setting. It's \n> > important, because a lot of application depend of current output\n> > format. 
If someone wants use locale depend output must be possible\n> > set it by some format suffix, for example:\n> > \n> > to_char(now(), 'LCMonth')\n> \n> I was thinking it and I do not like much the idea to add a new\n> prefix. I would like more a new set of functions:\n> \n> * lto_char (date, format [,locale])\n> * lto_date (date,format [,locale])\n> * lto_timestamp (timestamp,format [,locale])\n\n Yes, we can add new function for [,locale], but internaly it\n _must_ be same code as current to_ stuff. I think new prefix is no\n a problem because it's relevant for Months/Days only. \n\n> Also if we are filling with spaces in the present code we would have\n> to also do it with the locale aware code. Seems to me that this is a\n> better way to keep backwards compatibility, so\n> \n> lto_char('1974/10/22'::date,'dd/month/yyyy','es_MX') \n> \n> would lead to '22/octubre /1974', filled to 10, since 'septiembre'\n> which has 10 chars is the longest month name in the 'es' localization,\n\n If you use prefix you can write to docs: \"LC prefix truncate output\n like FM\". I mean we needn't support crazy Oracle feature with months \n align for locale version of to_char(). If you will use prefix you \n can be always sure how is input. I think we can keep this Oracle's\n feature only for *non-locale version* and for compatibility only.\n I have no idea why support month align in new features where we not\n limited with some compatibility...\n\n I still think that 'LC' prefix is good and simply implement-able \n idea that allows mix old features with new features, for example:\n\n to_char(now(), '\"DE month:\" LCMonth \"and Oracle month:\" Month', de_DE);\n \n BTW the [,locale] feature require improve (very careful) pg_locale \n routines because it allows to work with different locales setting \n than use actual DB setting.\n \n If user doesn't set locales directly by [,locale] must be possible\n for 'LC' prefix use actual locales database setting. 
For example I\n don't want in my applications use harcoded locales setting in queries,\n but I want maintain it by DB setting.\n\n> PS. By the way, reviewing the code I found a bug that causes the\n> following test:\n> \n> to_date(to_char('1979/22/10'::date,'dd/fmrm/yyyy'),'dd/fmrm/yyyy') = '1979/22/10'::date;\n> \n> give an error instead of a true value. Attached is the patch to fix it\n> against the 7.2b3 tree.\n\n Thanks. I will repost it to paches list.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 10 Dec 2001 11:09:04 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Please add this _simple_ patch to actual code. It _fix_ my bug\n in formatting.c.\n\n Thanks to Manuel Sugawara <masm@fciencias.unam.mx>.\n\n Karel\n\n \n\nOn Sat, Dec 08, 2001 at 07:41:01PM -0600, Manuel Sugawara wrote:\n\n> PS. By the way, reviewing the code I found a bug that causes the\n> following test:\n> \n> to_date(to_char('1979/22/10'::date,'dd/fmrm/yyyy'),'dd/fmrm/yyyy') = '1979/22/10'::date;\n> \n> give an error instead of a true value. Attached is the patch to fix it\n> against the 7.2b3 tree.\n \n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz", "msg_date": "Mon, 10 Dec 2001 11:18:30 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date formatting and tab-complete patch" }, { "msg_contents": "\nCan I get some feedback from someone else before applying this? Pretty\nlate in beta.\n\n---------------------------------------------------------------------------\n\n> \n> \n> \n> Please add this _simple_ patch to actual code. 
It _fix_ my bug\n> in formatting.c.\n> \n> Thanks to Manuel Sugawara <masm@fciencias.unam.mx>.\n> \n> Karel\n> \n> \n> \n> On Sat, Dec 08, 2001 at 07:41:01PM -0600, Manuel Sugawara wrote:\n> \n> > PS. By the way, reviewing the code I found a bug that causes the\n> > following test:\n> > \n> > to_date(to_char('1979/22/10'::date,'dd/fmrm/yyyy'),'dd/fmrm/yyyy') = '1979/22/10'::date;\n> > \n> > give an error instead of a true value. Attached is the patch to fix it\n> > against the 7.2b3 tree.\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Dec 2001 06:23:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date formatting and tab-complete patch" }, { "msg_contents": "On Mon, Dec 10, 2001 at 06:23:04AM -0500, Bruce Momjian wrote:\n> \n> Can I get some feedback from someone else before applying this? 
Pretty\n> late in beta.\n> \n\n Bruce, see the patch.\n\n it fix \n \n \"if (type == XXXX || YYYY)\" \n \n to \n \n \"if (type == XXXX || type == YYYY)\"\n\n :-)\n \n Karel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 10 Dec 2001 13:14:51 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date formatting and tab-complete patch" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> Please add this _simple_ patch to actual code. It _fix_ my bug\n> in formatting.c.\n\nApplied, thanks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Dec 2001 10:34:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date formatting and tab-complete patch " }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n\n[...]\n> > I was thinking it and I do not like much the idea to add a new\n> > prefix. I would like more a new set of functions:\n> > \n> > * lto_char (date, format [,locale])\n> > * lto_date (date,format [,locale])\n> > * lto_timestamp (timestamp,format [,locale])\n> \n> Yes, we can add new function for [,locale], but internaly it\n> _must_ be same code as current to_ stuff. I think new prefix is no\n> a problem because it's relevant for Months/Days only. \n\nYes, of course, it _must_ be, almost, the same code. The idea behind a\nnew set of functions is to have functions with the same functionality\nthat the currents but that are locale aware, although personally I\nwould prefer that the present ones were it, you do insist that\n\n> to_char(now(), 'Month') _must_forever_output_: 'December '\n\nWith a new set of functions I can use DROP/CREATE to get what I\nwant. On the other hand, personally I find the filling with spaces\nuseless, but if somebody in the world is using it, he/she probably\nwants to also use it in locale aware code. 
I do not believe that the\nbest way to go is to disappear a ``fature'' just because you add a new\none (locale awareness), think about backwards compatibility.\n\n> BTW the [,locale] feature require improve (very careful) pg_locale \n> routines because it allows to work with different locales setting \n> than use actual DB setting.\n> \n> If user doesn't set locales directly by [,locale] must be possible\n> for 'LC' prefix use actual locales database setting. For example I\n> don't want in my applications use harcoded locales setting in queries,\n> but I want maintain it by DB setting.\n\nYeah, That's the idea.\n\nComments?\n\nRegards,\nManuel.\n", "msg_date": "10 Dec 2001 11:01:42 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "> On Mon, Dec 10, 2001 at 06:23:04AM -0500, Bruce Momjian wrote:\n> > \n> > Can I get some feedback from someone else before applying this? Pretty\n> > late in beta.\n> > \n> \n> Bruce, see the patch.\n> \n> it fix \n> \n> \"if (type == XXXX || YYYY)\" \n> \n> to \n> \n> \"if (type == XXXX || type == YYYY)\"\n> \n> :-)\n\nI was just being extra cautious because it is so near final and I may\nnot be around to fix any breakage (in Japan). Tom applied it. Thanks\nTom.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Dec 2001 05:33:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date formatting and tab-complete patch" }, { "msg_contents": "On Tue, Dec 11, 2001 at 05:33:23AM -0500, Bruce Momjian wrote:\n\n> I was just being extra cautious because it is so near final and I may\n\n It's very right that you rabit code-watch-dog. 
We have like you \n for this :-))\n\n I have a great regard for PostgreSQL project management and good\n stability of stable release. I hate something like Linus's: \"it is\n possible compile, it must be OK. Well, make release from it\". I hate\n non-tested software and shy at regression tests. Products like 2.4.15\n linux kernel is nice exhibition of bad project managment.\n\n> not be around to fix any breakage (in Japan). Tom applied it. Thanks\n> Tom.\n\n Yes, thanks.\n\n Karel \n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 11 Dec 2001 11:57:53 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date formatting and tab-complete patch" }, { "msg_contents": "\nOK, I have read this thread and am unsure how to proceed. Is this\nsomething to apply or does it require more work. The full thread is at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\nI will apply the tab completion fix. That is a separate issue.\n\nKarel Zak wrote:\n> On Sat, Dec 08, 2001 at 07:41:01PM -0600, Manuel Sugawara wrote:\n> > Karel Zak <zakkr@zf.jcu.cz> writes:\n> > \n> > [...]\n> > > and this output must be independent on locales setting. It's \n> > > important, because a lot of application depend of current output\n> > > format. If someone wants use locale depend output must be possible\n> > > set it by some format suffix, for example:\n> > > \n> > > to_char(now(), 'LCMonth')\n> > \n> > I was thinking it and I do not like much the idea to add a new\n> > prefix. I would like more a new set of functions:\n> > \n> > * lto_char (date, format [,locale])\n> > * lto_date (date,format [,locale])\n> > * lto_timestamp (timestamp,format [,locale])\n> \n> Yes, we can add new function for [,locale], but internaly it\n> _must_ be same code as current to_ stuff. 
I think new prefix is no\n> a problem because it's relevant for Months/Days only. \n> \n> > Also if we are filling with spaces in the present code we would have\n> > to also do it with the locale aware code. Seems to me that this is a\n> > better way to keep backwards compatibility, so\n> > \n> > lto_char('1974/10/22'::date,'dd/month/yyyy','es_MX') \n> > \n> > would lead to '22/octubre /1974', filled to 10, since 'septiembre'\n> > which has 10 chars is the longest month name in the 'es' localization,\n> \n> If you use prefix you can write to docs: \"LC prefix truncate output\n> like FM\". I mean we needn't support crazy Oracle feature with months \n> align for locale version of to_char(). If you will use prefix you \n> can be always sure how is input. I think we can keep this Oracle's\n> feature only for *non-locale version* and for compatibility only.\n> I have no idea why support month align in new features where we not\n> limited with some compatibility...\n> \n> I still think that 'LC' prefix is good and simply implement-able \n> idea that allows mix old features with new features, for example:\n> \n> to_char(now(), '\"DE month:\" LCMonth \"and Oracle month:\" Month', de_DE);\n> \n> BTW the [,locale] feature require improve (very careful) pg_locale \n> routines because it allows to work with different locales setting \n> than use actual DB setting.\n> \n> If user doesn't set locales directly by [,locale] must be possible\n> for 'LC' prefix use actual locales database setting. For example I\n> don't want in my applications use harcoded locales setting in queries,\n> but I want maintain it by DB setting.\n> \n> > PS. By the way, reviewing the code I found a bug that causes the\n> > following test:\n> > \n> > to_date(to_char('1979/22/10'::date,'dd/fmrm/yyyy'),'dd/fmrm/yyyy') = '1979/22/10'::date;\n> > \n> > give an error instead of a true value. Attached is the patch to fix it\n> > against the 7.2b3 tree.\n> \n> Thanks. 
I will repost it to paches list.\n> \n> Karel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Mar 2002 23:44:48 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Karel Zak wrote:\n> On Wed, Dec 05, 2001 at 10:15:37AM -0600, Manuel Sugawara wrote:\n> > Karel Zak <zakkr@zf.jcu.cz> writes:\n> > \n> > > We don't directly call locale stuff in PostgreSQL code. It's\n> > > encapsulated in PGLC_ (pg_locale.c) API and all is cached, for \n> > > this we use localeconv(3) that returns all in one struct.\n> > > \n> > > (What portability of nl_langinfo()? The localeconv() is ANSI C and\n> > > POSIX functions.)\n> > \n> > localenconv is posix and ANSI C but it doesn't provide such\n> > functionality (localized month and day names). nl_langinfo conforms to\n> > \"The Single UNIX? Specification, Version 2\", according to it's manual\n> > page. The portability isn't an issue as long as you provide means to\n> > test and avoid it's use in systems that doesn't provide it. I know\n> > that, at least, Linux and Solaris does, but FreeBSD does not.\n> \n> But we want FreeBSD and others systems too.... \n\nYes, this was one of the issues. Can we support both localization\nlibraries?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Mar 2002 00:10:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "On Wed, Mar 06, 2002 at 11:44:48PM -0500, Bruce Momjian wrote:\n> \n> OK, I have read this thread and am unsure how to proceed. Is this\n> something to apply or does it require more work. The full thread is at:\n> \n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n> \n> I will apply the tab completion fix. That is a separate issue.\n\n It's in my TODO.\n\n Karel\n\n> Karel Zak wrote:\n> > On Sat, Dec 08, 2001 at 07:41:01PM -0600, Manuel Sugawara wrote:\n> > > Karel Zak <zakkr@zf.jcu.cz> writes:\n> > > \n> > > [...]\n> > > > and this output must be independent on locales setting. It's \n> > > > important, because a lot of application depend of current output\n> > > > format. If someone wants use locale depend output must be possible\n> > > > set it by some format suffix, for example:\n> > > > \n> > > > to_char(now(), 'LCMonth')\n> > > \n> > > I was thinking it and I do not like much the idea to add a new\n> > > prefix. I would like more a new set of functions:\n> > > \n> > > * lto_char (date, format [,locale])\n> > > * lto_date (date,format [,locale])\n> > > * lto_timestamp (timestamp,format [,locale])\n> > \n> > Yes, we can add new function for [,locale], but internaly it\n> > _must_ be same code as current to_ stuff. I think new prefix is no\n> > a problem because it's relevant for Months/Days only. \n> > \n> > > Also if we are filling with spaces in the present code we would have\n> > > to also do it with the locale aware code. 
Seems to me that this is a\n> > > better way to keep backwards compatibility, so\n> > > \n> > > lto_char('1974/10/22'::date,'dd/month/yyyy','es_MX') \n> > > \n> > > would lead to '22/octubre /1974', filled to 10, since 'septiembre'\n> > > which has 10 chars is the longest month name in the 'es' localization,\n> > \n> > If you use prefix you can write to docs: \"LC prefix truncate output\n> > like FM\". I mean we needn't support crazy Oracle feature with months \n> > align for locale version of to_char(). If you will use prefix you \n> > can be always sure how is input. I think we can keep this Oracle's\n> > feature only for *non-locale version* and for compatibility only.\n> > I have no idea why support month align in new features where we not\n> > limited with some compatibility...\n> > \n> > I still think that 'LC' prefix is good and simply implement-able \n> > idea that allows mix old features with new features, for example:\n> > \n> > to_char(now(), '\"DE month:\" LCMonth \"and Oracle month:\" Month', de_DE);\n> > \n> > BTW the [,locale] feature require improve (very careful) pg_locale \n> > routines because it allows to work with different locales setting \n> > than use actual DB setting.\n> > \n> > If user doesn't set locales directly by [,locale] must be possible\n> > for 'LC' prefix use actual locales database setting. For example I\n> > don't want in my applications use harcoded locales setting in queries,\n> > but I want maintain it by DB setting.\n> > \n> > > PS. By the way, reviewing the code I found a bug that causes the\n> > > following test:\n> > > \n> > > to_date(to_char('1979/22/10'::date,'dd/fmrm/yyyy'),'dd/fmrm/yyyy') = '1979/22/10'::date;\n> > > \n> > > give an error instead of a true value. Attached is the patch to fix it\n> > > against the 7.2b3 tree.\n> > \n> > Thanks. 
I will repost it to paches list.\n> > \n> > Karel\n> > \n> > -- \n> > Karel Zak <zakkr@zf.jcu.cz>\n> > http://home.zf.jcu.cz/~zakkr/\n> > \n> > C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 7 Mar 2002 09:22:26 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> OK, I have read this thread and am unsure how to proceed. Is this\n> something to apply or does it require more work. The full thread is\n> at:\n\nThe problem is that we did not reach any agreement :-(. What I would\nlike, personally, is make the present more locale aware, so we can\nreuse code already done with minimal impact (think nls). However Karel\ninsists that\n\nselect to_char(cast('2001/01/01' as timestamp),'fmDDMonYYYY');\n\nshould always lead to '1Jan2001' no matter which localization is in\nuse and I didn't understand the reasons for this. Any way is easy to\nimplement, however I would like to see agreement on which is the best\nway to go.\n\n> I will apply the tab completion fix. 
That is a separate issue.\n\nWhen you are there, can you fix the `full' part of the vacuum that\ndoes not complete?\n\nRegards,\nManuel.\n", "msg_date": "07 Mar 2002 11:30:47 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Yes, this was one of the issues. Can we support both localization\n> libraries?\n\nYes.\n\nRegards,\nManuel.\n", "msg_date": "07 Mar 2002 11:34:28 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: date formatting and tab-complete patch" }, { "msg_contents": "> > I will apply the tab completion fix. That is a separate issue.\n> \n> When you are there, can you fix the `full' part of the vacuum that\n> does not complete?\n\nDone and applied. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? psql\nIndex: tab-complete.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/psql/tab-complete.c,v\nretrieving revision 1.44\ndiff -c -r1.44 tab-complete.c\n*** tab-complete.c\t7 Mar 2002 04:45:53 -0000\t1.44\n--- tab-complete.c\t7 Mar 2002 20:46:51 -0000\n***************\n*** 732,741 ****\n \n /* VACUUM */\n \telse if (strcasecmp(prev_wd, \"VACUUM\") == 0)\n! \t\tCOMPLETE_WITH_QUERY(\"SELECT relname FROM pg_class WHERE relkind='r' and substr(relname,1,%d)='%s' UNION SELECT 'ANALYZE'::text\");\n! \telse if (strcasecmp(prev2_wd, \"VACUUM\") == 0 && strcasecmp(prev_wd, \"ANALYZE\") == 0)\n \t\tCOMPLETE_WITH_QUERY(Query_for_list_of_tables);\n- \n \n /* ... FROM ... */\n \telse if (strcasecmp(prev_wd, \"FROM\") == 0)\n--- 732,740 ----\n \n /* VACUUM */\n \telse if (strcasecmp(prev_wd, \"VACUUM\") == 0)\n! 
\t\tCOMPLETE_WITH_QUERY(\"SELECT relname FROM pg_class WHERE relkind='r' and substr(relname,1,%d)='%s' UNION SELECT 'FULL'::text UNION SELECT 'ANALYZE'::text\");\n! \telse if (strcasecmp(prev2_wd, \"VACUUM\") == 0 && (strcasecmp(prev_wd, \"FULL\") == 0 || strcasecmp(prev_wd, \"ANALYZE\") == 0))\n \t\tCOMPLETE_WITH_QUERY(Query_for_list_of_tables);\n \n /* ... FROM ... */\n \telse if (strcasecmp(prev_wd, \"FROM\") == 0)", "msg_date": "Thu, 7 Mar 2002 15:48:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: date formatting and tab-complete patch" } ]
[ { "msg_contents": "\nHi All,\n\n\nSince I last posted to this list I have done some work\non a multi-threaded port of Postgres 7.0.2 that I have been kicking\naround for a while. There has been some mild interest\nin this in the past so I thought I might try and start a sourceforge\nproject with what I have so far.\n\n From past discussions, it is clear to me that a direct port\nof postgres which uses threads instead of processes is not a\ngood idea, how about an embedded version that uses threads.\nA multi-threaded postgres might be good for that.\nThe version I am working on is slower in terms of transaction\nthroughput than the current postgres but it uses less system\nresources and does not require shared memory.\n\nI know it is possible to embed the current postgres but I\nbelieve that is a single user system.\n\nComments?\n\n\nMyron Scott\nmkscott@sacadia.com\n\n", "msg_date": "Mon, 03 Dec 2001 15:20:44 -0800", "msg_from": "mkscott@sacadia.com", "msg_from_op": true, "msg_subject": "Using Threads (again)" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 03 December 2001 16:05\n> To: Hor�k Daniel\n> Cc: pgsql-hackers@postgresql.org; Peter Eisentraut\n> Subject: Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 \n> \n> \n> =?iso-8859-1?Q?Hor=E1k_Daniel?= <horak@sit.plzen-city.cz> writes:\n> >> When trying to start the postmaster on win98se with cygwin\n> >> I get told that the data directory must be 0700, but when I\n> >> try to chmod to\n> >> 700, it apparently succeds, but nothing permissions stay at \n> >> 755. I suspect\n> >> this to be because win98 has no real file protection (just \n> a read only\n> >> attribute)\n> \n> > It works on WinNT, Win2K, ... because full file security is \n> > implemented only in this systems. There could be a dirty hack that \n> > disables the check (for 0700 permissions on $DATADIR) in \n> > src/backend/postmaster/postmaster.c. I don't know if it is \n> possible \n> > to do it during runtime for only Win9x systems.\n> \n> Ugh...\n> \n> Unless someone can think of a reasonable runtime check to \n> distinguish win98 from newer systems, I think we have little \n> choice but to make the data directory permissions check be \n> #ifndef __CYGWIN__. I don't like this much, but (a) I don't \n> want to hold up 7.2 while we look for better ideas, and (b) \n> no one should consider a Windoze box secure anyway ;-).\n\nThis check actually caused me *much* grief when I was testing on Win2K/XP.\nIt required that the cygwin ntsec option is enabled which in my case caused\nme even more problems with my Cygwin installation. 
I vote for the #ifndef\n__CYGWIN__...\n\nRegards, Dave.\n", "msg_date": "Tue, 4 Dec 2001 08:31:48 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " }, { "msg_contents": "Dave Page <dpage@vale-housing.co.uk> writes:\n>> Unless someone can think of a reasonable runtime check to \n>> distinguish win98 from newer systems, I think we have little \n>> choice but to make the data directory permissions check be \n>> #ifndef __CYGWIN__. I don't like this much, but (a) I don't \n>> want to hold up 7.2 while we look for better ideas, and (b) \n>> no one should consider a Windoze box secure anyway ;-).\n\n> This check actually caused me *much* grief when I was testing on Win2K/XP.\n> It required that the cygwin ntsec option is enabled which in my case caused\n> me even more problems with my Cygwin installation. I vote for the #ifndef\n> __CYGWIN__...\n\nOh, so it's (in essence) an optional feature on Cygwin? And someone\nelse pointed out that it depends on the filesystem in use, too.\n\nOkay, I think the answer is clear: #ifndef __CYGWIN__ for 7.2. We can\nthink about nicer approaches for 7.3.\n\nI'll apply the change shortly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 09:54:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 04 December 2001 02:53\n> To: mlw\n> Cc: pgsql-hackers@postgreSQL.org\n> Subject: Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 \n> \n> \n> mlw <markw@mohawksoft.com> writes:\n> > I'll write and test something with cygwin this week if that would \n> > help. (If someone can get to it first it is something stupid like \n> > \"GetWindowsVersion()\" or something like that.\n> \n> Well, the non-stupid part is to know which return values \n> correspond to Windows versions that have proper file \n> permissions and which values to versions that don't. Given \n> that NT and the other versions are two separate code streams \n> (no?), I'm not sure that distinguishing this is trivial, and \n> even less sure that we should assume all future Windows \n> releases will have it. I'd be more comfortable with an autoconf-like\n> approach: actually probe the desired feature and see if it works.\n> \n> I was thinking this morning about trying to chmod the \n> directory and, if that doesn't report an error, assuming that \n> all is well. On Windows it'd presumably claim success \n> despite not being able to do what is asked for. But this \n> would definitely require testing.\n\nIt does (at least on my systems).\n\n/Dave\n", "msg_date": "Tue, 4 Dec 2001 08:35:45 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98 " }, { "msg_contents": "Dave Page wrote:\n> \n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: 04 December 2001 02:53\n> > To: mlw\n> > Cc: pgsql-hackers@postgreSQL.org\n> > Subject: Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98\n> >\n> >\n> > mlw <markw@mohawksoft.com> writes:\n> > > I'll write and test something with cygwin this week if that would\n> > > help. 
(If someone can get to it first it is something stupid like\n> > > \"GetWindowsVersion()\" or something like that.\n> >\n> > Well, the non-stupid part is to know which return values\n> > correspond to Windows versions that have proper file\n> > permissions and which values to versions that don't.\n\nIIRC, it depends also on filesystem, i.e. FAT32 on NT/2000 dos still\nnot have proper permissions.\n\n> > Given\n> > that NT and the other versions are two separate code streams\n> > (no?), I'm not sure that distinguishing this is trivial, and\n> > even less sure that we should assume all future Windows\n> > releases will have it. I'd be more comfortable with an autoconf-like\n> > approach: actually probe the desired feature and see if it works.\n> >\n> > I was thinking this morning about trying to chmod the\n> > directory and, if that doesn't report an error, assuming that\n> > all is well. On Windows it'd presumably claim success\n> > despite not being able to do what is asked for. But this\n> > would definitely require testing.\n> \n> It does (at least on my systems).\n\nIt does what ? Report an error, claim success or need testing ?\n\n--------------\nHannu\n", "msg_date": "Tue, 04 Dec 2001 11:06:15 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98" } ]
[ { "msg_contents": "Hi.\n\nI have compiled PostgreSQL 7.1.3 on Debian GNU/Linux. The\n\"configure\" run used the 'max-backends', 'enable-locale' and\n'enable-syslog' options.\nWhen I run:\n\tpg_dump -O -x -Fc <database> > backupdatabase.sql\nthe program gives me a \"warning\" like this:\n\nArchiver: WARNING - requested compression not available in this\ninstallation - archive will be uncompressed\n\nWhat do I have to do to obtain the necessary compression for the \"-Fc\"\nflag so that it will work afterwards with pg_restore?\n\nThank you very much.\n\nHave a nice day ;-)\nTooManySecrets\n\n-- \nManuel Trujillo             manueltrujillo@dorna.es\nTechnical Engineer          http://www.motograndprix.com\nDorna Sports S.L.           +34 93 4702864\n", "msg_date": "Tue, 4 Dec 2001 09:45:13 +0100", "msg_from": "Manuel Trujillo <manueltrujillo@dorna.es>", "msg_from_op": true, "msg_subject": "compression -Fx \"problem\"" }, { "msg_contents": "Manuel Trujillo <manueltrujillo@dorna.es> writes:\n> The program gives me a \"warning\" like this:\n> Archiver: WARNING - requested compression not available in this\n> installation - archive will be uncompressed\n> What do I have to do to obtain the necessary compression for the \"-Fc\"\n> flag so that it will work afterwards with pg_restore?\n\nYou need to install zlib, then reconfigure and rebuild Postgres.\nPay attention to whether the configure run finds zlib...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 09:56:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compression -Fx \"problem\" " }, { "msg_contents": "Tom Lane writes:\n\n> You need to install zlib, then reconfigure and rebuild Postgres.\n> Pay attention to whether the configure run finds zlib...\n\nThis has got to stop. We've seen \"my cursor keys don't work in psql\", now\nwe see \"compression doesn't work\" every so often. Somehow we have to get\ndeterministic output from configure without \"paying attention\". 
Either\nrequire these libraries by default (my preference) or not, but not \"take\nwhat's there\".\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 4 Dec 2001 22:21:46 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] compression -Fx \"problem\" " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> This has got to stop. We've seen \"my cursor keys don't work in psql\", now\n> we see \"compression doesn't work\" every so often. Somehow we have to get\n> deterministic output from configure without \"paying attention\". Either\n> require these libraries by default (my preference) or not, but not \"take\n> what's there\".\n\nSo you're thinking \"error out unless user said --without-zlib or\n--without-readline\"?\n\nI could live with that, assuming that the error message mentioned that\nswitch as the way to build without the library.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 16:26:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] compression -Fx \"problem\" " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > This has got to stop. We've seen \"my cursor keys don't work in psql\", now\n> > we see \"compression doesn't work\" every so often. Somehow we have to get\n> > deterministic output from configure without \"paying attention\". Either\n> > require these libraries by default (my preference) or not, but not \"take\n> > what's there\".\n> \n> So you're thinking \"error out unless user said --without-zlib or\n> --without-readline\"?\n\nOr --my_cursor_keys_will_never_work_in_psql. :-)\n\nNot sure if they are going to make the connection between\n--without-readline and psql cursor keys.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Dec 2001 16:43:20 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] compression -Fx \"problem\"" }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > This has got to stop. We've seen \"my cursor keys don't work in psql\", now\n> > we see \"compression doesn't work\" every so often. Somehow we have to get\n> > deterministic output from configure without \"paying attention\". Either\n> > require these libraries by default (my preference) or not, but not \"take\n> > what's there\".\n> \n> So you're thinking \"error out unless user said --without-zlib or\n> --without-readline\"?\n> \n> I could live with that, assuming that the error message mentioned that\n> switch as the way to build without the library.\n\nWhat I have seen some packages do is to report unusual configure\nfindings just a configure exits. Something like:\n\n\tYour psql cursor keys will not work because readline wasn't found\n\nor something like that for zlib.\n\nAdded to TODO:\n\n\t* Report failure to find readline or zlib at end of configure run\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Dec 2001 14:47:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] compression -Fx \"problem\"" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Added to TODO:\n> \t* Report failure to find readline or zlib at end of configure run\n\nPeter will certainly not consider that an acceptable answer, since it\nhelps not at all for non-interactive builds.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Dec 2001 15:05:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] compression -Fx \"problem\" " }, { "msg_contents": "On Sat, 2001-12-29 at 09:05, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Added to TODO:\n> > \t* Report failure to find readline or zlib at end of configure run\n> \n> Peter will certainly not consider that an acceptable answer, since it\n> helps not at all for non-interactive builds.\n> \n\nDoing that sort of thing is a huge improvement though. Hopefully the\npeople who are doing auto builds have done them manually a few times\nfirst!\n\nTo take it a bit further though, you could perhaps have it refuse to\ncontinue beyond that without an explicit configure option. That would\nthen stop the auto build as well.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n", "msg_date": "29 Dec 2001 10:50:55 +1300", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] compression -Fx \"problem\"" } ]
[ { "msg_contents": "Hi!\nBinary files of Postgres 7.1.3 for QNX6 (Neutrino) are available at http://qnx.org.ru\n \nAndy Latin\n", "msg_date": "Tue, 4 Dec 2001 11:51:52 +0300 (MSK)", "msg_from": "Andy Latin <303401@rambler.ru>", "msg_from_op": true, "msg_subject": "Postgres 7.1.3 for QNX6" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Hannu Krosing [mailto:hannu@tm.ee] \n> Sent: 04 December 2001 09:06\n> To: Dave Page\n> Cc: 'Tom Lane'; mlw; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] FW: [CYGWIN] 7.2b3 postmaster doesn't \n> start on Win98\n> \n> \n> Dave Page wrote:\n> > \n> > > -----Original Message-----\n> > > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > > Sent: 04 December 2001 02:53\n> > > To: mlw\n> > > Cc: pgsql-hackers@postgreSQL.org\n> > > Subject: Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win98\n> > >\n> > >\n> > > mlw <markw@mohawksoft.com> writes:\n> > > > I'll write and test something with cygwin this week if \n> that would \n> > > > help. (If someone can get to it first it is something \n> stupid like \n> > > > \"GetWindowsVersion()\" or something like that.\n> > >\n> > > Well, the non-stupid part is to know which return values \n> correspond \n> > > to Windows versions that have proper file permissions and which \n> > > values to versions that don't.\n> \n> IIRC, it depends also on filesystem, i.e. FAT32 on NT/2000 \n> dos still not have proper permissions.\n> \n> > > Given\n> > > that NT and the other versions are two separate code \n> streams (no?), \n> > > I'm not sure that distinguishing this is trivial, and \n> even less sure \n> > > that we should assume all future Windows releases will \n> have it. I'd \n> > > be more comfortable with an autoconf-like\n> > > approach: actually probe the desired feature and see if it works.\n> > >\n> > > I was thinking this morning about trying to chmod the \n> directory and, \n> > > if that doesn't report an error, assuming that all is well. On \n> > > Windows it'd presumably claim success despite not being \n> able to do \n> > > what is asked for. But this would definitely require testing.\n> > \n> > It does (at least on my systems).\n> \n> It does what ? Report an error, claim success or need testing ?\n\nAppears to succeed. 
I haven't tested any return values, however the chmod\ncertainly failed without giving any error message.\n\n/Dave\n", "msg_date": "Tue, 4 Dec 2001 09:19:29 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: FW: [CYGWIN] 7.2b3 postmaster doesn't start on Win9" } ]
[ { "msg_contents": "backend/port/dynaloader/sunos4.h has been changed between 7.1 and 7.2,\nthat causes compile error on SunOS4. \n\nrevision 1.8\ndate: 2001/05/14 21:45:53; author: petere; state: Exp; lines: +2 -2\nUse RTLD_GLOBAL flag for dlopen-style dynamic loaders.\n\nI don't know what this change is intended for, since RTLD_GLOBAL\napparently does not exist on SunOS4. I'll backout the change.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 04 Dec 2001 22:18:45 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "dynaloader/sunos4.h" } ]
[ { "msg_contents": "I noticed an incorrect example in doc/src/sgml/func.sgml...\n\nbrent=# SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');\n date_part \n-----------\n 28\n(1 row)\n\nThe documentation says this should return 28.5. Digging a bit, I\nnoticed the following (discrepancy?). Is this desired behavior?\n\nbrent=# select \"time\"('12:00:12.5');\n time \n-------------\n 12:00:12.50\n(1 row)\n\nbrent=# select '12:00:12.5'::time;\n time \n----------\n 12:00:12\n(1 row)\n\nIMO, one of these needs to be fixed before RC1 is rolled.\n\n\nOn a similar note, it would be neat if there was an sgml tag like\n <examplequery>\n SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');\n </examplequery>\nthat would dynamically execute the query and insert the result in\nthe sgml file... This would ensure the docs always agree with current\nbehavior ;-)\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Tue, 4 Dec 2001 08:27:12 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "text -> time cast problem" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> I noticed an incorrect example in doc/src/sgml/func.sgml...\n> brent=# SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');\n> date_part \n> -----------\n> 28\n> (1 row)\n\n> The documentation says this should return 28.5.\n\nHistorically we've made EXTRACT(SECOND) return integral seconds, with\nMILLISECOND/MICROSECOND field names for the fractional seconds. So the\ndocs are incorrect with respect to the actual code behavior.\n\nBut ...\n\nThe SQL92 spec appears to intend that EXTRACT(SECOND) should return\nseconds *and* fractional seconds. 
In 6.6 syntax rule 4,\n\n 4) If <extract expression> is specified, then\n\n Case:\n\n a) If <datetime field> does not specify SECOND, then the data\n type of the result is exact numeric with implementation-\n defined precision and scale 0.\n\n b) Otherwise, the data type of the result is exact numeric\n with implementation-defined precision and scale. The\n implementation-defined scale shall not be less than the spec-\n ified or implied <time fractional seconds precision> or <in-\n terval fractional seconds precision>, as appropriate, of the\n SECOND <datetime field> of the <extract source>.\n\nIt looks to me like 4b *requires* the fractional part of the seconds\nfield to be returned. (Of course, we're blithely ignoring the aspect\nof this that requires an exact numeric result type, since our version\nof EXTRACT returns float8, but let's not worry about that fine point\nat the moment.)\n\nDon't think I want to change this behavior for 7.2, but it ought to be\non the TODO list to fix it for 7.3.\n\n\n> Digging a bit, I\n> noticed the following (discrepancy?). Is this desired behavior?\n\n> brent=# select \"time\"('12:00:12.5');\n> time \n> -------------\n> 12:00:12.50\n> (1 row)\n\n> brent=# select '12:00:12.5'::time;\n> time \n> ----------\n> 12:00:12\n> (1 row)\n\n> IMO, one of these needs to be fixed before RC1 is rolled.\n\nI'm not convinced that's broken. You're missing an important point\n(forgivable, because Thomas hasn't yet committed any documentation\nabout it): TIME now implies a precision specification, and the default\nis TIME(0), ie no fractional digits. 
Observe:\n\nregression=# select '12:00:12.6'::time(0);\n time\n----------\n 12:00:13\n(1 row)\n\nregression=# select '12:00:12.6'::time(2);\n time\n-------------\n 12:00:12.60\n(1 row)\n\nIn the pseudo-function-call case, there is no implicit precision\nspecification and thus the value does not get rounded.\n\nBTW, this means that\n\nSELECT EXTRACT(SECOND FROM TIME '17:12:28.5');\n\n*should* return 28, because the TIME literal is implicitly TIME(0).\nBut if it were written TIME(1) '17:12:28.5' or more precision, then\nI believe SQL92 requires the EXTRACT result to include the fraction.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 11:38:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem " }, { "msg_contents": "Tom Lane writes:\n\n> Brent Verner <brent@rcfile.org> writes:\n> > I noticed an incorrect example in doc/src/sgml/func.sgml...\n> > brent=# SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');\n> > date_part\n> > -----------\n> > 28\n> > (1 row)\n>\n> > The documentation says this should return 28.5.\n>\n> Historically we've made EXTRACT(SECOND) return integral seconds, with\n> MILLISECOND/MICROSECOND field names for the fractional seconds. So the\n> docs are incorrect with respect to the actual code behavior.\n\nNope, the docs represent the behavior of the code at the time the docs\nwere written. The code is now in error with respect to the documented\nbehaviour. A quick check shows that PostgreSQL 7.0.2 agrees with\nincluding the fractional part. Probably this was broken as part of the\ntime/timestamp precision changes. Definitely looks like a show-stopper to\nme.\n\n> BTW, this means that\n>\n> SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');\n>\n> *should* return 28, because the TIME literal is implicitly TIME(0).\n> But if it were written TIME(1) '17:12:28.5' or more precision, then\n\nThat appears to be what it does, but it's not correct. 
I point you to\nSQL92:\n\n 16)The data type of a <time literal> that does not specify <time\n zone interval> is TIME(P), where P is the number of digits in\n <seconds fraction>, if specified, and 0 otherwise. The data\n type of a <time literal> that specifies <time zone interval>\n is TIME(P) WITH TIME ZONE, where P is the number of digits in\n <seconds fraction>, if specified, and 0 otherwise.\n\nIn this \"time literal\" context, TIME does not take a precision value at\nall. The new code certainly has this wrong.\n\nFor details, I refer you to my Oct 5 message \"Unhappiness with forced\nprecision conversion for timestamp\", where we already discussed\nessentially the same issue, but apparently we never did anything about it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 4 Dec 2001 23:00:54 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Nope, the docs represent the behavior of the code at the time the docs\n> were written. The code is now in error with respect to the documented\n> behaviour. A quick check shows that PostgreSQL 7.0.2 agrees with\n> including the fractional part.\n\n[ checks ... ] As does 7.1. You're right, that is how it used to behave.\n\n> For details, I refer you to my Oct 5 message \"Unhappiness with forced\n> precision conversion for timestamp\", where we already discussed\n> essentially the same issue, but apparently we never did anything about it.\n\nI think the rest of us were waiting on Lockhart to opine about it ...\nnot to mention do something about it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 17:30:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> That appears to be what it does, but it's not correct. 
I point you to\n> SQL92:\n\n> 16)The data type of a <time literal> that does not specify <time\n> zone interval> is TIME(P), where P is the number of digits in\n> <seconds fraction>, if specified, and 0 otherwise. The data\n> type of a <time literal> that specifies <time zone interval>\n> is TIME(P) WITH TIME ZONE, where P is the number of digits in\n> <seconds fraction>, if specified, and 0 otherwise.\n\n> In this \"time literal\" context, TIME does not take a precision value at\n> all. The new code certainly has this wrong.\n\nI believe it is a reasonable extension for us to accept\n\n\t\ttime(2) '17:12:28.123'\n\nas producing '17:12:28.12'. This accords with our general extension to\naccept <any-type-name> <string-literal> as a typed constant, whereas I\nbelieve that SQL92 only envisions certain specific type names being used\nin this way.\n\nBut you are definitely right that\n\n\t\ttime '17:12:28.123'\n\nshould not strip the fractional digits. From this it is a small step\nto asserting that\n\n\t\t'17:12:28.123'::time\n\nshouldn't either; in general we'd like TYPE 'LIT' and 'LIT'::TYPE to\nproduce the same answers.\n\n> For details, I refer you to my Oct 5 message \"Unhappiness with forced\n> precision conversion for timestamp\", where we already discussed\n> essentially the same issue, but apparently we never did anything about it.\n\nI think you have put your finger on the heart of the problem. Some\nfurther research shows that it's not EXTRACT(SECOND) that is refusing\nto produce a fractional part; the problem is with the time literal.\n\nAs an experiment, I made the attached patch to gram.y, which implements\nthe change I originally proposed in the older thread: time/timestamp\ntype names that don't explicitly specify a precision should get typmod\n-1, which will mean no coercion to a specific precision. 
This does not\nfollow SQL92's notion of having specific default precisions for these\ntypes, but it does agree with our current handling of NUMERIC (no forced\ndefault precision there either). I make the following observations:\n\n1. All the regression tests still pass.\n\n2. The case I was unhappy about in October works nicely now:\n\nregression=# select '2001-10-04 13:52:42.845985-04'::timestamp;\n timestamptz\n-------------------------------\n 2001-10-04 13:52:42.845985-04\n(1 row)\n\n3. The cases Brent is unhappy about all pass:\n\nregression=# SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');\n date_part\n-----------\n 28.5\n(1 row)\n\nregression=# select \"time\"('12:00:12.5');\n time\n-------------\n 12:00:12.50\n(1 row)\n\nregression=# select '12:00:12.5'::time;\n time\n-------------\n 12:00:12.50\n(1 row)\n\n\nThis needs further thought and testing before I'd dare call it a\nsolution, but it does seem to suggest the direction we should pursue.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/parser/gram.y.orig\tThu Nov 15 23:08:33 2001\n--- src/backend/parser/gram.y\tTue Dec 4 17:52:10 2001\n***************\n*** 4058,4064 ****\n \t\t\t\t{\n \t\t\t\t\t$$ = $1;\n \t\t\t\t\tif ($2 != -1)\n! \t\t\t\t\t\t$$->typmod = ((($2 & 0x7FFF) << 16) | 0xFFFF);\n \t\t\t\t}\n \t\t| ConstInterval '(' Iconst ')' opt_interval\n \t\t\t\t{\n--- 4058,4064 ----\n \t\t\t\t{\n \t\t\t\t\t$$ = $1;\n \t\t\t\t\tif ($2 != -1)\n! \t\t\t\t\t\t$$->typmod = (($2 << 16) | 0xFFFF);\n \t\t\t\t}\n \t\t| ConstInterval '(' Iconst ')' opt_interval\n \t\t\t\t{\n***************\n*** 4328,4337 ****\n \t\t\t\t\t * - thomas 2001-09-06\n \t\t\t\t\t */\n \t\t\t\t\t$$->timezone = $2;\n! \t\t\t\t\t/* SQL99 specified a default precision of six.\n! \t\t\t\t\t * - thomas 2001-09-30\n! \t\t\t\t\t */\n! \t\t\t\t\t$$->typmod = 6;\n \t\t\t\t}\n \t\t| TIME '(' Iconst ')' opt_timezone\n \t\t\t\t{\n--- 4328,4334 ----\n \t\t\t\t\t * - thomas 2001-09-06\n \t\t\t\t\t */\n \t\t\t\t\t$$->timezone = $2;\n! 
\t\t\t\t\t$$->typmod = -1;\n \t\t\t\t}\n \t\t| TIME '(' Iconst ')' opt_timezone\n \t\t\t\t{\n***************\n*** 4352,4361 ****\n \t\t\t\t\t\t$$->name = xlateSqlType(\"timetz\");\n \t\t\t\t\telse\n \t\t\t\t\t\t$$->name = xlateSqlType(\"time\");\n! \t\t\t\t\t/* SQL99 specified a default precision of zero.\n! \t\t\t\t\t * - thomas 2001-09-30\n! \t\t\t\t\t */\n! \t\t\t\t\t$$->typmod = 0;\n \t\t\t\t}\n \t\t;\n \n--- 4349,4355 ----\n \t\t\t\t\t\t$$->name = xlateSqlType(\"timetz\");\n \t\t\t\t\telse\n \t\t\t\t\t\t$$->name = xlateSqlType(\"time\");\n! \t\t\t\t\t$$->typmod = -1;\n \t\t\t\t}\n \t\t;\n \n***************\n*** 5603,5609 ****\n \t\t\t\t\tn->val.val.str = $2;\n \t\t\t\t\t/* precision is not specified, but fields may be... */\n \t\t\t\t\tif ($3 != -1)\n! \t\t\t\t\t\tn->typename->typmod = ((($3 & 0x7FFF) << 16) | 0xFFFF);\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n \t\t| ConstInterval '(' Iconst ')' Sconst opt_interval\n--- 5597,5603 ----\n \t\t\t\t\tn->val.val.str = $2;\n \t\t\t\t\t/* precision is not specified, but fields may be... */\n \t\t\t\t\tif ($3 != -1)\n! \t\t\t\t\t\tn->typename->typmod = (($3 << 16) | 0xFFFF);\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n \t\t| ConstInterval '(' Iconst ')' Sconst opt_interval", "msg_date": "Tue, 04 Dec 2001 18:15:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem " }, { "msg_contents": "[2001-12-04 18:15] Tom Lane said:\n| Peter Eisentraut <peter_e@gmx.net> writes:\n| > That appears to be what it does, but it's not correct. I point you to\n| > SQL92:\n| \n| > 16)The data type of a <time literal> that does not specify <time\n| > zone interval> is TIME(P), where P is the number of digits in\n| > <seconds fraction>, if specified, and 0 otherwise. 
The data\n| > type of a <time literal> that specifies <time zone interval>\n| > is TIME(P) WITH TIME ZONE, where P is the number of digits in\n| > <seconds fraction>, if specified, and 0 otherwise.\n| \n| > In this \"time literal\" context, TIME does not take a precision value at\n| > all. The new code certainly has this wrong.\n\nThe current handling of <time literal> and <timestamp literal> appear \nto be correct from my reading of the sql standards.\n\n| But you are definitely right that\n| \n| \t\ttime '17:12:28.123'\n\nsql-99 seems to contradict this assertion.\npage 160 (Syntax Rules, 6.1 <data type>)\n \n 30) If <time precision> is not specified, then 0 (zero) is implicit.\n If <timestamp precision> is not specified, then 6 is implicit.\n\nmeaning (to me) that \"TIME\" should be equivalent \"TIME(0)\".\n\n [snip]\n\n| in general we'd like TYPE 'LIT' and 'LIT'::TYPE to\n| produce the same answers.\n\nI agree wholly with this statement.\n\n [snip]\n\nTo get back to my original problem report... \n\n I believe the proper solution would be to update the documentation \nto reflect the fact that \"TIME 'hh:mm:ss.ff'\" will correctly drop \nthe '.ff' seconds fraction.\n\n That said, how should \"time\"('hh:mm:ss.ff') behave? How could\n<time precision> be specified in this syntax? If there is no way\nto specify <time precision>, I believe we should drop the seconds\nfraction from <time string>. Is there any reason we couldn't drop\nthis typename-as-a-function-call syntax for types like \"time\" and\n\"timestamp\"?\n\ncheers.\n brent\n\np.s. sorry for not replying sooner...\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Thu, 6 Dec 2001 11:44:49 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: text -> time cast problem" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> [2001-12-04 18:15] Tom Lane said:\n> | But you are definitely right that\n> | \n> | \t\ttime '17:12:28.123'\n\n> sql-99 seems to contradict this assertion.\n> page 160 (Syntax Rules, 6.1 <data type>)\n \n> 30) If <time precision> is not specified, then 0 (zero) is implicit.\n> If <timestamp precision> is not specified, then 6 is implicit.\n\nBut that's <data type>, ie, they're specifying the implied width of\na table column that's declared \"foo time\". The rules for <time literal>\nare different.\n\nOur problem is that we want to generalize the notion of <time literal>\nto be <datatype> <string literal>, and this makes it hard to have\ndatatype-specific rules that differ from the rules that apply in the\ncolumn-datatype context.\n\nMy thought is that we should resolve this conflict by rejecting the\npart of the spec that assigns fixed default precisions to time\nand timestamp columns, the same as we have done for type numeric.\nThere's no benefit to the user in that requirement; it's only a\ncrutch for implementations that cannot cope with variable-width\ncolumns effectively. If people want a column that rounds fractional\ninputs to integral seconds, let 'em say \"TIME(0)\". But I don't\nthink that \"TIME\" should do so, especially when the spec provides\nno alternative way to get the effect of \"time with no particular\nprecision restriction\". 
It's the old \"text vs varchar(N)\" game all\nover again.\n\n> I believe the proper solution would be to update the documentation \n> to reflect the fact that \"TIME 'hh:mm:ss.ff'\" will correctly drop \n> the '.ff' seconds fraction.\n\nNo, because that behavior is *not* correct, neither per spec nor per\nour historical behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Dec 2001 13:00:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem " }, { "msg_contents": "[2001-12-06 13:00] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > [2001-12-04 18:15] Tom Lane said:\n| > | But you are definitely right that\n| > | \n| > | \t\ttime '17:12:28.123'\n| \n| > sql-99 seems to contradict this assertion.\n| > page 160 (Syntax Rules, 6.1 <data type>)\n| \n| > 30) If <time precision> is not specified, then 0 (zero) is implicit.\n| > If <timestamp precision> is not specified, then 6 is implicit.\n| \n| But that's <data type>, ie, they're specifying the implied width of\n| a table column that's declared \"foo time\". The rules for <time literal>\n| are different.\n\nUnderstood now.\n\nI'd misunderstood the meaning of \"if specified\" in the <time literal>\ndefinition; specifically, I interpreted it as \"if specified by \n<time precision>\" as opposed to the intended meaning of \"if specified \nin <time string>\"... 
so off I went seeking additional definitions\nto support treating \"TIME <time string>\" as \"TIME(0) <time string>\"...\n\nThanks for swinging the clue stick my way :-)\n\n| Our problem is that we want to generalize the notion of <time literal>\n| to be <datatype> <string literal>, and this makes it hard to have\n| datatype-specific rules that differ from the rules that apply in the\n| column-datatype context.\n| \n| My thought is that we should resolve this conflict by rejecting the\n| part of the spec that assigns fixed default precisions to time\n| and timestamp columns, the same as we have done for type numeric.\n| There's no benefit to the user in that requirement; it's only a\n| crutch for implementations that cannot cope with variable-width\n| columns effectively. If people want a column that rounds fractional\n| inputs to integral seconds, let 'em say \"TIME(0)\". But I don't\n| think that \"TIME\" should do so, especially when the spec provides\n| no alternative way to get the effect of \"time with no particular\n| precision restriction\". It's the old \"text vs varchar(N)\" game all\n| over again.\n\nThis seems fair. Would this approach imply that CURRENT_TIME and \nCURRENT_TIMESTAMP should not apply default precision to their return \nvalues? Right now, \"CURRENT_TIME\" is equivalent to \"CURRENT_TIME(0)\" \nand \"CURRENT_TIMESTAMP\" eq to \"CURRENT_TIMESTAMP(6)\".\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Fri, 7 Dec 2001 01:09:35 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: text -> time cast problem" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> This seems fair. 
Would this approach imply that CURRENT_TIME and \n> CURRENT_TIMESTAMP should not apply default precision to their return \n> values? Right now, \"CURRENT_TIME\" is equivalent to \"CURRENT_TIME(0)\" \n> and \"CURRENT_TIMESTAMP\" eq to \"CURRENT_TIMESTAMP(6)\".\n\nYes, I had been thinking that myself, but hadn't got round to mentioning\nit to the list yet. (Even if you do accept default precisions for time\n& timestamp columns, I can see nothing in the spec that justifies\napplying those default precisions to CURRENT_TIME/TIMESTAMP. AFAICS,\nthe precision of their results when they are given no argument is\njust plain not specified.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 09:20:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem " }, { "msg_contents": "> > This seems fair. Would this approach imply that CURRENT_TIME and\n> > CURRENT_TIMESTAMP should not apply default precision to their return\n> > values? Right now, \"CURRENT_TIME\" is equivalent to \"CURRENT_TIME(0)\"\n> > and \"CURRENT_TIMESTAMP\" eq to \"CURRENT_TIMESTAMP(6)\".\n> Yes, I had been thinking that myself, but hadn't got round to mentioning\n> it to the list yet. (Even if you do accept default precisions for time\n> & timestamp columns, I can see nothing in the spec that justifies\n> applying those default precisions to CURRENT_TIME/TIMESTAMP. AFAICS,\n> the precision of their results when they are given no argument is\n> just plain not specified.)\n\nI'll shift the default precisions of CURRENT_TIME to match that of\nCURRENT_TIMESTAMP, which is currently six (6). As you might know, 7.2\nhas sub-second system time available, which was not true in previous\nreleases. 
But that time is only good to microseconds, so the six digits\nof precision is a good match for that.\n\n - Thomas\n", "msg_date": "Sat, 08 Dec 2001 15:59:52 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem" }, { "msg_contents": "Just a quick question for 7.1.3:\n\nI notice that where as you can do this:\n\ninsert into table values ('now');\n\nTo insert a current timestamp, if you try using the more standard syntax:\n\ninsert into table values ('CURRENT_TIMESTAMP');\n\nYou get the word 'current' inserted. However if you do this:\n\ninsert into table values (CURRENT_TIMESTAMP);\n\nIt works as expected.\n\nIs there anything wrong with any of these behaviours, and has any of it been\nchanged for 7.2? I kind think that the quoted CURRENT_TIMESTAMP should work\njust like the unquoted...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Friday, 7 December 2001 10:21 PM\n> To: Brent Verner\n> Cc: Peter Eisentraut; PostgreSQL Development; Thomas Lockhart\n> Subject: Re: [HACKERS] text -> time cast problem\n>\n>\n> Brent Verner <brent@rcfile.org> writes:\n> > This seems fair. Would this approach imply that CURRENT_TIME and\n> > CURRENT_TIMESTAMP should not apply default precision to their return\n> > values? Right now, \"CURRENT_TIME\" is equivalent to \"CURRENT_TIME(0)\"\n> > and \"CURRENT_TIMESTAMP\" eq to \"CURRENT_TIMESTAMP(6)\".\n>\n> Yes, I had been thinking that myself, but hadn't got round to mentioning\n> it to the list yet. (Even if you do accept default precisions for time\n> & timestamp columns, I can see nothing in the spec that justifies\n> applying those default precisions to CURRENT_TIME/TIMESTAMP. 
AFAICS,\n> the precision of their results when they are given no argument is\n> just plain not specified.)\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Mon, 10 Dec 2001 10:25:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem " }, { "msg_contents": "...\n> Is there anything wrong with any of these behaviours, and has any of it been\n> changed for 7.2? I kind think that the quoted CURRENT_TIMESTAMP should work\n> just like the unquoted...\n\nNo, it is pretty much the same as before. Quoted 'current_timestamp' has\nnothing to do with the SQL9x standard, while unquoted CURRENT_TIMESTAMP\ndoes. For 7.2, we are dropping the somewhat interesting 'current' value,\nwhich was evaluated at the time a math operation was performed, at which\ntime it became equivalent to 'now'.\n\n - Thomas\n", "msg_date": "Mon, 10 Dec 2001 05:03:07 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem" }, { "msg_contents": "> > > This seems fair. Would this approach imply that CURRENT_TIME and\n> > > CURRENT_TIMESTAMP should not apply default precision to their return\n> > > values? Right now, \"CURRENT_TIME\" is equivalent to \"CURRENT_TIME(0)\"\n> > > and \"CURRENT_TIMESTAMP\" eq to \"CURRENT_TIMESTAMP(6)\".\n> > Yes, I had been thinking that myself, but hadn't got round to mentioning\n> > it to the list yet. (Even if you do accept default precisions for time\n> > & timestamp columns, I can see nothing in the spec that justifies\n> > applying those default precisions to CURRENT_TIME/TIMESTAMP. 
AFAICS,\n> > the precision of their results when they are given no argument is\n> > just plain not specified.)\n> \n> I'll shift the default precisions of CURRENT_TIME to match that of\n> CURRENT_TIMESTAMP, which is currently six (6). As you might know, 7.2\n> has sub-second system time available, which was not true in previous\n> releases. But that time is only good to microseconds, so the six digits\n> of precision is a good match for that.\n\nIs this all resolved?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Dec 2001 00:10:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem" }, { "msg_contents": "...\n> Is this all resolved?\n\nSome time ago, yes. (Pun intended ;)\n\n - Thomas\n", "msg_date": "Sat, 29 Dec 2001 08:09:39 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem" }, { "msg_contents": "> ...\n> > Is this all resolved?\n> \n> Some time ago, yes. (Pun intended ;)\n\nThanks. Sometimes I can not tell from the email messages.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Dec 2001 11:54:09 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: text -> time cast problem" } ]
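[Editorial aside, not part of the archived thread: the typmod packing that Tom's gram.y diff manipulates can be illustrated with a small standalone sketch. This is not the actual PostgreSQL source — the function and macro names here are invented for illustration. It follows the `(($2 << 16) | 0xFFFF)` expression in the patch: the high 16 bits carry the interval fields mask, the low 16 bits carry the precision, `0xFFFF` in the low bits means "precision not specified", and a whole typmod of -1 means "no constraint at all", which is what the patch now assigns to bare TIME/TIMESTAMP.]

```c
#include <stdint.h>

/* Illustrative sketch only -- not the actual PostgreSQL source.
 * Models the typmod packing used by the gram.y patch in this thread:
 *   high 16 bits = interval fields mask
 *   low 16 bits  = precision (0xFFFF = "precision not specified")
 *   typmod -1    = "no constraint at all" (bare TIME/TIMESTAMP) */

#define TYPMOD_UNCONSTRAINED (-1)

static int32_t pack_interval_typmod(int fields_mask)
{
    /* mirrors (($2 << 16) | 0xFFFF) from the patch */
    return ((int32_t) fields_mask << 16) | 0xFFFF;
}

static int unpack_fields(int32_t typmod)
{
    /* only meaningful for packed (non-negative) typmods */
    return (typmod >> 16) & 0xFFFF;
}

static int unpack_precision(int32_t typmod)
{
    return typmod & 0xFFFF;  /* 0xFFFF means unspecified */
}
```

Under this scheme the earlier behavior (forcing `TIME` to `TIME(0)`) corresponds to packing a precision of 0, while the fix simply stops packing anything at all for an unadorned type name.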
[ { "msg_contents": "\n\nBill Studenmund wrote:\n\n>>\n>>But as long as COPY IN considers that delimiter spec to mean \"any one of\n>>these characters\", and not a multicharacter string, we couldn't do that.\n>>\n>>If we restrict DELIMITERS strings to be exactly one character for a\n>>release or three, we could think about implementing this idea of\n>>multicharacter delimiter strings later on. Not sure if anyone really\n>>needs it though.\n>>\n\\r\\n is quite popular (row) delimiter on some systems (and causes \nsometimes a weird box\nchar to appear at the end of last database field :), but I doubt I can \ngive any examples\nof multichar field delimiters\n\n>> In any case, the current behavior is inconsistent.\n>>\n>\n>I think this restriction sounds fine, and quite practical. :-)\n>\nI sincerely doubt that anyone knowingly :) uses this undocumented \nfeature for copy in,\nas it can be found out only by trial and error.\n\nMuch better to remove it, enforce it in code as Bruce suggested, and \ndocument it.\n\n------------------\nHannu\n\n\n", "msg_date": "Tue, 04 Dec 2001 23:31:13 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Undocumented feature costs a lot of performance in" }, { "msg_contents": "I have been fooling around profiling various ways of inserting wide\n(8000-byte, not all that wide) bytea fields, per Brent Verner's note\nof a few days ago. COPY IN should be, and is, the fastest way to\ndo it. But I was rather startled to discover that 25% of the runtime\nof COPY IN went to an inefficient way of fetching single bytes from\npqcomm.c (pq_getbytes(&ch, 1) instead of ch = pq_getbyte()), and\n20% of what's left after fixing that is going into the strchr() call\nin CopyReadAttribute.\n\nNow the point of that strchr() call is to detect whether the current\ncharacter is the column delimiter. 
The COPY reference page clearly\nsays:\n\n\tBy default, a text copy uses a tab (\"\\t\") character as a\n\tdelimiter between fields. The field delimiter may be changed to\n\tany other single character with the keyword phrase USING\n\tDELIMITERS. Characters in data fields which happen to match the\n\tdelimiter character will be backslash quoted. Note that the\n\tdelimiter is always a single character. If multiple characters\n\tare specified in the delimiter string, only the first character\n\tis used.\n\nand indeed, only the first character is used by COPY OUT. But COPY IN\nis presently coded so that if multiple characters are mentioned in\nUSING DELIMITERS, any one of them will be taken as a field delimiter.\n\nI would like to change the code to just \"if (c == delim[0])\",\nwhich should buy back most of that 20% and make the behavior match the\ndocumentation. Question for the list: is this a bad change? Is anyone\nout there actually using this undocumented behavior?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 14:49:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Undocumented feature costs a lot of performance in COPY IN" }, { "msg_contents": "> I would like to change the code to just \"if (c == delim[0])\",\n> which should buy back most of that 20% and make the behavior match the\n> documentation. Question for the list: is this a bad change? Is anyone\n> out there actually using this undocumented behavior?\n\nYes, please fix it. In fact, I think we should throw an error if more\nthan one character is specified as a delimiter. Saying we ignore\nmultiple characters in the documentation is not enough when we silently\nignore them in the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Dec 2001 15:07:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": "On Tue, 4 Dec 2001, Tom Lane wrote:\n\n> \tBy default, a text copy uses a tab (\"\\t\") character as a\n> \tdelimiter between fields. The field delimiter may be changed to\n> \tany other single character with the keyword phrase USING\n> \tDELIMITERS. Characters in data fields which happen to match the\n> \tdelimiter character will be backslash quoted. Note that the\n> \tdelimiter is always a single character. If multiple characters\n> \tare specified in the delimiter string, only the first character\n> \tis used.\n>\n> and indeed, only the first character is used by COPY OUT. But COPY IN\n> is presently coded so that if multiple characters are mentioned in\n> USING DELIMITERS, any one of them will be taken as a field delimiter.\n>\n> I would like to change the code to just \"if (c == delim[0])\",\n> which should buy back most of that 20% and make the behavior match the\n> documentation. Question for the list: is this a bad change? Is anyone\n> out there actually using this undocumented behavior?\n\nI think you should make the change. Because, as I understand it, when you\ngive multiple delimiter characters COPY OUT will not delimit characters\nother than the first, since they won't be treated special. But COPY IN\nwill treat them special; you will read in more columns than you output.\nThus as it is, you can't COPY IN something you COPY OUT'd.\n\nOne alternative would be to make the code use different paths for the\njust-one and many delimiter cases. 
But then COPY OUT would need fixing.\n\nTake care,\n\nBill\n\n\n", "msg_date": "Tue, 4 Dec 2001 12:12:19 -0800 (PST)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, please fix it. In fact, I think we should throw an error if more\n> than one character is specified as a delimiter. Saying we ignore\n> multiple characters in the documentation is not enough when we silently\n> ignore them in the code.\n\nWell, it'd be an easy enough addition:\n\n\tif (strlen(delim) != 1)\n\t elog(ERROR, \"COPY delimiter must be a single character\");\n\nThis isn't multibyte-aware, but then neither is the implementation;\ndelimiters that are multibyte characters won't work at the moment.\n\nAny comments out there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 15:14:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY IN " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> and indeed, only the first character is used by COPY OUT. But COPY IN\n> is presently coded so that if multiple characters are mentioned in\n> USING DELIMITERS, any one of them will be taken as a field delimiter.\n> \n> I would like to change the code to just \"if (c == delim[0])\",\n> which should buy back most of that 20% and make the behavior match the\n> documentation. Question for the list: is this a bad change? Is anyone\n> out there actually using this undocumented behavior?\n\nNot I.\n\nAs an utter nitpick, the syntax should IMHO be USING DELIMITER (no S)\nif there is only one possible delimiter character. But that *would*\nbreak lots of apps so I don't advocate it. ;)\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. 
Jackson, 1863\n", "msg_date": "04 Dec 2001 15:19:27 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY IN" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, please fix it. In fact, I think we should throw an error if more\n> > than one character is specified as a delimiter. Saying we ignore\n> > multiple characters in the documentation is not enough when we silently\n> > ignore them in the code.\n> \n> Well, it'd be an easy enough addition:\n> \n> \tif (strlen(delim) != 1)\n> \t elog(ERROR, \"COPY delimiter must be a single character\");\n> \n> This isn't multibyte-aware, but then neither is the implementation;\n> delimiters that are multibyte characters won't work at the moment.\n\nMy point was that the documentation was saying it could only be one\ncharacter, and that we would ignore any characters after the first one,\nbut there was no enforcement in the code.\n\nThe right way to do it is to just say in the documentation it has to be\none character, and throw an error in the code if it isn't.\n\nLimitations should be enforced in the code, if possible, not just\nmentioned in the documenation, which may or may not get read.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Dec 2001 15:20:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": "Bill Studenmund <wrstuden@netbsd.org> writes:\n> One alternative would be to make the code use different paths for the\n> just-one and many delimiter cases. 
But then COPY OUT would need fixing.\n\nWell, it's not clear what COPY OUT should *do* with multiple\nalternatives, anyway. Pick one at random? I guess it does that now,\nif you consider \"always use the first one\" as a random choice. The\nreal problem is that it will only backslash the first one, too. That\nmeans that data emitted with DELIMITERS \"|_=\", say, will fail to be\nreloaded correctly if that same DELIMITERS string is given to COPY IN\n--- because any _ or = characters in the data won't be backslashed,\nbut would need to be to keep COPY IN from treating them as delimiters.\n\nFor COPY OUT's purposes, a sensible interpretation of a multicharacter\ndelimiter string would be that the whole string is emitted as the\ndelimiter. Eg,\n\n\tCOPY OUT WITH DELIMITERS \"<TAB>\";\n\n\tfoo<TAB>bar<TAB>baz\n\t...\n\nBut as long as COPY IN considers that delimiter spec to mean \"any one of\nthese characters\", and not a multicharacter string, we couldn't do that.\n\nIf we restrict DELIMITERS strings to be exactly one character for a\nrelease or three, we could think about implementing this idea of\nmulticharacter delimiter strings later on. Not sure if anyone really\nneeds it though. In any case, the current behavior is inconsistent.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 15:22:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY IN " }, { "msg_contents": "On Tue, 4 Dec 2001, Tom Lane wrote:\n\n> Bill Studenmund <wrstuden@netbsd.org> writes:\n> > One alternative would be to make the code use different paths for the\n> > just-one and many delimiter cases. But then COPY OUT would need fixing.\n>\n> Well, it's not clear what COPY OUT should *do* with multiple\n> alternatives, anyway. Pick one at random? I guess it does that now,\n> if you consider \"always use the first one\" as a random choice. 
The\n\nI think that'd be fine.\n\n> real problem is that it will only backslash the first one, too. That\n\nIck. I was thinking that if you gave multiple delimiters, it would escape\neach one. Which would be slow, and is why I think separate code paths\nwould be good. :-)\n\n> means that data emitted with DELIMITERS \"|_=\", say, will fail to be\n> reloaded correctly if that same DELIMITERS string is given to COPY IN\n> --- because any _ or = characters in the data won't be backslashed,\n> but would need to be to keep COPY IN from treating them as delimiters.\n>\n> For COPY OUT's purposes, a sensible interpretation of a multicharacter\n> delimiter string would be that the whole string is emitted as the\n> delimiter. Eg,\n>\n> \tCOPY OUT WITH DELIMITERS \"<TAB>\";\n>\n> \tfoo<TAB>bar<TAB>baz\n> \t...\n>\n> But as long as COPY IN considers that delimiter spec to mean \"any one of\n> these characters\", and not a multicharacter string, we couldn't do that.\n>\n> If we restrict DELIMITERS strings to be exactly one character for a\n> release or three, we could think about implementing this idea of\n> multicharacter delimiter strings later on. Not sure if anyone really\n> needs it though. In any case, the current behavior is inconsistent.\n\nI think this restriction sounds fine, and quite practical. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 4 Dec 2001 12:31:47 -0800 (PST)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in" }, { "msg_contents": "> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > and indeed, only the first character is used by COPY OUT.
But COPY IN\n> > is presently coded so that if multiple characters are mentioned in\n> > USING DELIMITERS, any one of them will be taken as a field delimiter.\n> > \n> > I would like to change the code to just \"if (c == delim[0])\",\n> > which should buy back most of that 20% and make the behavior match the\n> > documentation. Question for the list: is this a bad change? Is anyone\n> > out there actually using this undocumented behavior?\n> \n> Not I.\n> \n> As an utter nitpick, the syntax should IMHO be USING DELIMITER (no S)\n> if there is only one possible delimiter character. But that *would*\n> break lots of apps so I don't advocate it. ;)\n\nWe could support keywords DELIMITER and DELIMITERS and only document the first\none.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Dec 2001 16:18:34 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We could support keywords DELIMITER and DELIMITERS and only document\n> the first one.\n\nOne could also argue that it should be WITH DELIMITER for more\nconsistency with the other optional clauses.\n\nBut let's put that in the TODO list, not try to get it done now...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Dec 2001 16:22:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY IN " }, { "msg_contents": "> Well, it'd be an easy enough addition:\n> \n> \tif (strlen(delim) != 1)\n> \t elog(ERROR, \"COPY delimiter must be a single character\");\n> \n> This isn't multibyte-aware, but then neither is the implementation;\n> delimiters that are multibyte 
characters won't work at the moment.\n> \n> Any comments out there?\n\nI think it will be acceptable for multibyte users only ASCII\ncharacters could be a candidate of delimiters.\nI don't think anybody wants to use Kanji as a delimiter:-)\n--\nTatsuo Ishii\n", "msg_date": "Wed, 05 Dec 2001 09:57:30 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We could support keywords DELIMITER and DELIMITERS and only document\n> > the first one.\n> \n> One could also argue that it should be WITH DELIMITER for more\n> consistency with the other optional clauses.\n> \n> But let's put that in the TODO list, not try to get it done now...\n\nUpdated TODO:\n\nCOPY\n\t...\n o Change syntax to WITH DELIMITER, (keep old syntax around?)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Dec 2001 14:43:46 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": "On Fri, 28 Dec 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > We could support keywords DELIMITER and DELIMITERS and only document\n> > > the first one.\n> > \n> > One could also argue that it should be WITH DELIMITER for more\n> > consistency with the other optional clauses.\n> > \n> > But let's put that in the TODO list, not try to get it done now...\n> \n> Updated TODO:\n> \n> COPY\n> \t...\n> o Change syntax to WITH DELIMITER, (keep old syntax around?)\n> \n\nAn attached patch implements this. The problem with implementing this is\nthat the new syntax is:\n\n\t.... 
WITH DELIMITERS '<delim>' WITH NULL AS '<char>'\n\nNaturally, this leads to a shift/reduce conflict. Solution is more or less\nthat used with CREATE DATABASE WITH ... WITH ... etc. The only ugly bit\nwas mixing this with the old USING DELIMITERS ... syntax. I don't like the\nsolution -- I get the feeling there's a better way to do it. \n\nThe other option of course is to update yylex() to create a new token\nin the same way that the UNIONJOIN terminal is created. But I think this\nis a bit messy.\n\nIdeas or is this okay?\n\nGavin", "msg_date": "Sat, 5 Jan 2002 12:32:29 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": "\nSaved for 7.3.\n\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Fri, 28 Dec 2001, Bruce Momjian wrote:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > We could support keywords DELIMITER and DELIMITERS and only document\n> > > > the first one.\n> > > \n> > > One could also argue that it should be WITH DELIMITER for more\n> > > consistency with the other optional clauses.\n> > > \n> > > But let's put that in the TODO list, not try to get it done now...\n> > \n> > Updated TODO:\n> > \n> > COPY\n> > \t...\n> > o Change syntax to WITH DELIMITER, (keep old syntax around?)\n> > \n> \n> An attached patch implements this. The problem with implementing this is\n> that the new syntax is:\n> \n> \t.... WITH DELIMITERS '<delim>' WITH NULL AS '<char>'\n> \n> Naturally, this leads to a shift/reduce conflict. Solution is more or less\n> that used with CREATE DATABASE WITH ... WITH ... etc. The only ugly bit\n> was mixing this with the old USING DELIMITERS ... syntax. I don't like the\n> solution -- I get the feeling there's a better way to do it. 
\n> \n> The other option of course is to update yylex() to create a new token\n> in the same way that the UNIONJOIN terminal is created. But I think this\n> is a bit messy.\n> \n> Ideas or is this okay?\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 21:28:32 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": " \nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nGavin Sherry wrote:\n> On Fri, 28 Dec 2001, Bruce Momjian wrote:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > We could support keywords DELIMITER and DELIMITERS and only document\n> > > > the first one.\n> > > \n> > > One could also argue that it should be WITH DELIMITER for more\n> > > consistency with the other optional clauses.\n> > > \n> > > But let's put that in the TODO list, not try to get it done now...\n> > \n> > Updated TODO:\n> > \n> > COPY\n> > \t...\n> > o Change syntax to WITH DELIMITER, (keep old syntax around?)\n> > \n> \n> An attached patch implements this. The problem with implementing this is\n> that the new syntax is:\n> \n> \t.... 
WITH DELIMITERS '<delim>' WITH NULL AS '<char>'\n> \n> Naturally, this leads to a shift/reduce conflict. Solution is more or less\n> that used with CREATE DATABASE WITH ... WITH ... etc. The only ugly bit\n> was mixing this with the old USING DELIMITERS ... syntax. I don't like the\n> solution -- I get the feeling there's a better way to do it. \n> \n> The other option of course is to update yylex() to create a new token\n> in the same way that the UNIONJOIN terminal is created. But I think this\n> is a bit messy.\n> \n> Ideas or is this okay?\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 21:01:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": "\nGavin, can I get documentation patches to match this patch? 
Thanks.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Fri, 28 Dec 2001, Bruce Momjian wrote:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > We could support keywords DELIMITER and DELIMITERS and only document\n> > > > the first one.\n> > > \n> > > One could also argue that it should be WITH DELIMITER for more\n> > > consistency with the other optional clauses.\n> > > \n> > > But let's put that in the TODO list, not try to get it done now...\n> > \n> > Updated TODO:\n> > \n> > COPY\n> > \t...\n> > o Change syntax to WITH DELIMITER, (keep old syntax around?)\n> > \n> \n> An attached patch implements this. The problem with implementing this is\n> that the new syntax is:\n> \n> \t.... WITH DELIMITERS '<delim>' WITH NULL AS '<char>'\n> \n> Naturally, this leads to a shift/reduce conflict. Solution is more or less\n> that used with CREATE DATABASE WITH ... WITH ... etc. The only ugly bit\n> was mixing this with the old USING DELIMITERS ... syntax. I don't like the\n> solution -- I get the feeling there's a better way to do it. \n> \n> The other option of course is to update yylex() to create a new token\n> in the same way that the UNIONJOIN terminal is created. But I think this\n> is a bit messy.\n> \n> Ideas or is this okay?\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Feb 2002 22:42:10 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": "\nSeems the original title about \"feature causes performance in COPY\" was\nconfusing. This patch merely fixes the identified TODO item in the\ngrammar about using WITH in COPY.\n\nI will apply tomorrow.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Fri, 28 Dec 2001, Bruce Momjian wrote:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > We could support keywords DELIMITER and DELIMITERS and only document\n> > > > the first one.\n> > > \n> > > One could also argue that it should be WITH DELIMITER for more\n> > > consistency with the other optional clauses.\n> > > \n> > > But let's put that in the TODO list, not try to get it done now...\n> > \n> > Updated TODO:\n> > \n> > COPY\n> > \t...\n> > o Change syntax to WITH DELIMITER, (keep old syntax around?)\n> > \n> \n> An attached patch implements this. The problem with implementing this is\n> that the new syntax is:\n> \n> \t.... WITH DELIMITERS '<delim>' WITH NULL AS '<char>'\n> \n> Naturally, this leads to a shift/reduce conflict. Solution is more or less\n> that used with CREATE DATABASE WITH ... WITH ... etc. The only ugly bit\n> was mixing this with the old USING DELIMITERS ... syntax. I don't like the\n> solution -- I get the feeling there's a better way to do it. \n> \n> The other option of course is to update yylex() to create a new token\n> in the same way that the UNIONJOIN terminal is created. But I think this\n> is a bit messy.\n> \n> Ideas or is this okay?\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Mar 2002 01:38:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "WITH DELIMITERS in COPY" }, { "msg_contents": "Hi Bruce,\n\nOn Tue, 5 Mar 2002, Bruce Momjian wrote:\n\n> \n> Seems the original title about \"feature causes performance in COPY\" was\n> confusing. \n\nOops.\n\n> This patch merely fixes the identified TODO item in the\n> grammar about using WITH in COPY.\n\nNow that I look at this patch again I don't think I like the\nsyntax.\n\nCOPY [BINARY] <relation> [WITH OIDS] TO | FROM <file> [[USING DELIMITERS |\nWITH DELIMITER] <delimiter> [WITH NULL AS <char>]\n\nIt isn't very elegant.\n\n1) I forced the parser to be able to handle multiple WITHs, but that\ndoesn't mean its right. I can't remember why I didn't propose a better\nsyntax back then, such as:\n\n... [WITH [DELIMITER <delimiter>,] [NULL AS <char>]]\n\n2) Given (1), Why does WITH OIDS belong where it is now? 
Why not have it\nas an 'option' at the end?\n\nAnyone have any opinion on this?\n\n\n", "msg_date": "Tue, 5 Mar 2002 21:21:58 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> Now that I look at this patch again I don't think I like the\n> syntax.\n\n> COPY [BINARY] <relation> [WITH OIDS] TO | FROM <file> [[USING DELIMITERS |\n> WITH DELIMITER] <delimiter> [WITH NULL AS <char>]\n\n> It isn't very elegant.\n\n> 1) I forced the parser to be able to handle multiple WITHs, but that\n> doesn't mean its right.\n\nIt seems wrong to me. The other statements that use WITH use only one\nWITH to introduce a list of option clauses.\n\n I can't remember why I didn't propose a better\n> syntax back then, such as:\n\n> ... [WITH [DELIMITER <delimiter>,] [NULL AS <char>]]\n\nThe other statements you might model this on don't use commas either.\nThe closest thing to the precedents would be\n\n\t... [WITH copyoption [copyoption ...]]\n\n\tcopyoption := DELIMITER delim\n\t | NULL AS nullstring\n\t | etc\n\nTo get some modicum of backwards compatibility, we could allow either\nDELIMITER or DELIMITERS as a copy-option keyword, and we could allow\nUSING as a substitute for the initial WITH. This still won't be quite\nbackwards compatible for statements that use both of the old option\nclauses; how worried are we about that? Maybe we could persuade the\nparser to handle\n\n\t... [ WITH | USING copyoption [ [WITH] copyoption ... ]]\n\nbut my that seems ugly.\n\n> 2) Given (1), Why does WITH OIDS belong where it is now? Why not have it\n> as an 'option' at the end?\n\nHistorical precedent, mainly. Changing this would break existing\npg_dump files, so I'm not eager to do it. 
(AFAIR pg_dump never uses\nDELIMITERS nor NULL AS, so it doesn't care if you change that part\nof the syntax.)\n\nIf we were working in a green field I'd vote for moving BINARY into the\nWITH-options too, but we aren't. Again that seems too likely to break\nthings in the name of a small amount of added consistency.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Mar 2002 11:17:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY " }, { "msg_contents": "Tom Lane writes:\n > [ COPY syntax ]\n > If we were working in a green field I'd vote for moving BINARY into the\n > WITH-options too, but we aren't. Again that seems too likely to break\n > things in the name of a small amount of added consistency.\n\nHow about 'COPY ...' retains the existing syntax but if used like 'COPY\nTABLE ...' sane syntax is used?\n", "msg_date": "Tue, 5 Mar 2002 17:07:14 +0000", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY " }, { "msg_contents": "Lee Kindness wrote:\n> Tom Lane writes:\n> > [ COPY syntax ]\n> > If we were working in a green field I'd vote for moving BINARY into the\n> > WITH-options too, but we aren't. Again that seems too likely to break\n> > things in the name of a small amount of added consistency.\n> \n> How about 'COPY ...' retains the existing syntax but if used like 'COPY\n> TABLE ...' sane syntax is used?\n\nInteresting idea, but TABLE just seems to be a noise word to me. COPY\nis one of those commands that it is hard to change because pg_dump\nrelies on it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Mar 2002 13:01:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "\nHere is the original patch, which is now rejected as we discuss a new\npatch on hackers.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Fri, 28 Dec 2001, Bruce Momjian wrote:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > We could support keywords DELIMITER and DELIMITERS and only document\n> > > > the first one.\n> > > \n> > > One could also argue that it should be WITH DELIMITER for more\n> > > consistency with the other optional clauses.\n> > > \n> > > But let's put that in the TODO list, not try to get it done now...\n> > \n> > Updated TODO:\n> > \n> > COPY\n> > \t...\n> > o Change syntax to WITH DELIMITER, (keep old syntax around?)\n> > \n> \n> An attached patch implements this. The problem with implementing this is\n> that the new syntax is:\n> \n> \t.... WITH DELIMITERS '<delim>' WITH NULL AS '<char>'\n> \n> Naturally, this leads to a shift/reduce conflict. Solution is more or less\n> that used with CREATE DATABASE WITH ... WITH ... etc. The only ugly bit\n> was mixing this with the old USING DELIMITERS ... syntax. I don't like the\n> solution -- I get the feeling there's a better way to do it. \n> \n> The other option of course is to update yylex() to create a new token\n> in the same way that the UNIONJOIN terminal is created. But I think this\n> is a bit messy.\n> \n> Ideas or is this okay?\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Mar 2002 13:02:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Undocumented feature costs a lot of performance in COPY" }, { "msg_contents": "\nGavin, I will do the legwork on this if you wish. I think we need to\nuse DefElem to store the COPY params, rather than using specific fields\nin CopyStmt.\n\nWould you send me your original patch so I can make sure I hit\neverything. I can't seem to find a copy. If you would like to work on\nit, I can give you what I have and walk you through the process.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> Hi Bruce,\n> \n> On Tue, 5 Mar 2002, Bruce Momjian wrote:\n> \n> > \n> > Seems the original title about \"feature causes performance in COPY\" was\n> > confusing. \n> \n> Oops.\n> \n> > This patch merely fixes the identified TODO item in the\n> > grammar about using WITH in COPY.\n> \n> Now that I look at this patch again I don't think I like the\n> syntax.\n> \n> COPY [BINARY] <relation> [WITH OIDS] TO | FROM <file> [[USING DELIMITERS |\n> WITH DELIMITER] <delimiter> [WITH NULL AS <char>]\n> \n> It isn't very elegant.\n> \n> 1) I forced the parser to be able to handle multiple WITHs, but that\n> doesn't mean its right. I can't remember why I didn't propose a better\n> syntax back then, such as:\n> \n> ...
[WITH [DELIMITER <delimiter>,] [NULL AS <char>]]\n> \n> 2) Given (1), Why does WITH OIDS belong where it is now? Why not have it\n> as an 'option' at the end?\n> \n> Anyone have any opinion on this?\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 01:02:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "On Sun, 14 Apr 2002, Bruce Momjian wrote:\n\n> \n> Gavin, I will do the legwork on this if you wish. I think we need to\n\nNo matter. I intended to submit a patch to fix this.\n\n> use DefElem to store the COPY params, rather than using specific fields\n> in CopyStmt.\n\nDefElem would have required modification of code outside the parser (to\nkeep utility.c and DoCopy() happy) or otherwise an even messier loop\nexecuted as a result of CopyStmt than I have given in the attached patch, \nplus other issues with Yacc.\n\nThe patch attached maintains backward compatibility. 
The syntax is as\nfollows:\n\nCOPY [BINARY] <relname> [WITH OIDS] FROM/TO\n [USING DELIMITERS <delimiter>]\n [WITH [ DELIMITER <delimiter> | NULL AS <char> | OIDS ]]\n\nI was also going to allow BINARY in the WITH list, but there seems to be\nlittle point.\n\nNote that if you execute a query such as:\n\nCOPY pg_class TO '/some/path/file/out'\n\tUSING DELIMITERS <tab>\n\tWITH DELIMITER '|';\n\nThe code will give preference to WITH DELIMITER.\n\nIf no one can find fault with this or my implementation, I'll follow up\nwith documentation and psql patches (not sure that there is much point\npatching pg_dump).\n\nGavin", "msg_date": "Mon, 15 Apr 2002 03:34:06 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "Gavin Sherry writes:\n\n> The patch attached maintains backward compatibility. The syntax is as\n> follows:\n>\n> COPY [BINARY] <relname> [WITH OIDS] FROM/TO\n> [USING DELIMITERS <delimiter>]\n> [WITH [ DELIMITER <delimiter> | NULL AS <char> | OIDS ]]\n\nI think we should lose the WITH altogether. It's not any better than\nUSING.\n\nBut just saying \"OIDS\" is not very clear. 
In this case the WITH is\nnecessary.\n\n> Note that if you execute a query such as:\n>\n> COPY pg_class TO '/some/path/file/out'\n> \tUSING DELIMITERS <tab>\n> \tWITH DELIMITER '|';\n>\n> The code will give preference to WITH DELIMITER.\n\nThat should be an error.\n\n> If no one can find fault with this or my implementation, I'll follow up\n> with documentation and psql patches (not sure that there is much point\n> patching pg_dump).\n\npg_dump should use the new syntax.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 14 Apr 2002 13:46:39 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "\nGavin, I see where you are going with the patch; creating a list in\ngram.y and stuffing CopyStmt directly there. However, I can't find any\nother instance of our stuffing things like that in gram.y. We do have\ncases using options like COPY in CREATE USER, and we do use DefElem.\n\nI realize it will require changes to other files like copy.c. However,\nit seems like the cleanest solution. I guess I am not excited about\nadding another way to handle WITH options into the code. Now, if you\nwant to argue that CREATE USER shouldn't use DefElem either, we can\ndiscuss that, but I think we need to be consistent in how we handle COPY\nvs. the other commands that use parameters. \n\nSee commands/user.c for an example of how it uses DefElem, and the tests\ndone to make sure conflicting arguments are not used or in copy's case,\nspecified twice. It just seems like that is the cleanest way to go.\n\nOne idea I had for the code is to allow BINARY as $2, and WITH OIDS in\nits current place, and all options in the new WITH location, and\nconcatenate them together into one DefElem list in gram.y, and pass\nthat to copy.c.
That way, you can allow BINARY and others at the end\ntoo and the list is in one central place.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Sun, 14 Apr 2002, Bruce Momjian wrote:\n> \n> > \n> > Gavin, I will do the legwork on this if you wish. I think we need to\n> \n> No matter. I intended to submit a patch to fix this.\n> \n> > use DefElem to store the COPY params, rather than using specific fields\n> > in CopyStmt.\n> \n> DefElem would have required modification of code outside the parser (to\n> keep utility.c and DoCopy() happy) or otherwise an even messier loop\n> executed as a result of CopyStmt than I have given in the attached patch, \n> plus other issues with Yacc.\n> \n> The patch attached maintains backward compatibility. The syntax is as\n> follows:\n> \n> COPY [BINARY] <relname> [WITH OIDS] FROM/TO\n> [USING DELIMITERS <delimiter>]\n> [WITH [ DELIMITER <delimiter> | NULL AS <char> | OIDS ]]\n> \n> I was also going to allow BINARY in the WITH list, but there seems to be\n> little point.\n> \n> Note that if you execute a query such as:\n> \n> COPY pg_class TO '/some/path/file/out'\n> \tUSING DELIMITERS <tab>\n> \tWITH DELIMITER '|';\n> \n> The code will give preference to WITH DELIMITER.\n> \n> If no one can find fault with this or my implementation, I'll follow up\n> with documentation and psql patches (not sure that there is much point\n> patching pg_dump).\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 15:03:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "On Sun, 14 Apr 2002, Bruce Momjian wrote:\n\n> \n> Gavin, I see where you are going with the patch; creating a list in\n> gram.y and stuffing CopyStmt directly there. However, I can't find any\n> other instance of our stuffing things like that in gram.y. We do have\n> cases using options like COPY in CREATE USER, and we do use DefElem.\n\nCREATE DATABASE also fills out a list in the same fashion =). I will\nhowever have a look at revising this patch to use DefElem later today.\n\nThanks,\n\nGavin\n\n", "msg_date": "Mon, 15 Apr 2002 12:02:46 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "Gavin Sherry wrote:\n> On Sun, 14 Apr 2002, Bruce Momjian wrote:\n> \n> > \n> > Gavin, I see where you are going with the patch; creating a list in\n> > gram.y and stuffing CopyStmt directly there. However, I can't find any\n> > other instance of our stuffing things like that in gram.y. We do have\n> > cases using options like COPY in CREATE USER, and we do use DefElem.\n> \n> CREATE DATABASE also fills out a list in the same fashion =). I will\n> however have a look at revising this patch to use DefElem later today.\n\nOh, I see that now. Which method do people prefer. We should probably\nmake them all use the same mechanism.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 22:08:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Gavin Sherry wrote:\n>> CREATE DATABASE also fills out a list in the same fashion =). I will\n>> however have a look at revising this patch to use DefElem later today.\n\n> Oh, I see that now. Which method do people prefer. We should probably\n> make them all use the same mechanism.\n\nConsistency? Who needs consistency ;-) ?\n\nSeriously, I do not see a need to change either of these approaches\njust for the sake of changing it. CREATE DATABASE is okay as-is, and\nso are the statements that use DefElem. I tend to like DefElem better\nfor the statements that we change around frequently ... for instance\nthe recent changes to the set of volatility keywords for functions\ndidn't require any changes to the grammar or the parsenode definitions.\nBut I think that a simple struct definition is easier to understand,\nso I favor that for stable feature sets.\n\nAs for which one is better suited for COPY, I don't have a strong\nopinion, but lean to DefElem. Seems like COPY will probably keep\naccreting new features.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 23:19:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Gavin Sherry wrote:\n> >> CREATE DATABASE also fills out a list in the same fashion =). I will\n> >> however have a look at revising this patch to use DefElem later today.\n> \n> > Oh, I see that now. Which method do people prefer. We should probably\n> > make them all use the same mechanism.\n> \n> Consistency? 
Who needs consistency ;-) ?\n> \n> Seriously, I do not see a need to change either of these approaches\n> just for the sake of changing it. CREATE DATABASE is okay as-is, and\n> so are the statements that use DefElem. I tend to like DefElem better\n> for the statements that we change around frequently ... for instance\n> the recent changes to the set of volatility keywords for functions\n> didn't require any changes to the grammar or the parsenode definitions.\n> But I think that a simple struct definition is easier to understand,\n> so I favor that for stable feature sets.\n> \n> As for which one is better suited for COPY, I don't have a strong\n> opinion, but lean to DefElem. Seems like COPY will probably keep\n> accreting new features.\n\nThe code that bothered me about the CREATE DATABASE param processing\nwas:\n\n /* process additional options */\n foreach(l, $5)\n {\n\tList *optitem = (List *) lfirst(l);\n \n\tswitch (lfirsti(optitem))\n\t{\n\t case 1: \n\t\tn->dbpath = (char *) lsecond(optitem);\n\t\tbreak;\t \n\t case 2: \n\t\tn->dbtemplate = (char *) lsecond(optitem);\n\t\tbreak;\n\t case 3:\n\t\tn->encoding = lfirsti(lnext(optitem));\n\t\tbreak;\n\t case 4:\n\t\tn->dbowner = (char *) lsecond(optitem);\n\t\tbreak;\n\t}\n }\n\nI see what it is doing, but it seems quite unclear. Seeing that people\nare using this as a pattern for other param processing, I will work on a\npatch to convert this to DefElem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 17:27:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The code that bothered me about the CREATE DATABASE param processing\n> was:\n\n> /* process additional options */\n> foreach(l, $5)\n> {\n> \tList *optitem = (List *) lfirst(l);\n \n> \tswitch (lfirsti(optitem))\n> \t{\n> \t case 1: \n> \t\tn->dbpath = (char *) lsecond(optitem);\n> \t\tbreak;\t \n> \t case 2: \n> \t\tn->dbtemplate = (char *) lsecond(optitem);\n> \t\tbreak;\n> \t case 3:\n> \t\tn->encoding = lfirsti(lnext(optitem));\n> \t\tbreak;\n> \t case 4:\n> \t\tn->dbowner = (char *) lsecond(optitem);\n> \t\tbreak;\n> \t}\n> }\n\n> I see what it is doing, but it seems quite unclear. Seeing that people\n> are using this as a pattern for other param processing, I will work on a\n> patch to convert this to DefElem.\n\nOh, I think we were talking at cross-purposes then. What you're really\nunhappy about is that this uses a list of two-element sublists? Yeah,\nI agree, that's a messy data structure; a list of DefElem would be\nperhaps cleaner. Not sure if it matters all that much though, since the\nlist only exists in the context of a few productions in gram.y. Perhaps\nadding a couple of lines of documentation would be better than changing\nthe code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 19:34:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY " }, { "msg_contents": "Tom Lane wrote:\n> Oh, I think we were talking at cross-purposes then. What you're really\n> unhappy about is that this uses a list of two-element sublists? Yeah,\n> I agree, that's a messy data structure; a list of DefElem would be\n> perhaps cleaner. 
Not sure if it matters all that much though, since the\n> list only exists in the context of a few productions in gram.y. Perhaps\n> adding a couple of lines of documentation would be better than changing\n> the code.\n\nYea, documenation and/or a list of DefElem would be nicer. The problem\nis that people are going to copy this in the future, so I may as well do\nit right. Can't take more than 20 minutes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 19:49:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "Gavin Sherry wrote:\n> > I see what it is doing, but it seems quite unclear. Seeing that people\n> > are using this as a pattern for other param processing, I will work on a\n> > patch to convert this to DefElem.\n> \n> Wouldn't a few macros clean this up better (ie, make it clearer)?\n> \n> #define CDBOPTDBPATH 1\n> \n> #define optparam(l)\t(char *)lsecond(l)\n> #define optparami(l)\t(int)lfirsti(lnext(l))\n> \n> foreach(l, $5)\n> {\n> List *optitem = (List *) lfirst(l);\n> \n> switch (lfirsti(optitem))\n> {\n> case CDBOPTDBPATH:\n> n->dbpath = optparam(optitem);\n> break;\n> \n> ...\n> \n> \n> Regardless, I guess that code is pointless since the consensus seems to be\n> that the use of DefElem is better since it allows for the abstraction of\n> the parameters list. Obviously a good thing if CREATE DATABASE, COPY etc\n> are to be extended often enough.\n> \n\nYes, macros would be the way to go if we didn't have a cleaner\nalternative.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 19:55:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "On Tue, 16 Apr 2002, Bruce Momjian wrote:\n\n> The code that bothered me about the CREATE DATABASE param processing\n> was:\n> \n> /* process additional options */\n> foreach(l, $5)\n> {\n> \tList *optitem = (List *) lfirst(l);\n> \n> \tswitch (lfirsti(optitem))\n> \t{\n> \t case 1: \n> \t\tn->dbpath = (char *) lsecond(optitem);\n> \t\tbreak;\t \n> \t case 2: \n> \t\tn->dbtemplate = (char *) lsecond(optitem);\n> \t\tbreak;\n> \t case 3:\n> \t\tn->encoding = lfirsti(lnext(optitem));\n> \t\tbreak;\n> \t case 4:\n> \t\tn->dbowner = (char *) lsecond(optitem);\n> \t\tbreak;\n> \t}\n> }\n> \n> I see what it is doing, but it seems quite unclear. Seeing that people\n> are using this as a pattern for other param processing, I will work on a\n> patch to convert this to DefElem.\n\nWouldn't a few macros clean this up better (ie, make it clearer)?\n\n#define CDBOPTDBPATH 1\n\n#define optparam(l)\t(char *)lsecond(l)\n#define optparami(l)\t(int)lfirsti(lnext(l))\n\n foreach(l, $5)\n {\n List *optitem = (List *) lfirst(l);\n\n switch (lfirsti(optitem))\n {\n case CDBOPTDBPATH:\n n->dbpath = optparam(optitem);\n break;\n\n...\n\n\nRegardless, I guess that code is pointless since the consensus seems to be\nthat the use of DefElem is better since it allows for the abstraction of\nthe parameters list. 
Obviously a good thing if CREATE DATABASE, COPY etc\nare to be extended often enough.\n\nGavin\n\n", "msg_date": "Wed, 17 Apr 2002 09:56:03 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] WITH DELIMITERS in COPY" }, { "msg_contents": "Bruce,\n\nAttached is a modified patch, using DefElem instead of the 'roll your own'\nmethod of collecting optional parameters to CopyStmt.\n\nNaturally, DoCopy() as well as a few support functions needed to be\nmodified to get this going. \n\nIn order to check if parameters were being passed more than once (COPY\n... WITH OIDS FROM ... WITH DELIMITER '\\t' OIDS), I have added a function\ndefelemmember() to list.c. I could not think, off the top of my head, of a\nmore elegant way to do this.\n\nGavin\n\n\nOn Sun, 14 Apr 2002, Bruce Momjian wrote:\n\n> \n> Gavin, I will do the legwork on this if you wish. I think we need to\n> use DefElem to store the COPY params, rather than using specific fields\n> in CopyStmt.\n> \n> Would you send me your original patch so I am make sure I hit\n> everything. I can't seem to find a copy. If you would like to work on\n> it, I can give you what I have and walk you through the process.\n> \n> ---------------------------------------------------------------------------\n> \n> Gavin Sherry wrote:\n> > Hi Bruce,\n> > \n> > On Tue, 5 Mar 2002, Bruce Momjian wrote:\n> > \n> > > \n> > > Seems the original title about \"feature causes performance in COPY\" was\n> > > confusing. 
\n> > \n> > Oops.\n> > \n> > > This patch merely fixes the identified TODO item in the\n> > > grammar about using WITH in COPY.\n> > \n> > Now that I look at this patch again I don't think I like the\n> > syntax.\n> > \n> > COPY [BINARY] <relation> [WITH OIDS] TO | FROM <file> [[USING DELIMITERS |\n> > WITH DELIMITER] <delimiter> [WITH NULL AS <char>]\n> > \n> > It isn't very elegant.\n> > \n> > 1) I forced the parser to be able to handle multiple WITHs, but that\n> > doesn't mean its right. I can't remember why I didn't propose a better\n> > syntax back then, such as:\n> > \n> > ... [WITH [DELIMITER <delimiter>,] [NULL AS <char>]]\n> > \n> > 2) Given (1), Why does WITH OIDS belong where it is now? Why not have it\n> > as an 'option' at the end?\n> > \n> > Anyone have any opinion on this?\n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> \n>", "msg_date": "Mon, 22 Apr 2002 03:39:46 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] WITH DELIMITERS in COPY" }, { "msg_contents": "\nI see in user.c::CreateUser:\n\n if (strcmp(defel->defname, \"password\") == 0 ||\n strcmp(defel->defname, \"encryptedPassword\") == 0 ||\n strcmp(defel->defname, \"unencryptedPassword\") == 0)\n {\n if (dpassword)\n elog(ERROR, \"CREATE USER: conflicting options\");\n dpassword = defel;\n if (strcmp(defel->defname, \"encryptedPassword\") == 0)\n encrypt_password = true;\n else if (strcmp(defel->defname, \"unencryptedPassword\") == 0)\n encrypt_password = false;\n }\n else if (strcmp(defel->defname, \"sysid\") == 0)\n {\n if (dsysid)\n elog(ERROR, \"CREATE USER: conflicting options\");\n dsysid = defel;\n }\n\nLooks like this is how we normally test for conflicting params. 
Does\nthis help?\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> Bruce,\n> \n> Attached is a modified patch, using DefElem instead of the 'roll your own'\n> method of collecting optional parameters to CopyStmt.\n> \n> Naturally, DoCopy() as well as a few support functions needed to be\n> modified to get this going. \n> \n> In order to check if parameters were being passed more than once (COPY\n> ... WITH OIDS FROM ... WITH DELIMITER '\\t' OIDS), I have added a function\n> defelemmember() to list.c. I could not think, off the top of my head, of a\n> more elegant way to do this.\n> \n> Gavin\n> \n> \n> On Sun, 14 Apr 2002, Bruce Momjian wrote:\n> \n> > \n> > Gavin, I will do the legwork on this if you wish. I think we need to\n> > use DefElem to store the COPY params, rather than using specific fields\n> > in CopyStmt.\n> > \n> > Would you send me your original patch so I am make sure I hit\n> > everything. I can't seem to find a copy. If you would like to work on\n> > it, I can give you what I have and walk you through the process.\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Gavin Sherry wrote:\n> > > Hi Bruce,\n> > > \n> > > On Tue, 5 Mar 2002, Bruce Momjian wrote:\n> > > \n> > > > \n> > > > Seems the original title about \"feature causes performance in COPY\" was\n> > > > confusing. \n> > > \n> > > Oops.\n> > > \n> > > > This patch merely fixes the identified TODO item in the\n> > > > grammar about using WITH in COPY.\n> > > \n> > > Now that I look at this patch again I don't think I like the\n> > > syntax.\n> > > \n> > > COPY [BINARY] <relation> [WITH OIDS] TO | FROM <file> [[USING DELIMITERS |\n> > > WITH DELIMITER] <delimiter> [WITH NULL AS <char>]\n> > > \n> > > It isn't very elegant.\n> > > \n> > > 1) I forced the parser to be able to handle multiple WITHs, but that\n> > > doesn't mean its right. 
I can't remember why I didn't propose a better\n> > > syntax back then, such as:\n> > > \n> > > ... [WITH [DELIMITER <delimiter>,] [NULL AS <char>]]\n> > > \n> > > 2) Given (1), Why does WITH OIDS belong where it is now? Why not have it\n> > > as an 'option' at the end?\n> > > \n> > > Anyone have any opinion on this?\n> > > \n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > > \n> > \n> > \n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 12:19:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] WITH DELIMITERS in COPY" } ]
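The DefElem-based option processing that this thread settles on can be sketched in a stand-alone form. The `DefElem` struct, the array in place of a `List`, and the option names below are simplified stand-ins for PostgreSQL's real machinery (the actual node lives in `nodes/parsenodes.h` and the list in `nodes/pg_list.h`); only the shape of the loop carries over, including the duplicate-option check Bruce points to in user.c's CreateUser.

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-ins for PostgreSQL's DefElem node and List;
 * the real definitions live in nodes/parsenodes.h and nodes/pg_list.h. */
typedef struct DefElem
{
    const char *defname;        /* option name, e.g. "delimiter" */
    const char *arg;            /* option value (the real node holds a Node *) */
} DefElem;

typedef struct CopyOptions
{
    const char *delimiter;
    const char *null_print;
    int         oids;
} CopyOptions;

/* Walk the option list once, filling CopyOptions and rejecting
 * duplicates the same way user.c rejects conflicting CREATE USER
 * options.  Returns 0 on success, -1 where the backend would
 * elog(ERROR, ...). */
static int
process_copy_options(const DefElem *defs, int ndefs, CopyOptions *opts)
{
    opts->delimiter = NULL;
    opts->null_print = NULL;
    opts->oids = 0;

    for (int i = 0; i < ndefs; i++)
    {
        const DefElem *defel = &defs[i];

        if (strcmp(defel->defname, "delimiter") == 0)
        {
            if (opts->delimiter)
                return -1;      /* conflicting options */
            opts->delimiter = defel->arg;
        }
        else if (strcmp(defel->defname, "null") == 0)
        {
            if (opts->null_print)
                return -1;
            opts->null_print = defel->arg;
        }
        else if (strcmp(defel->defname, "oids") == 0)
        {
            if (opts->oids)
                return -1;
            opts->oids = 1;
        }
        else
            return -1;          /* unrecognized option */
    }
    return 0;
}
```

Compared with the `switch` over integer codes in the CREATE DATABASE production quoted earlier, adding a new option here touches only one `strcmp` branch and no grammar plumbing, which is the extensibility argument made in the thread.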
[ { "msg_contents": "I will be in Japan with SRA from December 6-14.\n\nI have updated the HISTORY file as of today. If people need to generate\nSGML while I am gone, merely cut/paste the relevant portions from\nHISTORY to release.sgml. Keep in mind any \"<\" and \">\" characters have\nto be remapped. Also, the release date is usually unknown when the docs\nare built. It has to be modified just before release when the date is\ndetermined.\n\nAlso, I have updated the English and Developer's FAQ with new items, and\nI HTML-ified the new items in the Developer's FAQ.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Dec 2001 15:26:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Release info updated" } ]
[ { "msg_contents": "So far I think I have done the SunOS4 port except the parallel\nregression test. Unfortunately the machine is in production and I\ncould not increase the shmem parameters. Instead I have done the\nserial test and got 4 errors (see attached regression.diffs). Also I\nnoticed that the test script uses diff -C3 which is not available on\nall platforms.\n\nBTW, this effort might be the last one since my company would give up\nto maintain SunOS4 machines in the near future.\n\nKarel, Could you check my modifications to formatting.c?\n\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/formatting.c,v\nretrieving revision 1.45\ndiff -c -r1.45 formatting.c\n*** formatting.c\t2001/11/19 09:05:01\t1.45\n--- formatting.c\t2001/12/05 02:04:25\n***************\n*** 4140,4146 ****\n \t\t\t\t\t\tNp->inout_p += strlen(Np->inout_p) - 1;\n \t\t\t\t\t}\n \t\t\t\t\telse\n! \t\t\t\t\t\tNp->inout_p += sprintf(Np->inout_p, \"%15s\", Np->number_p) - 1;\n \t\t\t\t\tbreak;\n \n \t\t\t\tcase NUM_rn:\n--- 4140,4149 ----\n \t\t\t\t\t\tNp->inout_p += strlen(Np->inout_p) - 1;\n \t\t\t\t\t}\n \t\t\t\t\telse\n! \t\t\t\t\t{\n! \t\t\t\t\t\tsprintf(Np->inout_p, \"%15s\", Np->number_p);\n! \t\t\t\t\t\tNp->inout_p += strlen(Np->inout_p) - 1;\n! \t\t\t\t\t}\n \t\t\t\t\tbreak;\n \n \t\t\t\tcase NUM_rn:\n***************\n*** 4150,4156 ****\n \t\t\t\t\t\tNp->inout_p += strlen(Np->inout_p) - 1;\n \t\t\t\t\t}\n \t\t\t\t\telse\n! \t\t\t\t\t\tNp->inout_p += sprintf(Np->inout_p, \"%15s\", str_tolower(Np->number_p)) - 1;\n \t\t\t\t\tbreak;\n \n \t\t\t\tcase NUM_th:\n--- 4153,4162 ----\n \t\t\t\t\t\tNp->inout_p += strlen(Np->inout_p) - 1;\n \t\t\t\t\t}\n \t\t\t\t\telse\n! \t\t\t\t\t{\n! \t\t\t\t\t\tsprintf(Np->inout_p, \"%15s\", str_tolower(Np->number_p));\n! \t\t\t\t\t\tNp->inout_p += strlen(Np->inout_p) - 1;\n! \t\t\t\t\t}\n \t\t\t\t\tbreak;\n \n \t\t\t\tcase NUM_th:\n***************\n*** 4664,4670 ****\n \t\t}\n \n \t\torgnum = (char *) palloc(MAXFLOATWIDTH + 1);\n! 
\t\tlen = sprintf(orgnum, \"%.0f\", fabs(val));\n \t\tif (Num.pre > len)\n \t\t\tplen = Num.pre - len;\n \t\tif (len >= FLT_DIG)\n--- 4670,4677 ----\n \t\t}\n \n \t\torgnum = (char *) palloc(MAXFLOATWIDTH + 1);\n! \t\tsprintf(orgnum, \"%.0f\", fabs(val));\n! \t\tlen = strlen(orgnum);\n \t\tif (Num.pre > len)\n \t\t\tplen = Num.pre - len;\n \t\tif (len >= FLT_DIG)", "msg_date": "Wed, 05 Dec 2001 11:49:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "SunOS4 port" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> So far I think I have done the SunOS4 port except the parallel\n> regression test. Unfortunately the machine is in production and I\n> could not increase the shmem parameters. Instead I have done the\n> serial test and got 4 errors (see attached regression.diffs).\n\nMost of those look insignificant --- either results of the known\nproblem that SunOS's atoi doesn't detect overflow, or uninteresting\ngeometry diffs. But I'm worried about the diffs in the bit test.\nCould you look into that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Dec 2001 00:01:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 port " }, { "msg_contents": "> Tatsuo,\n> \n> I've got a SunOS 5.7 machine (in production) which compiles fine. I am\n> getting the Unix Domain Socket problem in regression (pre parallel\n> test) however. 
When I can take it down (next weekend) I will increase the\n> shm settings but is there anything I should do to get around the domain\n> socket issue (other than switch to internet server)?\n\nI have been working on SunOS4, not Solaris.\n\nAnyway, I thought your problem has been solved in current, since\nthe accept queue length in listen() is now expanded, no?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 05 Dec 2001 14:19:31 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4 port" }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > So far I think I have done the SunOS4 port except the parallel\n> > regression test. Unfortunately the machine is in production and I\n> > could not increase the shmem parameters. Instead I have done the\n> > serial test and got 4 errors (see attached regression.diffs).\n> \n> Most of those look insignificant --- either results of the known\n> problem that SunOS's atoi doesn't detect overflow, or uninteresting\n> geometry diffs. But I'm worried about the diffs in the bit test.\n> Could you look into that?\n\nIt appears that it at least caused by buggy memcmp() on SunOS4. In\nbit_cmp():\n\n\tcmp = memcmp(VARBITS(arg1), VARBITS(arg2), Min(bytelen1,\n\tbytelen2));\n\nwould return unexpected result if the sign bit of the first byte is\non. For example, B'11011000000' is smaller than B'00000000000'.\nThe only solution would be having our own memcmp on SunOS4.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 05 Dec 2001 15:08:33 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4 port " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> The only solution would be having our own memcmp on SunOS4.\n\nUm. Well, we could do that I suppose ... 
does anyone think\nit's worth the trouble?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Dec 2001 01:15:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 port " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > The only solution would be having our own memcmp on SunOS4.\n> \n> Um. Well, we could do that I suppose ... does anyone think\n> it's worth the trouble?\n\nIn 7.0 or earlier has a configure check if memcmp is 8 bit clean. In\n7.1 someone has removed that test and now we get the failure on\nSunOS4.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 05 Dec 2001 15:54:01 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4 port " }, { "msg_contents": "On Wed, Dec 05, 2001 at 11:49:16AM +0900, Tatsuo Ishii wrote:\n\n> Karel, Could you check my modifications to formatting.c?\n\n It seems right.\n\n Thanks.\n Karel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 5 Dec 2001 09:06:47 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: SunOS4 port" }, { "msg_contents": "> > Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > > The only solution would be having our own memcmp on SunOS4.\n> > \n> > Um. Well, we could do that I suppose ... does anyone think\n> > it's worth the trouble?\n> \n> In 7.0 or earlier has a configure check if memcmp is 8 bit clean. In\n> 7.1 someone has removed that test and now we get the failure on\n> SunOS4.\n\nI checked out 7.0 using tag REL7_0_PATCHES, and I see this in configure.in:\n\n\tAC_FUNC_MEMCMP\n\nand this in configure:\n\n\techo \"$ac_t\"\"$ac_cv_func_memcmp_clean\" 1>&6\n\ttest $ac_cv_func_memcmp_clean = no && LIBOBJS=\"$LIBOBJS memcmp.${ac_objext}\"\n\nHowever, I don't see any memcmp.c file, nor in 6.5 either. 
I wonder if\nthis was removed from configure.in because it was no longer being used.\n\nThe big question is whether we can backpatch in the memcmp code and have\nit working cleanly in 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Dec 2001 12:51:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 port" }, { "msg_contents": "Hi,\n\nI am trying to debug something on the jdbc list and can't figure out\nwhat is going on here\n\nI'm running postgres 7.1.3 on linux\n\nHere's the table\n\nCREATE TABLE \"savingsaccount\" (\n \"id\" varchar(3) NOT NULL,\n \"firstname\" varchar(24),\n \"lastname\" varchar(24),\n \"balance\" numeric(10,2),\n CONSTRAINT \"pk_savings_account\" PRIMARY KEY (\"id\")\n);\n\nIf I do the select from my machine I get this in the logs\n\n2001-12-05 12:51:47 [3210] DEBUG: query: select id from\nsavingsaccount where balance between 1 and 200\n2001-12-05 12:51:47 [3210] DEBUG: ProcessQuery\n2\n\nThere is another program running on another machine which get's this\nresult??\n\n\n2001-12-05 12:33:56 [3156] DEBUG: query: select id from\nsavingsaccount where balance between 1.00 and 5.00\n2001-12-05 12:33:56 [3156] ERROR: Unable to identify an operator '>='\nfor types 'numeric' and 'float8'\n\nI even tried with decimals\n\n2001-12-05 12:55:27 [3220] DEBUG: query: select id from\nsavingsaccount where balance between\n0.900000000000000022204460492503130808472\n63336181640625 and 199.900000000000005684341886080801486968994140625\n2001-12-05 12:55:27 [3220] DEBUG: ProcessQuery\n2\n\nAnyone have a clue what's going on here?\n\nDave\n\n", "msg_date": "Wed, 5 Dec 2001 13:09:28 -0500", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Error using between on a numeric" }, 
{ "msg_contents": "> > In 7.0 or earlier has a configure check if memcmp is 8 bit clean. In\n> > 7.1 someone has removed that test and now we get the failure on\n> > SunOS4.\n> \n> I checked out 7.0 using tag REL7_0_PATCHES, and I see this in configure.in:\n> \n> \tAC_FUNC_MEMCMP\n> \n> and this in configure:\n> \n> \techo \"$ac_t\"\"$ac_cv_func_memcmp_clean\" 1>&6\n> \ttest $ac_cv_func_memcmp_clean = no && LIBOBJS=\"$LIBOBJS memcmp.${ac_objext}\"\n> \n> However, I don't see any memcmp.c file, nor in 6.5 either. I wonder if\n> this was removed from configure.in because it was no longer being used.\n\nYou are right. It seems the result of configure check has been never\nused in any release. The LIBOBJS variable is not used anywhere in any\nMakefile. I wonder why we had that check at all. The reason why\nnon-8bit clean memcmp has not been a problem with SunOS4 port was just\nmemcmp's return value was evaluated 0 or not. However, bit data type\nimplementation uses the fact that whether the value is greater than or\nless than 0 and bit type appeared since 7.1. I guess that is the\nreason why we don't see any memcmp problem before 7.1.\n\n> The big question is whether we can backpatch in the memcmp code and have\n> it working cleanly in 7.2.\n\nSo there is no such a thing \"memcmp code\", I guess:-) I think we could\ngive up the bit data type support for SunOS4 port. Apparently SunOS4\nusers have not been used bit data type. So we do not make\nthings worse for 7.2 anyway.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 06 Dec 2001 10:39:57 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4 port" }, { "msg_contents": "On Wed, Dec 05, 2001 at 02:19:31PM +0900, Tatsuo Ishii allegedly wrote:\n> > Tatsuo,\n> > \n> > I've got a SunOS 5.7 machine (in production) which compiles fine. I am\n> > getting the Unix Domain Socket problem in regression (pre parallel\n> > test) however. 
When I can take it down (next weekend) I will increase the\n> > shm settings but is there anything I should do to get around the domain\n> > socket issue (other than switch to internet server)?\n> \n> I have been working on SunOS4, not Solaris.\n> \n> Anyway, I thought your problem has been solved in current, since\n> the accept queue length in listen() is now expanded, no?\n\nAs far as I know this has indeed been fixed.\n\nCheers,\n\nMathijs\n", "msg_date": "Fri, 7 Dec 2001 16:13:34 +0100", "msg_from": "Mathijs Brands <mathijs@ilse.nl>", "msg_from_op": false, "msg_subject": "Re: SunOS4 port" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> The reason why non-8bit clean memcmp has not been a problem with\n> SunOS4 port was just memcmp's return value was evaluated 0 or not.\n> However, bit data type implementation uses the fact that whether the\n> value is greater than or less than 0 and bit type appeared since 7.1.\n> I guess that is the reason why we don't see any memcmp problem before\n> 7.1.\n\nThe return value of memcmp() is also used by bytea and oidvector. As long\nas you don't need comparison results, and memcmp gives wrong results\nconsistently then you might even get away with it, but a disfunctional\noidvector cannot be taken as lightly as the bit types.\n\nI've put the SunOS 4 platform under \"Unsupported\" with the comment\n\"memcmp() does not work correctly, so probably not reliable\". 
Seasoned\nSunOS 4 users might know what that implies.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 19 Dec 2001 19:46:40 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SunOS4 port" }, { "msg_contents": "> Tatsuo Ishii writes:\n> \n> > The reason why non-8bit clean memcmp has not been a problem with\n> > SunOS4 port was just memcmp's return value was evaluated 0 or not.\n> > However, bit data type implementation uses the fact that whether the\n> > value is greater than or less than 0 and bit type appeared since 7.1.\n> > I guess that is the reason why we don't see any memcmp problem before\n> > 7.1.\n> \n> The return value of memcmp() is also used by bytea and oidvector. As long\n> as you don't need comparison results, and memcmp gives wrong results\n> consistently then you might even get away with it, but a disfunctional\n> oidvector cannot be taken as lightly as the bit types.\n> \n> I've put the SunOS 4 platform under \"Unsupported\" with the comment\n> \"memcmp() does not work correctly, so probably not reliable\". Seasoned\n> SunOS 4 users might know what that implies.\n\nI am working with Tatsuo to test the new memcmp.c on SunOS and will\nreport back when the test is completed and all regression tests pass.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 19 Dec 2001 13:53:47 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 port" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The return value of memcmp() is also used by bytea and oidvector. 
As long\n> as you don't need comparison results, and memcmp gives wrong results\n> consistently then you might even get away with it, but a disfunctional\n> oidvector cannot be taken as lightly as the bit types.\n\noidvector only checks the result for equal or not equal to 0. AFAIK,\nthe issue with SunOS memcmp is not that it gets equality wrong, it's\nthat it sorts unequal values wrong. So oidvector will work.\n\nA quick search shows that bit, bytea, and the contrib/tsearch module\nare the only places in 7.2 that care about the sign of memcmp's result.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Dec 2001 14:14:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 port " } ]
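The non-8-bit-clean memcmp() failure described in this thread — B'11011000000' sorting below B'00000000000' because the first byte, 0xd8, is compared as a signed char — can be reproduced and fixed in a few lines. Both functions below are illustrative sketches rather than PostgreSQL code (the signed variant forces `signed char` so the misbehavior shows even on platforms like AIX where plain `char` is unsigned); the point is only that comparing through `unsigned char`, as the C standard actually requires of memcmp(), restores the ordering bit_cmp() relies on.

```c
#include <assert.h>
#include <stddef.h>

/* How a non-8-bit-clean memcmp() misbehaves: comparing bytes as
 * signed chars makes 0xd8 (-40) sort below 0x00. */
static int
signed_memcmp(const void *s1, const void *s2, size_t n)
{
    const signed char *p1 = s1;
    const signed char *p2 = s2;

    for (; n > 0; n--, p1++, p2++)
        if (*p1 != *p2)
            return (*p1 < *p2) ? -1 : 1;
    return 0;
}

/* An 8-bit-clean replacement: compare through unsigned char, which is
 * what the C standard's definition of memcmp() specifies. */
static int
clean_memcmp(const void *s1, const void *s2, size_t n)
{
    const unsigned char *p1 = s1;
    const unsigned char *p2 = s2;

    for (; n > 0; n--, p1++, p2++)
        if (*p1 != *p2)
            return (*p1 < *p2) ? -1 : 1;
    return 0;
}
```

Whether shipping such a substitute for SunOS 4 was worth the trouble is exactly the question raised above; only the sign of the result matters to bit, bytea, and contrib/tsearch, so equality-only callers like oidvector work either way.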
[ { "msg_contents": "Does anybody know why test/regress/pg_regress.sh overrides --port or\nPGPORT? It would be nice if we could do two distinct tests at the same\ntime. But pg_regress.sh forces to use port 65432.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 05 Dec 2001 15:29:47 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "pg_regress.sh overrides PGPORT" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Does anybody know why test/regress/pg_regress.sh overrides --port or\n> PGPORT? It would be nice if we could do two distinct tests at the same\n> time. But pg_regress.sh forces to use port 65432.\n\nWell, it should surely *not* use PGPORT; that's likely to be occupied.\nBut a --port switch doesn't seem unreasonable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Dec 2001 06:32:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_regress.sh overrides PGPORT " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Does anybody know why test/regress/pg_regress.sh overrides --port or\n> > PGPORT? It would be nice if we could do two distinct tests at the same\n> > time. But pg_regress.sh forces to use port 65432.\n> \n> Well, it should surely *not* use PGPORT; that's likely to be occupied.\n> But a --port switch doesn't seem unreasonable.\n\nAdded to TODO:\n\n\t* Add --port flag to regression tests\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Dec 2001 00:22:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_regress.sh overrides PGPORT" } ]
[ { "msg_contents": "This is an intermediate report for AIX 5L port.\n\no gmake check sometimes hangs\n\no pgbench almost always hangs\n\nIn summary, current on AIX 5L is quite unstable (7.1 worked very well).\nFYI, here is the backtrace info while pgbench hangs. This\nincludes backtraces for 3 processes. I will continue to investigate as\nlong as the hardware is available for me.\n\n[ps indicates UPDATE]\nsemop(??, ??, ??) at 0xd02be73c\nIpcSemaphoreLock() at 0x100091d0\nLWLockAcquire() at 0x10019df4\nLockRelease() at 0x10052540\nUnlockRelation() at 0x10051950\nindex_endscan() at 0x10051104\nExecCloseR() at 0x100a0584\nExecEndIndexScan() at 0x100a3a30\nExecEndNode() at 0x1009e960\nEndPlan() at 0x1009d700\nExecutorEnd() at 0x1009e434\nProcessQuery() at 0x100b60e8\npg_exec_query_string() at 0x1001c5b4\nPostgresMain() at 0x1001c0a8\nDoBackend() at 0x10003380\nBackendStartup() at 0x1000287c\nServerLoop() at 0x10002be8\nPostmasterMain() at 0x10004934\nmain() at 0x100004ec\n(dbx) \n\n\n[ps indicates UPDATE]\nsemop(??, ??, ??) 
at 0xd02be73c\nIpcSemaphoreLock() at 0x100091d0\nLWLockAcquire() at 0x10019df4\nLockRelease() at 0x10052540\nUnlockRelation() at 0x10051950\nrelation_close() at 0x10014f1c\nSearchCatCache() at 0x100103f8\nSearchSysCache() at 0x1000daac\nIndexSupportInitialize() at 0x100ae150\nRelationInitIndexAccessInfo() at 0x100332a4\nRelationBuildDesc() at 0x10030a20\nRelationNameGetRelation() at 0x100337b0\nrelation_openr() at 0x10014f84\nindex_openr() at 0x100513dc\nSearchCatCache() at 0x1001022c\nSearchSysCache() at 0x1000daac\npg_aclcheck() at 0x1008d844\nExecCheckRTEPerms() at 0x1009c838\nExecCheckRTPerms() at 0x1009c94c\nExecCheckQueryPerms() at 0x1009cb28\nInitPlan() at 0x1009da10\nExecutorStart() at 0x1009e6f4\nProcessQuery() at 0x100b6028\npg_exec_query_string() at 0x1001c5b4\nPostgresMain() at 0x1001c0a8\nDoBackend() at 0x10003380\nBackendStartup() at 0x1000287c\nServerLoop() at 0x10002be8\nPostmasterMain() at 0x10004934\nmain() at 0x100004ec\n(dbx) \n\n[ps indicates UPDATE waiting]\nsemop(??, ??, ??) at 0xd02be73c\nIpcSemaphoreLock() at 0x100091d0\nProcSleep() at 0x1001d680\nWaitOnLock() at 0x10051e70\nLockAcquire() at 0x10052d00\nXactLockTableWait() at 0x10051580\nheap_update() at 0x100133d0\nExecReplace() at 0x1009ce28\nExecutePlan() at 0x1009d678\nExecutorRun() at 0x1009e564\nProcessQuery() at 0x100b60c4\npg_exec_query_string() at 0x1001c5b4\nPostgresMain() at 0x1001c0a8\nDoBackend() at 0x10003380\nBackendStartup() at 0x1000287c\nServerLoop() at 0x10002be8\nPostmasterMain() at 0x10004934\nmain() at 0x100004ec\n", "msg_date": "Wed, 05 Dec 2001 17:41:04 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Intermediate report for AIX 5L port" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> In summary current on AIX 5L is quite unstable(7.1 worked very well).\n\nThis looks like a problem with the new LWLock logic. I guess I have\nto answer for that ;-). 
I am willing to dig into it if you can get\nme access to the machine.\n\nBTW, is this a single-CPU or multi-CPU machine?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Dec 2001 15:01:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Intermediate report for AIX 5L port " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > In summary current on AIX 5L is quite unstable(7.1 worked very well).\n> \n> This looks like a problem with the new LWLock logic. I guess I have\n> to answer for that ;-). I am willing to dig into it if you can get\n> me access to the machine.\n\nThanks. Since the machine is behind a firewall, I have to negotiate\nwith the network admin. Once I succeed, I will let you know.\n\n> BTW, is this a single-CPU or multi-CPU machine?\n\nIt's a 4-way machine.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 07 Dec 2001 09:13:34 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Intermediate report for AIX 5L port " } ]
[ { "msg_contents": "Hi,\n\nLet me ask you some questions about heap_insert().\nI'm trying to insert tuples into an existing table by using internal\nfunctions,\nbypassing the query executor.\nFor example, let's assume that the table is named \"MyTable\" and has a Box\ntype attribute.\nAnd I want to insert a box '((1,1),(2,2))' into the table.\n\nThe following is an internal function call sequence that I think I\nshould\nfollow.\n\nDoes it make sense?\n\n{\n Relation r;\n Oid oid;\n HeapTuple tup;\n Box box;\n\n r = heap_open(\"MyTable\", RowExclusiveLock);\n \n box.high.x = 2;\n box.high.y = 2;\n box.low.x = 1;\n box.low.y = 1;\n \n ..........................\n \n oid = heap_insert(r, tup);\n heap_close(r, NoLock);\n}\n\nNow, what I don't know is how to set values in HeapTuple tup.\nThe following data structure shows the fields that I need to fill in. \nI found that some fields are set by the heap_insert() function, and I marked\nthem with \"&\". But there are still some fields that need to be set. Especially,\nI have no idea what I need to set for a box, the actual data which is\ngoing to be stored in PostgreSQL's database.\n\nDoes anybody know a good example concerning my question?\n\ntypedef struct HeapTupleHeaderData\n{ \n Oid t_oid; /* & OID of this tuple -- 4\nbytes */\n CommandId t_cmin; /* & insert CID stamp -- 4 bytes\neach */\n CommandId t_cmax; /* delete CommandId stamp */\n TransactionId t_xmin; /* & insert XID stamp -- 4 bytes\neach */\n TransactionId t_xmax; /* & delete XID stamp */\n ItemPointerData t_ctid; /* current TID of this or\nnewer tuple */\n int16 t_natts; /* number of attributes */\n uint16 t_infomask; /* & various infos */\n uint8 t_hoff; /* sizeof() tuple header */\n bits8 t_bits[MinHeapTupleBitmapSize / 8];\n} HeapTupleHeaderData;\n\ntypedef struct HeapTupleData\n{\n uint32 t_len; /* length of *t_data */\n ItemPointerData t_self; /* SelfItemPointer */\n Oid t_tableOid; /* & table the tuple came from\n*/\n MemoryContext t_datamcxt; /* memory context 
of\nallocation */\n HeapTupleHeader t_data; /* -> tuple header and data */\n} HeapTupleData;\n\n\nCheers.\n", "msg_date": "Wed, 05 Dec 2001 12:49:13 +0000", "msg_from": "Seung-Hyun Jeong <jeongs@cs.man.ac.uk>", "msg_from_op": true, "msg_subject": "about heap_insert() function" }, { "msg_contents": "Seung-Hyun Jeong <jeongs@cs.man.ac.uk> writes:\n> I'm trying to insert tuples into a existing table by using intenal\n> functions bypassing the query executor.\n\n> Now, what I don't know is how to set values to HeapTuple tup.\n> The following data structure shows the fields that I need to fill in. \n\nYou shouldn't be touching any of that directly. You want to use\nheap_formtuple to build the tuple. All you need for that is a tuple\ndescriptor (which you get from the open relation), and two arrays of\nDatum values and null flags.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Dec 2001 19:04:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: about heap_insert() function " } ]
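For reference, Tom's suggestion translates into roughly the following 7.2-era backend code. This is an untested sketch that only compiles inside the PostgreSQL source tree: the include list, the use of heap_openr() to open the relation by name, and the single box column of "MyTable" are assumptions carried over from the thread, and all error handling is omitted.

```c
#include "postgres.h"

#include "access/heapam.h"
#include "utils/geo_decls.h"    /* BOX */

/* Sketch: build and insert one tuple ('((1,1),(2,2))') into MyTable
 * via heap_formtuple, per the advice above.  Untested illustration. */
void
insert_one_box(void)
{
    Relation    rel;
    Datum       values[1];
    char        nulls[1];
    HeapTuple   tup;
    BOX         box;

    box.low.x = 1;   box.low.y = 1;
    box.high.x = 2;  box.high.y = 2;

    rel = heap_openr("MyTable", RowExclusiveLock);

    values[0] = PointerGetDatum(&box);  /* box is a by-reference type */
    nulls[0] = ' ';                     /* ' ' = not null, 'n' = null */

    /* heap_formtuple fills in the header fields the question asked
     * about; heap_insert stamps the system fields (xmin, ctid, ...) */
    tup = heap_formtuple(RelationGetDescr(rel), values, nulls);
    heap_insert(rel, tup);

    heap_freetuple(tup);
    heap_close(rel, RowExclusiveLock);
}
```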
[ { "msg_contents": "i have created a table\nuserlist with fields username varchar(10)\nnow i want to change it to username varchar(30)\nwithout dropping or recreating the table.\nhow do i do it?\ntina", "msg_date": "Wed, 5 Dec 2001 18:42:07 +0530", "msg_from": "\"tinar\" <tinarajraj@hotmail.com>", "msg_from_op": true, "msg_subject": "how to chane the type" }, { "msg_contents": "\nOn Wed, 5 Dec 2001, tinar wrote:\n\n> i have created a table\n> userlist with fields username varchar(10)\n> now i want to change it to username varchar(30)\n> without dropping or recreating the table.\n> how do i do it?\n\nThe best way is to recreate the table and rename\nthem around. If you *REALLY* don't want to do\nthat and have a recent backup (yes, I'm serious),\nyou can muck with pg_attribute and change\natttypmod for the attribute in question\n(from 14 to 34).\n\n\n", "msg_date": "Thu, 6 Dec 2001 08:31:09 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: how to chane the type" }, { "msg_contents": "What's the essential problem with changing column types in postgres? 
Is it\nsimilar to the DROP COLUMN problem?\n\nIf the answer is that the table format only has allocated enough space per\nrow for the existing type, then how is it possible that Stephen's hack below\nwill not break things?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-sql-owner@postgresql.org\n> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of Stephan Szabo\n> Sent: Friday, 7 December 2001 12:31 AM\n> To: tinar\n> Cc: pgsql-sql@postgresql.org\n> Subject: Re: [SQL] how to chane the type\n>\n>\n>\n> On Wed, 5 Dec 2001, tinar wrote:\n>\n> > i have created a table\n> > userlist with fields username varchar(10)\n> > now i want to change it to username varchar(30)\n> > without dropping or recreating the table.\n> > how do i do it?\n>\n> The best way is to recreate the table and rename\n> them around. If you *REALLY* don't want to do\n> that and have a recent backup (yes, I'm serious),\n> you can muck with pg_attribute and change\n> atttypmod for the attribute in question\n> (from 14 to 34).\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Fri, 7 Dec 2001 09:45:35 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: how to chane the type" }, { "msg_contents": "\nOn Fri, 7 Dec 2001, Christopher Kings-Lynne wrote:\n\n> What's the essential problem with changing column types in postgres? Is it\n> similar to the DROP COLUMN problem?\n>\n> If the answer is that the table format only has allocated enough space per\n> row for the existing type, then how is it possible that Stephen's hack below\n> will not break things?\n\nThe hack below only works to change the max length of variable length\nattributes and only upward. 
I'd be very wary of trying to change the real\ntype of a value except between ones that are bitwise compatible (like I\nthink varchar and text are technically, but I'm not sure).\n\n> > The best way is to recreate the table and rename\n> > them around. If you *REALLY* don't want to do\n> > that and have a recent backup (yes, I'm serious),\n> > you can muck with pg_attribute and change\n> > atttypmod for the attribute in question\n> > (from 14 to 34).\n\n", "msg_date": "Thu, 6 Dec 2001 18:05:14 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: how to chane the type" }, { "msg_contents": "OK, I'm kind of interested now in how the variable length attributes are\nactually stored on disk, that you are able to increase them, but not\ndecrease?\n\nI would have thought the other way around?\n\nChris\n\n> -----Original Message-----\n> From: Stephan Szabo [mailto:sszabo@megazone23.bigpanda.com]\n> Sent: Friday, 7 December 2001 10:05 AM\n> To: Christopher Kings-Lynne\n> Cc: tinar; pgsql-sql@postgresql.org\n> Subject: RE: [SQL] how to chane the type\n>\n>\n>\n> On Fri, 7 Dec 2001, Christopher Kings-Lynne wrote:\n>\n> > What's the essential problem with changing column types in\n> postgres? Is it\n> > similar to the DROP COLUMN problem?\n> >\n> > If the answer is that the table format only has allocated\n> enough space per\n> > row for the existing type, then how is it possible that\n> Stephen's hack below\n> > will not break things?\n>\n> The hack below only works to change the max length of variable length\n> attributes and only upward. I'd be very wary of trying to change the real\n> type of a value except between ones that are bitwise compatible (like I\n> think varchar and text are technically, but I'm not sure).\n>\n> > > The best way is to recreate the table and rename\n> > > them around. 
If you *REALLY* don't want to do\n> > > that and have a recent backup (yes, I'm serious),\n> > > you can muck with pg_attribute and change\n> > > atttypmod for the attribute in question\n> > > (from 14 to 34).\n>\n\n", "msg_date": "Fri, 7 Dec 2001 10:09:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [SQL] how to change the type" }, { "msg_contents": "\nOn Fri, 7 Dec 2001, Christopher Kings-Lynne wrote:\n\n> OK, I'm kind of interested now in how the variable length attributes are\n> actually stored on disk, that you are able to increase them, but not\n> decrease?\n>\n> I would have thought the other way around?\n\nIIRC, the values are stored as length + data. I think char() might\ndo weird things (I don't know if the trailing spaces are stored), but\nvarchar() and text should be expandable because anything that could have\nfit before should still fit and look the same. Going down is\nproblematic, because if you have a varchar(5) field where one value is say\n'abcd' and you make it varchar(3) what happens?\n\n\n", "msg_date": "Thu, 6 Dec 2001 18:41:09 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] how to change the type" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> IIRC, the values are stored as length + data. I think char() might\n> do weird things (I don't know if the trailing spaces are stored), but\n> varchar() and text should be expandable because anything that could have\n> fit before should still fit and look the same.\n\nYup, exactly.\n\n> Going down is\n> problematic, because if you have a varchar(5) field where one value is say\n> 'abcd' and you make it varchar(3) what happens?\n\nWhat would actually happen right now is nothing: the value would still\nbe 'abcd' and would still read out that way. 
The 3-char limit would\nonly get enforced during inserts and updates of the column.\n\nchar(N) does store the trailing spaces, so altering N would give\nunwanted results: again, existing values would read out with the old\nwidth until updated. You could fix this by issuing\n\n\tUPDATE tab SET col = col\n\nafter tweaking the pg_attribute.atttypmod value. (AFAICS, any \"clean\"\nimplementation would have to do just that internally, with the same\nunpleasant space and speed implications as we've discussed for DROP\nCOLUMN.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 10:36:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] how to change the type " }, { "msg_contents": "On Fri, 7 Dec 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > Going down is\n> > problematic, because if you have a varchar(5) field where one value is say\n> > 'abcd' and you make it varchar(3) what happens?\n>\n> What would actually happen right now is nothing: the value would still\n> be 'abcd' and would still read out that way. The 3-char limit would\n> only get enforced during inserts and updates of the column.\n\nThat's what I figured, but I also assume that'd be \"wrong\" in a pure sense\nsince the value is invalid for the new datatype, so I figure its safer\nto say up only. :)\n\n", "msg_date": "Fri, 7 Dec 2001 13:41:46 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] how to change the type " }, { "msg_contents": "> char(N) does store the trailing spaces, so altering N would give\n> unwanted results: again, existing values would read out with the old\n> width until updated. You could fix this by issuing\n>\n> \tUPDATE tab SET col = col\n>\n> after tweaking the pg_attribute.atttypmod value. 
(AFAICS, any \"clean\"\n> implementation would have to do just that internally, with the same\n> unpleasant space and speed implications as we've discussed for DROP\n> COLUMN.)\n\nCan I take this opportunity to give my little thought on operations like\nthese (alter column type, drop column, etc.?)\n\nIf the DBA had to issue these commands every 5 minutes, then the speed and\nspace implications would be bad, yeah. However, if all I want to do is drop\na column once every 6 months, then I don't really care that the operation\nmight take a minute and might consume lots of disk space...\n\n\nChris\n\n", "msg_date": "Mon, 10 Dec 2001 10:32:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [SQL] how to change the type " } ]
[ { "msg_contents": "I saw the above message on PostgreSQL 7.1.3 on powerpc-ibm-aix4.3.2.0. I\nlooked in the sources for this message and it appears to be a startup\nmessage. I looked at the process and while it may be that the child\nprocesses were killed, the postmaster process is still going back to\nthe day I started it.\n\nI ran this on NetBSD for many months with no problem. Does anyone know of\nany issue that I need to be aware of under AIX? Can someone explain what\nis happening here? I also get a lot of \"The Data Base System is starting up\"\nand \"The Data Base System is in recovery mode\" messages as well.\n\nWe also had an issue the other day with processes that could not be killed.\nEven kill -9 would not kill the backends. Very weird.\n\nThanks.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 5 Dec 2001 11:52:15 -0500 (EST)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "database system was interrupted at..." }, { "msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> I ran this on NetBSD for many months with no problem. Does anyone know of\n> any issue that I need to be aware of under AIX? Can someone explain what\n> is happening here? I also get a lot of \"The Data Base System is starting up\"\n> and \"The Data Base System is in recovery mode\" messages as well.\n\nSounds to me like you are suffering backend crashes; this other stuff is\njust crash recovery activity. Tell us about \"backend unexpectedly quit\"\nlog messages, core dump files, that sort of thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Dec 2001 17:23:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: database system was interrupted at... " } ]
[ { "msg_contents": "I may have some spare hacking time coming up, and I was thinking about\nadding a PL (Scheme) to PostgreSQL. I've found some code out there\nfor a small Scheme interpreter, but it appears to be licensed under a\n\"BSD with advertising clause\" license (as opposed to the \"modern BSD\"\nlicense of PostgreSQL). The license text includes:\n\n Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n\n Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in\n the documentation and/or other materials provided with the\n distribution.\n\nWould it be a problem to include this code in PG? \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n\n", "msg_date": "05 Dec 2001 13:17:39 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": true, "msg_subject": "Licensing" }, { "msg_contents": "> I may have some spare hacking time coming up, and I was thinking about\n> adding a PL (Scheme) to PostgreSQL. I've found some code out there\n> for a small Scheme interpreter, but it appears to be licensed under a\n> \"BSD with advertising clause\" license (as opposed to the \"modern BSD\"\n> license of PostgreSQL). The license text includes:\n> \n> Redistributions of source code must retain the above copyright\n> notice, this list of conditions and the following disclaimer.\n> \n> Redistributions in binary form must reproduce the above copyright\n> notice, this list of conditions and the following disclaimer in\n> the documentation and/or other materials provided with the\n> distribution.\n> \n> Would it be a problem to include this code in PG? \n\nI think it would be fine. I don't think the old/new BSD license is any\nmajor distinction. 
We moved from old to new with no issues a while ago.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Dec 2001 13:48:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Licensing" }, { "msg_contents": "That's not the 'BSD with advertising clause', that's the modern BSD\nlicense, with a standard acknowledgement clause. The orginal advertisement\nclause looks like:\n\n(quoting from http://www.gnu.org/philosophy/bsd.html)\n\n3. All advertising materials mentioning features or use of this software\n must display the following acknowledgement: This product includes\n software developed by the University of California, Berkeley and\n its contributors.\n\nUnless that (or something like it) is in there, you're o.k. BTW, kudos\nfor thinking about licensing issues _before_ doing the work.\n\nRoss\n\nP.S. Which package are you looking at, BTW?\n\n\nOn Wed, Dec 05, 2001 at 01:17:39PM -0500, Doug McNaught wrote:\n> I may have some spare hacking time coming up, and I was thinking about\n> adding a PL (Scheme) to PostgreSQL. I've found some code out there\n> for a small Scheme interpreter, but it appears to be licensed under a\n> \"BSD with advertising clause\" license (as opposed to the \"modern BSD\"\n> license of PostgreSQL). The license text includes:\n> \n> Redistributions of source code must retain the above copyright\n> notice, this list of conditions and the following disclaimer.\n> \n> Redistributions in binary form must reproduce the above copyright\n> notice, this list of conditions and the following disclaimer in\n> the documentation and/or other materials provided with the\n> distribution.\n> \n> Would it be a problem to include this code in PG? 
\n> \n> -Doug\n> -- \n> Let us cross over the river, and rest under the shade of the trees.\n> --T. J. Jackson, 1863\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Wed, 5 Dec 2001 12:58:51 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Licensing" }, { "msg_contents": "On Wed, Dec 05, 2001 at 01:48:33PM -0500, Bruce Momjian wrote:\n> > \n> > Would it be a problem to include this code in PG? \n> \n> I think it would be fine. I don't think the old/new BSD license is any\n> major distinction. We moved from old to new with no issues a while ago.\n\nShame on you Bruce: lawyers don't understand 'just a little change in\nthe license' We've gotten away with the change by making a good faith\neffort to contact all copyright holders, and then assuming that those who\ndid not respond have assigned/abandoned their rights. And, the biggest\nholder of them all, Berkeley, _did_ change their license. Probably a bit\n'squidgy' but I think we'd win it in court, not that it'd come to that.\n\nRoss\n\n", "msg_date": "Wed, 5 Dec 2001 13:05:12 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Licensing" }, { "msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n\n> That's not the 'BSD with advertising clause', that's the modern BSD\n> license, with a standard acknowledgement clause. The orginal advertisement\n> clause looks like:\n\n[...]\n\nAhh, good to know. Sounds like it'll be OK then.\n\n> Unless that (or something like it) is in there, you're o.k. BTW, kudos\n> for thinking about licensing issues _before_ doing the work.\n\nHeh, if there's one thing I hate it's wasted effort. ;)\n\n> P.S. 
Which package are you looking at, BTW?\n\nTinyScheme: \n\nhttp://tinyscheme.sourceforge.net/home.html\n\nIt's not the prettiest code I've ever seen, but it's small, not too\nhard to read, and complete enough to be useful from what I can see. I \nthink it'll be an interesting project.\n\nI am still thinking of maybe writing my own Scheme implementation, but \nthere are already N of them out there. If I can use something that's\ncompatible license-wise and reasonably functional, so much the better.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "05 Dec 2001 14:23:57 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": true, "msg_subject": "Re: Licensing" }, { "msg_contents": "\n\nDoug McNaught wrote:\n\n>\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n>\n>>Yeah, a quick search on freshmet.net found TinyScheme and KSI Scheme as \n>>candidates for an embeddable Scheme interpreter. KSI is described as\n>>'well documented: but the doumentation is in Russian'\n>>\n>\n>Whereas TinyScheme is lightly documented, but in English, so it's a\n>win for me. ;)\n>\nHave you checked SIOD? 
It _seems_ to have the old BSD licence (perhaps \nbecause it\nis not updated since '97 ;)\n\nBut it is small and lightweight (and has connectivity to msql, oracle, \nrdb and sybase)\n\nSCM which was initially based on it is probably a no-no for GPL, but \nactually I think\nthat most PL's could be considered simple add-ons and be distributed in \n./contrib where\nwe have historically been more liberal concerning \"other\" licenses.\n\nGuile is probably out for the same reasons, plus being fat anyway ;)\n\n\n\nRefs:\nSIOD: http://people.delphi.com/gjc/siod.html\nSCM: http://www.swiss.ai.mit.edu/~jaffer/SCM.html\nGuile: http://www.gnu.org/software/guile/guile.html\n------------\nHannu\n\n", "msg_date": "Thu, 06 Dec 2001 02:04:01 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Licensing" }, { "msg_contents": "On Wed, Dec 05, 2001 at 02:23:57PM -0500, Doug McNaught wrote:\n> \"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> > P.S. Which package are you looking at, BTW?\n> \n> TinyScheme: \n> \n> http://tinyscheme.sourceforge.net/home.html\n> \n> It's not the prettiest code I've ever seen, but it's small, not too\n> hard to read, and complete enough to be useful from what I can see. I \n> think it'll be an interesting project.\n> \n> I am still thinking of maybe writing my own Scheme implementation, but \n> there are already N of them out there. If I can use something that's\n> compatible license-wise and reasonably functional, so much the better.\n\nYeah, a quick search on freshmeat.net found TinyScheme and KSI Scheme as \ncandidates for an embeddable Scheme interpreter. KSI is described as\n'well documented: but the documentation is in Russian'\n\nRoss\n", "msg_date": "Wed, 5 Dec 2001 16:08:10 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Licensing" }, { "msg_contents": "\"Ross J. 
Reedstrom\" <reedstrm@rice.edu> writes:\n\n> Yeah, a quick search on freshmet.net found TinyScheme and KSI Scheme as \n> candidates for an embeddable Scheme interpreter. KSI is described as\n> 'well documented: but the doumentation is in Russian'\n\nWhereas TinyScheme is lightly documented, but in English, so it's a\nwin for me. ;)\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "05 Dec 2001 17:38:19 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": true, "msg_subject": "Re: Licensing" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n\n\n> Have you checked SIOD? It _seems_ to have the old BSD licence\n> (perhaps because it is not updated since '97 ;)\n> \n> But it is small and lightweight (and has connectivity to msql,\n> oracle, rdb and sybase)\n\nI did look at it (I actually played with it a long time ago), but it\ndidn't seem to be intended to be used as an extension language,\nwhereas TinyScheme is (you can have multiple interpreters in a\nprocess, each with its own heap etc). \n\n> SCM which was initially based on it is probably a no-no for GPL, but\n> actually I think that most PL's could be considered simple add-ons\n> and be distributed in ./contrib where we have historically been more\n> liberal concernong \"other\" licenses.\n\nTrue, but I'd like to think it's barely conceivable that my work could \ngo into the mainline code (not that Scheme has that big of an\naudience, so it might not be worth it). ;)\n\n> Guile is probably out for same reasons, plus being fat anyway ;)\n\nYeah, bloat city. It's a nice system, but way overkill IMHO for most\nextension-language uses. Not that Perl/Python aren't big too, but\nScheme's supposed to be \"small\".\n\nThanks very much for the suggestions!\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. 
Jackson, 1863\n", "msg_date": "05 Dec 2001 19:10:54 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": true, "msg_subject": "Re: Licensing" }, { "msg_contents": "Doug McNaught wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> \n> > Have you checked SIOD? It _seems_ to have the old BSD licence\n> > (perhaps because it is not updated since '97 ;)\n> >\n> > But it is small and lightweight (and has connectivity to msql,\n> > oracle, rdb and sybase)\n> \n> I did look at it (I actually played with it a long time ago), but it\n> didn't seem to be intended to be used as an extension language,\n> whereas TinyScheme is (you can have multiple interpreters in a\n> process, each with its own heap etc).\n\nIs there a standard way of making \"safe\" (no filesystem/network \naccess etc.) version of TinyScheme ?\n\nJust recently there was much fuss about PL/Python's ability to read\nfiles. Or are you planning to make only an untrusted PL ?\n\n---------------\nHannu\n", "msg_date": "Thu, 06 Dec 2001 12:52:44 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Licensing" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n\n> Is there a standard way of making \"safe\" (no filesystem/network \n> access etc.) version of TinyScheme ?\n\nYeah, very standard: add it to the code. ;) There are no \"hooks\" for\nthat kind of access control currently. I'm not aware of a Scheme\nsystem (except possibly Guile, just because it has everything else:)\nthat has such.\n\n> Just recently there was much fuss about PL/Python's ability to read\n> files. Or are you planning to make only an untrusted PL ?\n\nProbably start off untrusted and add trusted afterward--I'd like to\nget the basics (interface between Scheme and PG datatypes, SPI glue) in\nplace and get familiar with the code. 
It shouldn't be too hard to add \na \"capability mask\" to the interpreter structure and put permission\nchecks in.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "06 Dec 2001 11:47:37 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": true, "msg_subject": "Re: Licensing" }, { "msg_contents": "Ross J. Reedstrom writes:\n\n> Yeah, a quick search on freshmet.net found TinyScheme and KSI Scheme as\n> candidates for an embeddable Scheme interpreter. KSI is described as\n> 'well documented: but the doumentation is in Russian'\n\nSeveral months ago I looked to do exactly what Doug is proposing now.\nThere are a few dozen(!) freely available scheme implementations out there\nthat claim to be embeddable, but I haven't found a single one that a)\ncompiled cleanly, b) was documented, and c) could be used in a way that\nwouldn't require changing the postmaster startup code. Most scheme\nimplementations play weird tricks with the stack for efficiency, but I\ndon't want that kind of thing in PostgreSQL.\n\nGood luck anyway. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 6 Dec 2001 20:31:04 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Licensing" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Several months ago I looked to do exactly what Doug is proposing now.\n> There are a few dozen(!) freely available scheme implementations out there\n> that claim to be embeddable, but I haven't found a single one that a)\n> compiled cleanly, b) was documented, and c) could be used in a way that\n> wouldn't require changing the postmaster startup code. 
Most scheme\n> implementations play weird tricks with the stack for efficiency, but I\n> don't want that kind of thing in PostgreSQL.\n\nThat's one nice thing about TinyScheme--it's a fairly non-tricky\nimplementation (as far as stack and pointer hackery). \n\nAs for (c), I don't anticipate any need to mess with the startup\ncode. An interpreter instance is a self-contained struct that can be\ninstantiated when a Scheme function is invoked, not before (and\ncached of course for later use). \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "06 Dec 2001 15:18:57 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": true, "msg_subject": "Re: Licensing" } ]
[ { "msg_contents": "Hi,\n\nWhile daily using pg_dump for a while the need for the following\nfeatures grew significantly.\nI finally came to the point of implementing them myself ;-)\n\n- pg_dump outputs the data unsorted but to manage the data in a version\ncontrol system you need it consistently sorted. So a flag to sort by\neither primary key or left to right would be of great value. (--sorted\n?)\n\n- pg_dump outputs referential constraints as 3 triggers (near to two\ndifferent tables) per constraint. A mode which outputs the original\nstatement (alter table ... add constraint) would be more sql standard\nconformant, portable and readable. But ... you might get into trouble if\nthe referenced table creation command is output later.\n\nIf we call this switch --sql-standard it might also prefer the short\n(standard compliant) form for index creation [create index X on Y(Z,W)]\nand some other things.\n\nSo, I'm kindly asking for your opinion regarding this two features.\nDoes anybody plan to implement them? Do you have reasons against?\n\n Christof\n\n\n", "msg_date": "Thu, 06 Dec 2001 12:05:16 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "pg_dump: Sorted output, referential integrity statements" }, { "msg_contents": "\nOn Thu, 6 Dec 2001, Christof Petig wrote:\n\n> - pg_dump outputs referential constraints as 3 triggers (near to two\n> different tables) per constraint. A mode which outputs the original\n> statement (alter table ... add constraint) would be more sql standard\n> conformant, portable and readable. But ... you might get into trouble if\n> the referenced table creation command is output later.\n\nThere's some interesting timing things with this. Pretty much the\nalter statements have to be after the creates for all the tables at least\ndue to recursive constraints. 
When you're using insert statements (-d)\nsince the restore doesn't appear to be in a transaction, all the data\nneeds to have been loaded as well (again due to recursive constraints).\nIn fact, there's *no* guarantee that even with a transaction that a\nrestore of the current database state statement by statement will succeed\nsince the user may have done odd things to insert the data.\nIf the data's already there, the alter table is going to check each row\nfor validity which can be kinda slow right now on big restores, we'd\nprobably need to make a better check.\n\n", "msg_date": "Thu, 6 Dec 2001 07:59:10 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity statements" }, { "msg_contents": "At 12:05 6/12/01 +0100, Christof Petig wrote:\n>\n>- pg_dump outputs the data unsorted\n\nNot quite correct; it outputs them in an order that is designed to improve\nthe chances of dependencies being satisfied, and improve the performance of\na full restore (a modified OID order).\n\n> but to manage the data in a version\n>control system you need it consistently sorted. So a flag to sort by\n>either primary key or left to right would be of great value. (--sorted\n>?)\n\nNot really very generalizable when you consider user defined types,\ntriggers etc.\n\n\n>- pg_dump outputs referential constraints as 3 triggers (near to two\n>different tables) per constraint. A mode which outputs the original\n>statement (alter table ... 
add constraint) would be more sql standard\n\nAbsolutely; with time we are moving pg_dump to use standard SQL.\n\n\n>So, I'm kindly asking for your opinion regarding this two features.\n>Does anybody plan to implement them?\n\nNo plans for the first one, but sorting by ('object-type', 'object-name')\nwould be close to trivial, if there is much interest/support for it.\n\nThe second (SQL conformance) is high on my list; a few people (Chris &\nStephen?) have been working hard to implement 'alter table add/etc\nconstraint'. When this is stable, we will move pg_dump in that direction.\nBut as of 7.1, there were still wrinkles in the implementation that\nmeant it was unsuitable for pg_dump. Not sure about the status in 7.2.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 07 Dec 2001 22:54:18 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "Stephan Szabo wrote:\n\nSince nobody answered concerning the sort issue, I guess\n- nobody is planning or implementing this\n- nobody disagrees this might be handy to have\n\n\n> On Thu, 6 Dec 2001, Christof Petig wrote:\n>\n> > - pg_dump outputs referential constraints as 3 triggers (near to two\n> > different tables) per constraint. A mode which outputs the original\n> > statement (alter table ... add constraint) would be more sql standard\n> > conformant, portable and readable. But ... you might get into trouble if\n> > the referenced table creation command is output later.\n>\n> There's some interesting timing things with this. 
Pretty much the\n> alter statements have to be after the creates for all the tables at least\n> due to recursive constraints. When you're using insert statements (-d)\n> since the restore doesn't appear to be in a transaction, all the data\n> needs to have been loaded as well (again due to recursive constraints).\n> In fact, there's *no* guarantee that even with a transaction that a\n> restore of the current database state statement by statement will succeed\n> since the user may have done odd things to insert the data.\n> If the data's already there, the alter table is going to check each row\n> for validity which can be kinda slow right now on big restores, we'd\n> probably need to make a better check.\n\nThe proposal was mainly made to make the output more readable if you dump a\nsingle table (per pg_dump call). This would also use portable sql commands so\nit's easier to migrate data (given that you also specify -D).\n\nYours\n Christof\n\n\n", "msg_date": "Fri, 07 Dec 2001 13:09:49 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Re: pg_dump: Sorted output, referential integrity statements" }, { "msg_contents": "Philip Warner wrote:\n\nAh, yes. Now I remember it was you improving pg_dump.\n\n> At 12:05 6/12/01 +0100, Christof Petig wrote:\n> >\n> >- pg_dump outputs the data unsorted\n>\n> Not quite correct; it outputs them in an order that is designed to improve\n> the chances of dependencies being satisfied, and improve the performance of\n> a full restore (a modified OID order).\n\nThat's perfect - unless you want to diff two pg_dumps\n\n> > but to manage the data in a version\n> >control system you need it consistently sorted. So a flag to sort by\n> >either primary key or left to right would be of great value. (--sorted\n> >?)\n>\n> Not really very generalizable when you consider user defined types,\n> triggers etc.\n\nHmmm. 
But if we have a primary key on columns (A,B,C) and request the data\n'order by A,B,C' this should be portable, shouldn't it?\nIf we don't have a primary key simply ordering by 1,2,3,...n should also work.\nOr am I missing something?\n\n> >- pg_dump outputs referential constraints as 3 triggers (near to two\n> >different tables) per constraint. A mode which outputs the original\n> >statement (alter table ... add constraint) would be more sql standard\n>\n> Abosolutely; with time we are moving pg_dump to use standard SQL.\n\nGreat news.\n\n> >So, I'm kindly asking for your opinion regarding this two features.\n> >Does anybody plan to implement them?\n>\n> No plans for the first one, but sorting by ('object-type', 'object-name')\n> would be close to trivial, if there is much interest/support for it.\n\nI don't understand what you mean by 'sorting by object-type/name', can you\ngive me an example. Simply adding an (optional) order by clause was the one I\nintended.\n\n> The second (SQL conformance) is high on my list; a few people (Chris &\n> Stephen?) have been working hard to implement 'alter table add/etc\n> constraint'. When this is stable, we will move pg_dump in that direction.\n> But as of 7.1, there were still wrinkles in the the implementation that\n> meant it was unsuitable for pg_dump. Not sure about the status in 7.2.\n\nOh, I was targeting 7.2. I can not surely tell about 7.2, but have seen cvs\nlogs implementing similar things.\n\n Christof\n\n\n", "msg_date": "Fri, 07 Dec 2001 15:16:26 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "On Fri, Dec 07, 2001 at 03:16:26PM +0100, Christof Petig wrote:\n> Philip Warner wrote:\n> \n> Ah, yes. 
Now I remember it was you improving pg_dump.\n> \n> > At 12:05 6/12/01 +0100, Christof Petig wrote:\n> > >\n> > >- pg_dump outputs the data unsorted\n> >\n> > Not quite correct; it outputs them in an order that is designed to improve\n> > the chances of dependencies being satisfied, and improve the performance of\n> > a full restore (a modified OID order).\n> \n> That's perfect - unless you want to diff two pg_dumps\n\nI've run into this myself. However, I've never wanted to diff a full dump,\nusually just schema comparisons - I usually _know_ which database has\nthe current data, I just want to be sure I can move it over. For schema\ncomparisons, it's easy enough to generate a 'diffable' file that reflects\nthe schema, something like:\n\nselect relname||'.'||attname from pg_class c, pg_attribute a\nwhere attrelid=c.oid and attnum >0 and relname !~ '^pg' order by\nrelname,attname;\n\nHmm, I do see that sometimes it'd be nice to do a full diff, really. The\n'oid order' was a nice hack to avoid having to do a full dependency\nanalysis on db objects, but they're not stable. I think with oids\ngoing away as much as possible, anyway, we're probably going to have\nto bite the bullet and do dependencies, one way or another. There are\na number of features that are often requested that all boil down to\nknowing dependencies: dropping the auto-generated sequence for a serial,\nalong with the table - reparsing various functions/views/etc. when the\nunderlying tables are modified, etc.\n\nRoss\n\n\n", "msg_date": "Fri, 7 Dec 2001 09:17:16 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "\n> The second (SQL conformance) is high on my list; a few people (Chris &\n> Stephen?) have been working hard to implement 'alter table add/etc\n> constraint'. 
When this is stable, we will move pg_dump in that direction.\n> But as of 7.1, there were still wrinkles in the the implementation that\n> meant it was unsuitable for pg_dump. Not sure about the status in 7.2.\n\nWell, the biggest thing I see on using alter table add constraint for\nforeign keys is the expense involved if you do it after the tables are\npopulated. I chose the theoretical cleanliness of checking each row\nusing the code we had over the speed of doing a special check for the\nalter table case, although I'm considering reversing that for 7.3 to make\nthe alter table more reasonable and make it possible for you to consider\ndoing it.\n\n\n", "msg_date": "Fri, 7 Dec 2001 13:34:32 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "At 15:16 7/12/01 +0100, Christof Petig wrote:\n>>\n>> Not really very generalizable when you consider user defined types,\n>> triggers etc.\n>\n>Hmmm. But if we have a primary key on columns (A,B,C) and request the data\n>'order by A,B,C' this should be portable, shouldn't it?\n>If we don't have a primary key simply ordering by 1,2,3,...n should also\nwork.\n>Or am I missing something?\n\nMy mistake; I thought you wanted to compare metadata. Sorting data by PK\nseems like a reasonable thing to do.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Dec 2001 10:02:50 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "At 10:02 8/12/01 +1100, Philip Warner wrote:\n>At 15:16 7/12/01 +0100, Christof Petig wrote:\n>>>\n>>> Not really very generalizable when you consider user defined types,\n>>> triggers etc.\n>>\n>>Hmmm. But if we have a primary key on columns (A,B,C) and request the data\n>>'order by A,B,C' this should be portable, shouldn't it?\n>>If we don't have a primary key simply ordering by 1,2,3,...n should also\n>work.\n>>Or am I missing something?\n>\n>My mistake; I thought you wanted to compare metadata. Sorting data by PK\n>seems like a reasonable thing to do.\n>\n\nTo make the dump diff-able, we probably need to sort the metadata by type &\nname as well.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Dec 2001 10:21:37 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "> > > but to manage the data in a version\n> > >control system you need it consistently sorted. So a flag to sort by\n> > >either primary key or left to right would be of great value. 
(--sorted\n> > >?)\n> >\n> > Not really very generalizable when you consider user defined types,\n> > triggers etc.\n>\n> Hmmm. But if we have a primary key on columns (A,B,C) and request the data\n> 'order by A,B,C' this should be portable, shouldn't it?\n> If we don't have a primary key simply ordering by 1,2,3,...n\n> should also work.\n> Or am I missing something?\n\nI can see how ordering a dump by the primary key would be a neat way of\n'clustering' your data after a restore, however I have qualms about the\nscalability of such a scheme. What if someone has a 100GB table? They may\nhave arranged things so that they never get a sort from it or something, or\nit might take ages. However I guess if it's an optional parameter it might\nbe neat.\n\nMy feeling is that it won't happen unless you actually code it into a patch\nthat makes it a parameter to pg_dump. Having an actual patch is a great way\nof getting something you want done ;)\n\nAlternatively, have you tried just writing a PERL script (or some clever sed\nscript) that will just sort the COPY FROM sections...?\n\nChris\n\n", "msg_date": "Mon, 10 Dec 2001 10:29:17 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n> > > > but to manage the data in a version\n> > > >control system you need it consistently sorted. So a flag to sort by\n> > > >either primary key or left to right would be of great value. (--sorted\n> > > >?)\n> > >\n> > > Not really very generalizable when you consider user defined types,\n> > > triggers etc.\n> >\n> > Hmmm. 
But if we have a primary key on columns (A,B,C) and request the data\n> > 'order by A,B,C' this should be portable, shouldn't it?\n> > If we don't have a primary key simply ordering by 1,2,3,...n\n> > should also work.\n> > Or am I missing something?\n>\n> I can see how ordering a dump by the primary key would be a neat way of\n> 'clustering' your data after a restore, however I have qualms about the\n> scalability of such a scheme. What if someone has a 100GB table? They may\n> have arranged things so that they never get a sort from it or something, or\n> it might take ages. However I guess if it's an optional parameter it might\n> be neat.\n>\n> My feeling is that it won't happen unless you actually code it into a patch\n> that makes it a parameter to pg_dump. Having an actual patch is a great way\n> of getting something you want done ;)\n>\n> Alternatively, have you tried just writing a PERL script (or some clever sed\n> script) that will just sort the COPY FROM sections...?\n\nThat's beyond my perl skills. And I believe sed to be not the right tool. (hmm,\nperhaps split (at 'COPY FROM' and at '\\.'), then sort, then cat ... 
many\n(perhaps big) temporary files, let the db do the hard work)\n\nBut making a patch to pg_dump is a matter of (say) up to 4 hours.\nI'll do it since you seem to like it and nobody started doing it so far.\n\nChristof\n\n\n", "msg_date": "Mon, 10 Dec 2001 09:21:40 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "> But making a patch to pg_dump is a matter of (say) up to 4 hours.\n> I'll do it since you seem to like it and nobody started doing it so far.\n\nWell, I'm in no way a major developer, so even if I do like it, I don't know\nwhat the chances are of it making its way into the tree.\n\nChris\n\n", "msg_date": "Mon, 10 Dec 2001 16:53:49 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n> > But making a patch to pg_dump is a matter of (say) up to 4 hours.\n> > I'll do it since you seem to like it and nobody started doing it so far.\n>\n> Well, I'm in no way a major developer, so even if I do like it, I don't know\n> what the chances are of it making its way into the tree.\n\nIf I stop using C++ comments '//', the chance might grow better ;-) [I\napologize again]\n\nSince Philip also likes it ...\nI would say it's a good feature to have and up to now most of my patches went\ninto the tree. 
So the chances are not that bad (though definitely not for 7.2).\n\n Christof\n\n\n", "msg_date": "Mon, 10 Dec 2001 12:04:52 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "At 13:34 7/12/01 -0800, Stephan Szabo wrote:\n>\n>Well, the biggest thing I see on using alter table add constraint for\n>foreign keys is the expense involved if you do it after the tables are\n>populated.\n\nIs it really worse than loading the tables with the constraint in place?\n\n\n>I chose the theoretical cleanliness of checking each row\n>using the code we had over the speed of doing a special check for the\n>alter table case,\n\nOut of curiosity - what was the difference?\n\nBy the sounds of it, we may get 'alter table' in pg_dump by 7.3 or 7.4.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 11 Dec 2001 13:36:01 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "\nOn Tue, 11 Dec 2001, Philip Warner wrote:\n\n> At 13:34 7/12/01 -0800, Stephan Szabo wrote:\n> >\n> >Well, the biggest thing I see on using alter table add constraint for\n> >foreign keys is the expense involved if you do it after the tables are\n> >populated.\n>\n> Is it really worse than loading the tables with the constraint in place?\n\nI'd say it's better than while loading, but currently the check isn't\nperformed at all I think, because the create constraint trigger\nstatements are after data load and they don't check the data at all.\nAt least that's how I remember it, I could be wrong.\n\n> >I chose the theoretical cleanliness of checking each row\n> >using the code we had over the speed of doing a special check for the\n> >alter table case,\n>\n> Out of curiosity - what was the difference?\n\nThe check could be performed in a single statement on the fktable with\na not exists (limit 1). I've sort of hoped that the optimizer would\nbe able to potentially pick a better plan than run the subselect once\nfor every row in the fktable. :) But at the time, I wasn't comfortable\nwith mucking with the triggers themselves, and that would have meant\nhaving two things that each had a copy of the fk check logic.\n\n> By the sounds of it, we may get 'alter table' in pg_dump by 7.3 or 7.4.\n\nThat'd be cool. 
:)\n\n", "msg_date": "Mon, 10 Dec 2001 20:56:54 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "Christof Petig wrote:\n\n> Christopher Kings-Lynne wrote:\n>\n> > > But making a patch to pg_dump is a matter of (say) up to 4 hours.\n> > > I'll do it since you seem to like it and nobody started doing it so far.\n> >\n> > Well, I'm in no way a major developer, so even if I do like it, I don't know\n> > what the chances are of it making its way into the tree.\n>\n> If I stop using C++ comments '//', the chance might grew better ;-) [I\n> apologize again]\n>\n> Since Philip also likes it ...\n> I would say it's a good feature to have.\n\nHere's the patch. It's not as efficient as it might be (if dumpTable_order_by had\nindinfo around) but it works. I'm not clear about quoting when using sorted output\nin 'COPY' style. So if anybody has good test cases around (tables with strange\ncharacters), please check it.\n\nAlso I don't know whether the sorting behaviour is sensible when it comes to\ninheritance. 
Can someone using inheritance please check it.\n\nIf you like the patch I'll provide documentation patches.\n\n-----\n\nThis patch implements:\n -T alias '--sort' which sorts by primary key / the columns in output order\n\n Yours\n Christof", "msg_date": "Tue, 11 Dec 2001 12:23:58 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Re: pg_dump: PATCH for Sorted output" }, { "msg_contents": "Stephan Szabo wrote:\n>\n> On Tue, 11 Dec 2001, Philip Warner wrote:\n>\n> > At 13:34 7/12/01 -0800, Stephan Szabo wrote:\n> > >\n> > >Well, the biggest thing I see on using alter table add constraint for\n> > >foreign keys is the expense involved if you do it after the tables are\n> > >populated.\n> >\n> > Is it really worse than loading the tables with the constraint in place?\n>\n> I'd say its better than while loading, but currently the check isn't\n> performed at all I think, because the create constraint trigger\n> statements are after data load and they don't check the data at all.\n> At least that's how I remember it, I could be wrong.\n\n You're not. This discussion came up a couple of times, and\n the answer is always the same.\n\n We don't want to define the constraints with ALTER TABLE\n because this means checking data on restore that doesn't need\n to be checked at all (in theory). If he has a crash of a\n critical system and restores from a dump, I bet the farm that\n he wants it FAST.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 11 Dec 2001 10:34:44 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "\nOn Tue, 11 Dec 2001, Jan Wieck wrote:\n\n> Stephan Szabo wrote:\n> >\n> > On Tue, 11 Dec 2001, Philip Warner wrote:\n> >\n> > > At 13:34 7/12/01 -0800, Stephan Szabo wrote:\n> > > >\n> > > >Well, the biggest thing I see on using alter table add constraint for\n> > > >foreign keys is the expense involved if you do it after the tables are\n> > > >populated.\n> > >\n> > > Is it really worse than loading the tables with the constraint in place?\n> >\n> > I'd say its better than while loading, but currently the check isn't\n> > performed at all I think, because the create constraint trigger\n> > statements are after data load and they don't check the data at all.\n> > At least that's how I remember it, I could be wrong.\n>\n> You're not. This discussion came up a couple of times, and\n> the answer is allways the same.\n>\n> We don't want to define the constraints with ALTER TABLE\n> because this means checking data on restore that doesn't need\n> to be checked at all (in theory). If he has a crash of a\n> critical system and restores from a dump, I bet the farm that\n> he wants it FAST.\n\nI'd say as an optional parameter to dump, it's definitely not a bad idea\n(like the idea of a --sql or whatever) since the user has to explicitly\nask for it. 
I think for the rest of the cases it comes down to what people\nwant it to do.\n\n\n", "msg_date": "Tue, 11 Dec 2001 10:30:38 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "At 10:34 11/12/01 -0500, Jan Wieck wrote:\n>\n> We don't want to define the constraints with ALTER TABLE\n> because this means checking data on restore that doesn't need\n> to be checked at all (in theory). If he has a crash of a\n> critical system and restores from a dump, I bet the farm that\n> he wants it FAST.\n\nThis is just an argument for (a) using ALTER TABLE (since it will \nalso prevent PK indexes being created, and make it FASTer), and \n(b) the ability to 'SET ALL CONSTRAINTS OFF' (or similar) to \nprevent the ALTER TABLE from forcing validation of the constraint.\n\nThe current situation of creating constraint triggers is IMO not \nacceptable in the long term.\n\nThere are also enough people who just restore one table to warrant\nthe ability for pg_dump to optionally run with constraints ON.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 12 Dec 2001 14:03:47 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "If you're going to allow bypassing data integrity checks (great for\nspeed!) perhaps one should be introduced to quickly confirm the\nintegrity of the file itself? A checksum on the first line will\nvalidate the contents through the rest of the file. 
It'll take a few\nminutes to confirm a multi-GB sized file but in comparison to load\ntime it may be worthwhile to look into.\n\nThat way you can ensure it's the same as it was when it was dumped and\nfsck or other accidental editing didn't remove the middle of it.\nIntentional modifications won't be stopped but backups should be\ntreated the same as the database is security wise.\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Philip Warner\" <pjw@rhyme.com.au>\nTo: \"Jan Wieck\" <janwieck@yahoo.com>; \"Stephan Szabo\"\n<sszabo@megazone23.bigpanda.com>\nCc: \"Christof Petig\" <christof@petig-baender.de>; \"PostgreSQL Hackers\"\n<pgsql-hackers@postgresql.org>\nSent: Tuesday, December 11, 2001 10:03 PM\nSubject: Re: [HACKERS] pg_dump: Sorted output, referential integrity\n\n\n> At 10:34 11/12/01 -0500, Jan Wieck wrote:\n> >\n> > We don't want to define the constraints with ALTER TABLE\n> > because this means checking data on restore that doesn't need\n> > to be checked at all (in theory). If he has a crash of a\n> > critical system and restores from a dump, I bet the farm that\n> > he wants it FAST.\n>\n> This is just an argument for (a) using ALTER TABLE (since it will\n> also prevent PK indexes being created, and make it FASTer), and\n> (b) the ability to 'SET ALL CONSTRAINTS OFF' (or similar) to\n> prevent the ALTER TABLE from forcing validation of the constraint.\n>\n> The current situation of creating constraint triggers is IMO not\n> acceptable in the long term.\n>\n> There are also enough people who just restore one table to warrant\n> the ability for pg_dump to optionally run with constraints ON.\n>\n>\n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 
75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Tue, 11 Dec 2001 23:36:50 -0500", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "Jan Wieck writes:\n\n> We don't want to define the constraints with ALTER TABLE\n> because this means checking data on restore that doesn't need\n> to be checked at all (in theory). If he has a crash of a\n> critical system and restores from a dump, I bet the farm that\n> he wants it FAST.\n\nUm, if he has a *crash* of a *critical* system, doesn't he want his data\nchecked before he puts it back online?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 12 Dec 2001 23:25:21 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" }, { "msg_contents": "Peter Eisentraut wrote:\n> Jan Wieck writes:\n>\n> > We don't want to define the constraints with ALTER TABLE\n> > because this means checking data on restore that doesn't need\n> > to be checked at all (in theory). If he has a crash of a\n> > critical system and restores from a dump, I bet the farm that\n> > he wants it FAST.\n>\n> Um, if he has a *crash* of a *critical* system, doesn't he want his data\n> checked before he puts it back online?\n\n The data came (in theory!!!) from an intact, consistent\n database. 
So the dump content is (theoretically) known to be\n consistent, thus no check required.\n\n The difference between theory and practice? There is none,\n theoretically :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 12 Dec 2001 18:19:11 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Sorted output, referential integrity" } ]
[ { "msg_contents": "Hello,\n\nWe have to evaluate the possibility to integrate an open source RDBMS in our\nsoftware developments.\nI checked some stuff on Postgres, but I now have to find out whether\nPostgres integrates Stored Procedures as a feature.\n\nCan anyone tell me if it does ? In that case, my company would use this\nrdbms for several future products.\n\nMany thanks in advance.\n\nAdditionally, if anyone could give me some pointers to it in the huge\npostgres doc...\n\nRegards,\n\nJ�r�me Courat\nJava developer.\n\n\n", "msg_date": "Thu, 6 Dec 2001 14:45:18 +0100", "msg_from": "\"J���r���me Courat\" <jerome.courat@gecko.fr.eu.org>", "msg_from_op": true, "msg_subject": "[BASIC FEATURES] stored procedures in Postgresql ?" }, { "msg_contents": "\"J�r�me Courat\" <jerome.courat@gecko.fr.eu.org> writes:\n\n> Hello,\n> \n> We have to evaluate the possibility to integrate an open source RDBMS in our\n> software developments.\n> I checked some stuff on Postgres, but I now have to find out whether\n> Postgres integrates Stored Procedures as a feature.\n\nDepends on your definition of \"stored procedure\". Postgres allows\nuser-written functions, stored in the database, and callable from\nqueries. What these functions can't currenty do is return result\nsets, which is what a lot of people mean by \"stored procedure\".\n\nHowever, it's my understanding that in 7.2 (which is currently in\nbeta) functions can return open cursors, which gives you a lot of the\nsame functionality as returning result sets.\n\nAlso, functions can be written in several languages, including Perl,\nPython, and Tcl as well as straight C and PGSQL (which is similar to\nOracle's PL/SQL).\n\n> Can anyone tell me if it does ? In that case, my company would use this\n> rdbms for several future products.\n\nI hope my response has been helpful.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. 
Jackson, 1863\n", "msg_date": "06 Dec 2001 13:13:18 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: [BASIC FEATURES] stored procedures in Postgresql ?" }, { "msg_contents": "On Thu, Dec 06, 2001 at 01:13:18PM -0500, Doug McNaught wrote:\n> However, it's my understanding that in 7.2 (which is currently in\n> beta) functions can return open cursors, which gives you a lot of the\n> same functionality as returning result sets.\n\nGives it also the possibility to returning result sets to the client ??\n\nI want to code a scenario (e.g. within a rule) like:\n\n id = nextval('idseq');\n INSERT INTO tab ( id, ... ) VALUES ( id, ... );\n /* return the result of the following query to the user: */\n SELECT * FROM tab WHERE tab.id = id;\n\nThe problem is that there is no way to put the value of the `id'\nvariable into the last query, when the last query is put into a place,\nwhere its result set is returned to the client (e.g. as the last query\nin a rule).\n\nCan I return an open cursor to the client ? 
Can I otherwise return the\nresult set of an open cursor, which was returned by a server-side\nfunction, to the client ?\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Fri, 7 Dec 2001 08:30:47 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: Using Cursor in PostgreSQL 7.2" }, { "msg_contents": "Holger Krug <hkrug@rationalizer.com> writes:\n\n> On Thu, Dec 06, 2001 at 01:13:18PM -0500, Doug McNaught wrote:\n> > However, it's my understanding that in 7.2 (which is currently in\n> > beta) functions can return open cursors, which gives you a lot of the\n> > same functionality as returning result sets.\n> \n> Gives it also the possibility to returning result sets to the client ??\n\nSee the docs; I don't know much more about it than what I\nposted--haven't played with it myself yet.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "07 Dec 2001 12:07:40 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Using Cursor in PostgreSQL 7.2" }, { "msg_contents": "C functions returning sets are entirely possible, and there's even some\ndocumentation about how to do it in src/backend/utils/fmgr/README (which\nneeds to be transposed to present tense and moved into the SGML docs,\nbut it's better than nothing).\n\nThere is at least one simple example in the 7.2 sources: see \npg_stat_get_backend_idset() in src/backend/utils/adt/pgstatfuncs.c,\nand observe its usage in the pg_stat views, eg at the bottom of\nhttp://developer.postgresql.org/docs/postgres/monitoring-stats.html\n\nThere is not presently any support for this sort of thing in plpgsql\nor any of the other PL languages, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 12:38:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using Cursor in PostgreSQL 7.2 " }, { 
"msg_contents": "Tom Lane wrote:\n\n> C functions returning sets are entirely possible, and there's even some\n> documentation about how to do it in src/backend/utils/fmgr/README (which\n> needs to be transposed to present tense and moved into the SGML docs,\n> but it's better than nothing).\n> \n> There is at least one simple example in the 7.2 sources: see \n> pg_stat_get_backend_idset() in src/backend/utils/adt/pgstatfuncs.c,\n> and observe its usage in the pg_stat views, eg at the bottom of\n> http://developer.postgresql.org/docs/postgres/monitoring-stats.html\n> \n\n\nIt looks like the stats monitoring functions suffer from the same \nlimitation that I hit with dblink:\n\n\nlt_lcat=# SELECT pg_stat_get_backend_pid(S.backendid) AS procpid, \npg_stat_get_backend_activity(S.backendid) AS current_query FROM (SELECT \npg_stat_get_backend_idset() AS backendid) AS S; \n procpid | current_query\n---------+---------------\n 12713 |\n 12762 |\n(2 rows)\n\n\nlt_lcat=# SELECT pg_stat_get_backend_pid(S.backendid) AS procpid, \npg_stat_get_backend_activity(S.backendid) AS current_query FROM (SELECT \npg_stat_get_backend_idset() AS backendid) AS S where \npg_stat_get_backend_pid(S.backendid) = 12713;\nERROR: Set-valued function called in context that cannot accept a set\n\n\nlt_lcat=# SELECT pg_stat_get_backend_pid(S.backendid) AS procpid, \npg_stat_get_backend_activity(S.backendid) AS current_query FROM (SELECT \npg_stat_get_backend_idset() AS backendid UNION ALL SELECT 1 WHERE FALSE) \nAS S where pg_stat_get_backend_pid(S.backendid) = 12713;\n procpid | current_query\n---------+---------------\n 12713 |\n(1 row)\n\nThe UNION is ugly but allows it to work. 
Tom discussed the reason this \nis needed on: http://fts.postgresql.org/db/mw/msg.html?mid=120239.\n\nJoe\n\n\n\n", "msg_date": "Fri, 07 Dec 2001 10:45:23 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Using Cursor in PostgreSQL 7.2" }, { "msg_contents": "On Fri, Dec 07, 2001 at 12:38:03PM -0500, Tom Lane wrote:\n> C functions returning sets are entirely possible, and there's even some\n> documentation about how to do it in src/backend/utils/fmgr/README (which\n> needs to be transposed to present tense and moved into the SGML docs,\n> but it's better than nothing).\n\nThank you ! I very appreciate your answer.\n\nSo what I have to do to let a PL/PGSQL function to return a set to the\nclient is:\n\n1) let the PL/PGSQL return a cursor\n2) write a general C wrapper function cursor_to_set(cursor) which gets a\n cursor and returns the result set\n\nMy additional questions:\n\n* Is 2) possible if the nature of the cursor is not known in advance ?\n* Is the implementation of cursor_to_set very complicated or can it done\n with the documentation cited in your mail ?\n\nI think such a function cursor_to_set, if possible, would be very\nuseful, wouldn't it ?\n\n> There is not presently any support for this sort of thing in plpgsql\n> or any of the other PL languages, however.\n\nHaving `cursor_to_set' it would be half as bad !\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Fri, 7 Dec 2001 20:06:11 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: Using Cursor in PostgreSQL 7.2" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> It looks like the stats monitoring functions suffer from the same \n> limitation that I hit with dblink:\n\nUrgh, you're right:\n\nregression=# select * from pg_stat_activity;\n datid | datname | procpid | usesysid | usename | current_query \n---------+------------+---------+----------+----------+---------------\n 
3833396 | regression | 2625 | 1 | postgres | \n(1 row)\n\nregression=# select * from pg_stat_activity where procpid = 2625;\nERROR: Set-valued function called in context that cannot accept a set\nregression=# \n\nThis probably qualifies as a \"must fix\" problem. I guess I'll have to\nadd the test for set-valued functions that I was reluctant to add\nbefore.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 16:38:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using Cursor in PostgreSQL 7.2 " }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> It looks like the stats monitoring functions suffer from the same \n> limitation that I hit with dblink:\n\nI've added the missing checks in the planner; possibly you could get rid\nof that UNION hack now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Dec 2001 18:53:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using Cursor in PostgreSQL 7.2 " }, { "msg_contents": "Tom Lane wrote:\n\n> I've added the missing checks in the planner; possibly you could get rid\n> of that UNION hack now.\n> \n\n\n*Moved to hackers*\n\nI confirmed the UNION hack is no longer required. Thanks! Is it too late \nto change the README in contrib/dblink?\n\n\nA side issue I noticed is that recent changes to contrib/*/Makefile seem \nto cause 'MODULE_PATHNAME' in *.sql.in files to become \n'$libdir/modulename' in the resulting *.sql files. 
Example:\n\nin rtree_gist.sql.in:\n-- define the GiST support methods\ncreate function gbox_consistent(opaque,box,int4) returns bool as \n'MODULE_PATHNAME' language 'C';\n\nbecomes in rtree_gist.sql:\n-- define the GiST support methods\ncreate function gbox_consistent(opaque,box,int4) returns bool as \n'$libdir/rtree_gist' language 'C';\n\nSame thing happens in (at least) dblink.sql, fuzzystrmatch.sql, and \narray_iterator.sql.\n\nI'm not sure right off how to fix it though :(\n\nJoe\n\n", "msg_date": "Wed, 12 Dec 2001 09:42:45 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Using Cursor in PostgreSQL 7.2" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> I confirmed the UNION hack is no longer required. Thanks! Is it too late \n> to change the README in contrib/dblink?\n\nNo, I don't think that's a problem. Send a patch.\n\n> A side issue I noticed is that recent changes to contrib/*/Makefile seem \n> to cause 'MODULE_PATHNAME' in *.sql.in files to become \n> '$libdir/modulename' in the resulting *.sql files. Example:\n\nThis is correct behavior now; in fact, full paths in CREATE FUNCTION\nshould be deprecated...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Dec 2001 12:53:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Using Cursor in PostgreSQL 7.2 " }, { "msg_contents": "Tom Lane wrote:\n\n> Joe Conway <joseph.conway@home.com> writes:\n> \n>>I confirmed the UNION hack is no longer required. Thanks! Is it too late \n>>to change the README in contrib/dblink?\n>>\n> \n> No, I don't think that's a problem. 
Send a patch.\n> \n\nHere's a (documentation only) patch for the contrib/dblink README.\n\nThanks,\n\nJoe", "msg_date": "Wed, 12 Dec 2001 20:08:22 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Using Cursor in PostgreSQL 7.2" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n> Tom Lane wrote:\n> \n> > Joe Conway <joseph.conway@home.com> writes:\n> > \n> >>I confirmed the UNION hack is no longer required. Thanks! Is it too late \n> >>to change the README in contrib/dblink?\n> >>\n> > \n> > No, I don't think that's a problem. Send a patch.\n> > \n> \n> Here's a (documentation only) patch for the contrib/dblink README.\n> \n> Thanks,\n> \n> Joe\n\n> *** README.dblink.orig\tMon Jun 18 12:09:50 2001\n> --- README.dblink\tWed Dec 12 19:57:34 2001\n> ***************\n> *** 82,88 ****\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Dec 2001 05:49:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Using Cursor in PostgreSQL 7.2" } ]
[ { "msg_contents": "Hi,\n\n I had the same problem with gprof(1) that all the timing\n information is zero. Today I found out what's going on.\n\n Under Linux fork(2) resets the ITIMER_PROF. Getting the\n itimer before the fork(2) with getitimer(2) and if pid==0\n setting it with setitimer(2) to what it was makes profiling\n work.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 6 Dec 2001 10:04:30 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": true, "msg_subject": "Profiling under Linux" } ]
[ { "msg_contents": "I just found out something about Oracle which that looks like something\nthat could be doable in PostgreSQL. \n\nWhat do you all think:\n\nOracle's version is something like this:\n\ncreate [public] database link using [...]\n\nselect * from sometable@remotelink\n\n\nI was thinking how this could be done with postgreSQL. How hard would it\nbe to make something that is similar to a view, but executes a query\nremotely? I envision something like this:\n\ncreate [public] link name query using [...]\n\nThe table link will be similar to a view. It could be used like this:\n\nCREATE LINK test as select * from test WITH 'user=postgres host=remote\ndb=data';\n\nSELECT * from test;\n\nor \n\nSELECT * from fubar join test on (fubar.id = test.id) ;\n\nSo, what do you think? Impossible, possible, too hard? too easy?\n", "msg_date": "Thu, 06 Dec 2001 13:28:04 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Remote connections?" }, { "msg_contents": "On Thu, Dec 06, 2001 at 01:28:04PM -0500, mlw wrote:\n> I just found out something about Oracle which that looks like something\n> that could be doable in PostgreSQL. \n> \n> What do you all think:\n> \n> Oracle's version is something like this:\n> \n> create [public] database link using [...]\n> \n> select * from sometable@remotelink\n> \n> \n> I was thinking how this could be done with postgreSQL. How hard would it\n> be to make something that is similar to a view, but executes a query\n> remotely? I envision something like this:\n> \n> create [public] link name query using [...]\n> \n> The table link will be similar to a view. It could be used like this:\n> \n> CREATE LINK test as select * from test WITH 'user=postgres host=remote\n> db=data';\n> \n> SELECT * from test;\n> \n> or \n> \n> SELECT * from fubar join test on (fubar.id = test.id) ;\n> \n> So, what do you think? Impossible, possible, too hard? too easy?\n\nHere we come, full circle. 
This is just about where I came on board.\nMany moons ago, I started looking at Mariposa, in the hopes of forward\npatching it into PostgreSQL, and generalizing the 'remote' part to allow\nexactly the sort of access you described above.\n\nThe biggest problem with this is transactional semantics: you need\ntwo-stage commits to get this right, and we don't hav'em. (Has there\nbeen an indepth discussion concerning what how hard it would be to do\nthat with postgresql?) \n\nThe _actual_ biggest problem was my lack of knowledge of the PostgreSQL\ncodebase ;-)\n\nRoss\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n", "msg_date": "Thu, 6 Dec 2001 14:06:16 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Remote connections?" }, { "msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> On Thu, Dec 06, 2001 at 01:28:04PM -0500, mlw wrote:\n> > I just found out something about Oracle which that looks like something\n> > that could be doable in PostgreSQL.\n> >\n> > What do you all think:\n> >\n> > Oracle's version is something like this:\n> >\n> > create [public] database link using [...]\n> >\n> > select * from sometable@remotelink\n> >\n> >\n> > I was thinking how this could be done with postgreSQL. How hard would it\n> > be to make something that is similar to a view, but executes a query\n> > remotely? I envision something like this:\n> >\n> > create [public] link name query using [...]\n> >\n> > The table link will be similar to a view. It could be used like this:\n> >\n> > CREATE LINK test as select * from test WITH 'user=postgres host=remote\n> > db=data';\n> >\n> > SELECT * from test;\n> >\n> > or\n> >\n> > SELECT * from fubar join test on (fubar.id = test.id) ;\n> >\n> > So, what do you think? Impossible, possible, too hard? too easy?\n> \n> Here we come, full circle. 
This is just about where I came on board.\n> Many moons ago, I started looking at Mariposa, in the hopes of forward\n> patching it into PostgreSQL, and generalizing the 'remote' part to allow\n> exactly the sort of access you described above.\n> \n> The biggest problem with this is transactional semantics: you need\n> two-stage commits to get this right, and we don't hav'em. (Has there\n> been an indepth discussion concerning what how hard it would be to do\n> that with postgresql?)\n> \n> The _actual_ biggest problem was my lack of knowledge of the PostgreSQL\n> codebase ;-)\n\nI think we can we can dispense worrying about two stage commits, if we\nassume that remote connections are treated as views with no rules. As\nlong as remote tables are \"read only\" then the implementation is much\neasier.\n\nI too find the internals of PostgreSQL virtually incomprehensible at the\ninternal level. If there were a document somewhere which published how a\nfunction could return multiple tuples, remote views would be a trivial\nundertaking. It could look like:\n\nselect * from remote('select *from table', 'user=postgres host=outland\ndb=remote');\n", "msg_date": "Thu, 06 Dec 2001 15:21:13 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Remote connections?" }, { "msg_contents": "mlw wrote:\n\n> I too find the internals of PostgreSQL virtually incomprehensible at the\n> internal level. If there were a document somewhere which published how a\n> function could return multiple tuples, remote views would be a trivial\n> undertaking. It could look like:\n> \n> select * from remote('select *from table', 'user=postgres host=outland\n> db=remote');\n> \n\nSee contrib/dblink in the 7.2 beta. It was my attempt inspired by \nOracle's dblink and some code that you (mlw) posted a while back. 
Based \non the limitations wrt returning muliple tuples, I wound up with \nsomething like:\n\nlt_lcat=# select dblink_tok(t1.dblink_p,0) as f1 from (select \ndblink('hostaddr=127.0.0.1 dbname=template1 user=postgres \npassword=postgres','select proname from pg_proc') as dblink_p) as t1;\n\nWhich, as has been pointed out more than once, is pretty ugly, but at \nleast it's a start ;-)\n\n\nJoe\n\n", "msg_date": "Thu, 06 Dec 2001 14:03:24 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Remote connections?" }, { "msg_contents": "On Thu, 6 Dec 2001, mlw wrote:\n\n> I too find the internals of PostgreSQL virtually incomprehensible at the\n> internal level. If there were a document somewhere which published how a\n> function could return multiple tuples, remote views would be a trivial\n> undertaking. It could look like:\n> \n> select * from remote('select *from table', 'user=postgres host=outland\n> db=remote');\nThis isn't possible yet. I was working on implementation of this, about\n80% done, but never finished. Now I'm out of time to work more on this for\na while. :(\n\nLet me know if you want my code.\n\n-alex\n\n", "msg_date": "Thu, 6 Dec 2001 18:23:22 -0500 (EST)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Remote connections?" }, { "msg_contents": "Hey this looks really cool. It looks like something I was thinking about doing.\nI have a couple suggestions that could make it a little better, I hope you will\nnot be offended. (If you want my help, I'll chip in!)\n\nWhy not use a binary cursor? That way native types can slip through without the\noverhead of conversion.\n\nRight now you get all rows up front, you may be able to increase overall\nperformance by fetching only a few rows at a time, rather than get everything\nall at once. (Think on the order of 4 million rows from your remote query!)\nExecute the commit at the end of processing. 
There are even some asynchronous\nfunctions you may be able to utilize to reduce the I/O bottleneck. Use the\nsynchronous function first, then before you return initiate an asynchronous\nread. Every successive pass through the function, read the newly arrived tuple,\nand initiate the next asynchronous read. (The two machine could be processing\nthe query simultaneously, and this could even IMPROVE performance over a single\nsystem for heavy duty queries.)\n\nSetup a hash table for field names, rather than requiring field numbers. (Keep\nfield number for efficiency, of course.)\n\nYou could eliminate having to pass the result pointer around by keeping a\nstatic array in your extension. Use something like Oracle's \"contains\" notation\nof result number. Where each instantiation of \"contains()\" and \"score()\"\nrequire an id. i.e. 1,2,3,40 etc. Then hash those numbers into an array. I have\nsome code that does this for a PostgreSQL extension (it implements contains) on\nmy website (pgcontains, under download). It is ugly but works for the most\npart.\n\nSeriously, your stuff looks great. I think it could be the beginning of a\nfairly usable system for my company. Any help you need/want, just let me know.\n\n\nJoe Conway wrote:\n> \n> mlw wrote:\n> \n> > I too find the internals of PostgreSQL virtually incomprehensible at the\n> > internal level. If there were a document somewhere which published how a\n> > function could return multiple tuples, remote views would be a trivial\n> > undertaking. It could look like:\n> >\n> > select * from remote('select *from table', 'user=postgres host=outland\n> > db=remote');\n> >\n> \n> See contrib/dblink in the 7.2 beta. It was my attempt inspired by\n> Oracle's dblink and some code that you (mlw) posted a while back. 
Based\n> on the limitations wrt returning muliple tuples, I wound up with\n> something like:\n> \n> lt_lcat=# select dblink_tok(t1.dblink_p,0) as f1 from (select\n> dblink('hostaddr=127.0.0.1 dbname=template1 user=postgres\n> password=postgres','select proname from pg_proc') as dblink_p) as t1;\n> \n> Which, as has been pointed out more than once, is pretty ugly, but at\n> least it's a start ;-)\n> \n> Joe\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Fri, 07 Dec 2001 00:06:01 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Remote connections?" }, { "msg_contents": "On Fri, 7 Dec 2001, mlw wrote:\n\n>\n> You could eliminate having to pass the result pointer around by keeping a\n> static array in your extension. Use something like Oracle's \"contains\" notation\n> of result number. Where each instantiation of \"contains()\" and \"score()\"\n> require an id. i.e. 1,2,3,40 etc. Then hash those numbers into an array. I have\n> some code that does this for a PostgreSQL extension (it implements contains) on\n> my website (pgcontains, under download). It is ugly but works for the most\n> part.\n\ncontrib/intarray does this job very well\n\n>\n> Seriously, your stuff looks great. I think it could be the beginning of a\n> fairly usable system for my company. Any help you need/want, just let me know.\n>\n>\n> Joe Conway wrote:\n> >\n> > mlw wrote:\n> >\n> > > I too find the internals of PostgreSQL virtually incomprehensible at the\n> > > internal level. If there were a document somewhere which published how a\n> > > function could return multiple tuples, remote views would be a trivial\n> > > undertaking. It could look like:\n> > >\n> > > select * from remote('select *from table', 'user=postgres host=outland\n> > > db=remote');\n> > >\n> >\n> > See contrib/dblink in the 7.2 beta. 
It was my attempt inspired by\n> > Oracle's dblink and some code that you (mlw) posted a while back. Based\n> > on the limitations wrt returning muliple tuples, I wound up with\n> > something like:\n> >\n> > lt_lcat=# select dblink_tok(t1.dblink_p,0) as f1 from (select\n> > dblink('hostaddr=127.0.0.1 dbname=template1 user=postgres\n> > password=postgres','select proname from pg_proc') as dblink_p) as t1;\n> >\n> > Which, as has been pointed out more than once, is pretty ugly, but at\n> > least it's a start ;-)\n> >\n> > Joe\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 7 Dec 2001 10:37:04 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Remote connections?" }, { "msg_contents": "The dblink code is a very cool idea.\n\nIt got me thinking, what if, just thinking out load here, it was redesigned as\nsomething a little more grandeous.\n\nImagine this:\n\n\nselect dblink('select * from table', 'table_name', 'db=oracle.test user=chris\npasswd=knight', 1) as t1, dblink('table2_name', 1) as t2\n\n\nJust something to think about. 
\n\nThe first instance of dblink would take 4 parameters: query, table which it\nreturns, connect string, and link token.\n\nThe second instance of dblink would just take the name of the table which it\nreturns and a link token.\n\nThe cool bit is the notion that the query string could specify different\ndatabases or even .DBF libraries. \n\nJust something to think about.\n\nIt would REALLY be great if functions could return multiple tuples!\n", "msg_date": "Fri, 07 Dec 2001 08:40:00 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Remote connections?" }, { "msg_contents": "mlw wrote:\n\n> Hey this looks really cool. It looks like something I was thinking about doing.\n> I have a couple suggestions that could make it a little better, I hope you will\n> not be offended. (If you want my help, I'll chip in!)\n> \n\n\nThanks! Suggestions welcomed.\n\n> Why not use a binary cursor? That way native types can slip through without the\n> overhead of conversion.\n>\n\n\nI wasn't sure that would work. Would you create dblink_tok as returning \nopaque then?\n\n \n> Right now you get all rows up front, you may be able to increase overall\n> performance by fetching only a few rows at a time, rather than get everything\n> all at once. (Think on the order of 4 million rows from your remote query!)\n> Execute the commit at the end of processing. There are even some asynchronous\n> functions you may be able to utilize to reduce the I/O bottleneck. Use the\n> synchronous function first, then before you return initiate an asynchronous\n> read. Every successive pass through the function, read the newly arrived tuple,\n> and initiate the next asynchronous read. (The two machine could be processing\n> the query simultaneously, and this could even IMPROVE performance over a single\n> system for heavy duty queries.)\n\n\nInteresting . . . 
but aren't there some issues with the asynch functions?\n\n> \n> Setup a hash table for field names, rather than requiring field numbers. (Keep\n> field number for efficiency, of course.)\n> \n> You could eliminate having to pass the result pointer around by keeping a\n> static array in your extension. Use something like Oracle's \"contains\" notation\n> of result number. Where each instantiation of \"contains()\" and \"score()\"\n> require an id. i.e. 1,2,3,40 etc. Then hash those numbers into an array. I have\n> some code that does this for a PostgreSQL extension (it implements contains) on\n> my website (pgcontains, under download). It is ugly but works for the most\n> part.\n> \n\n\nI thought about the static array, but I'm not familiar with Oracle \ncontains() and score() -- I'm only fluent enough with Oracle to be \ndangerous. Guess I'll have to dig out the books . . .\n\n\n> Seriously, your stuff looks great. I think it could be the beginning of a\n> fairly usable system for my company. Any help you need/want, just let me know.\n> \n\nI am planning to improve dblink during the next release cycle, so I'll \nkeep all this in mind (and might take you up on the help offer too!). I \nwas hoping we'd have functions returning tuples by now, which would \nimprove this extension dramatically. Unfortunately, it sounds like Alex \nwon't have time to finish that even for 7.3 :(\n\nAlex, can we get a look at your latest code? Is it any different the \nyour last submission to PATCHES?\n\nJoe\n\n", "msg_date": "Fri, 07 Dec 2001 09:26:47 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Remote connections?" } ]
[ { "msg_contents": "I am trying to compile postgres on solaris 7. After running the configure\nscript and then the make I get the following error...\n\n/home/eccdev/kzorgdra/tmp/postgresql-7.1.3>make\nmake -C doc all\nmake[1]: Entering directory `/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/doc'\nmake[1]: Nothing to be done for `all'.\nmake[1]: Leaving directory `/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/doc'\nmake -C src all\nmake[1]: Entering directory `/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/src'\nmake -C backend all\nmake[2]: Entering directory\n`/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/src/backend'\nmake -C access all\nmake[3]: Entering directory\n`/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/src/backend/access'\nmake -C common SUBSYS.o\nmake[4]: Entering directory\n`/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/src/backend/access/common'\ngcc -O2 -funroll-loops -fexpensive-optimizations -I/usr/local/include -Wall \n-Wmissing-prototypes -Wmissing-declarations\n/../src/include -O2 -funroll-loops -fexpensive-optimizations -I/usr/local/in\nclude -c -o heaptuple.o heaptuple.c\n/usr/ccs/bin/as: \"/var/tmp/ccLq9TVj.s\", line 806: error: unknown opcode\n\".subsection\"\n/usr/ccs/bin/as: \"/var/tmp/ccLq9TVj.s\", line 806: error: statement syntax\n/usr/ccs/bin/as: \"/var/tmp/ccLq9TVj.s\", line 816: error: unknown opcode\n\".previous\"\n/usr/ccs/bin/as: \"/var/tmp/ccLq9TVj.s\", line 816: error: statement syntax\nmake[4]: *** [heaptuple.o] Error 1\nmake[4]: Leaving directory\n`/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/src/backend/access/common'\nmake[3]: *** [common-recursive] Error 2\nmake[3]: Leaving directory\n`/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/src/backend/access'\nmake[2]: *** [access-recursive] Error 2\nmake[2]: Leaving directory\n`/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/src/backend'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/home/eccdev/kzorgdra/tmp/postgresql-7.1.3/src'\nmake: *** [all] Error 2\n\n\nThanks.\n\nKelby\n\n", "msg_date": 
"Thu, 6 Dec 2001 15:32:45 -0700", "msg_from": "\"Kelby Zorgdrager\" <kelby@ecarcredit.com>", "msg_from_op": true, "msg_subject": "Help Building PostgreSQL on Solaris" }, { "msg_contents": "\"Kelby Zorgdrager\" <kelby@ecarcredit.com> writes:\n\n> I am trying to compile postgres on solaris 7. After running the configure\n> script and then the make I get the following error...\n\n> gcc -O2 -funroll-loops -fexpensive-optimizations -I/usr/local/include -Wall \n> -Wmissing-prototypes -Wmissing-declarations\n> /../src/include -O2 -funroll-loops -fexpensive-optimizations -I/usr/local/in\n> clude -c -o heaptuple.o heaptuple.c\n> /usr/ccs/bin/as: \"/var/tmp/ccLq9TVj.s\", line 806: error: unknown opcode\n> \".subsection\"\n\nIt seems that gcc is generating code that the assembler doesn't like.\nDoes this gcc install work for compiling other programs? I'd assume\nso since you got through the 'configure' stage, but...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "10 Dec 2001 13:04:41 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Help Building PostgreSQL on Solaris" }, { "msg_contents": "I just compiled a version on Solaris 2.6. I used the Sun\ncompilers (not gcc). No errors (or nothing I remember).\nThe version I used is the one I got off of the RedHat 7.2 CD.\nNot sure what version it is, but it compiles. Does it help :-)\n\nKelby Zorgdrager wrote:\n> \n> I am trying to compile postgres on solaris 7. After running the configure\n> script and then the make I get the following error...\n> \n> .....", "msg_date": "Mon, 10 Dec 2001 15:37:06 -0700", "msg_from": "Doug Royer <Doug@royer.com>", "msg_from_op": false, "msg_subject": "Re: Help Building PostgreSQL on Solaris" }, { "msg_contents": "The problem seems to be that GCC is using the sun assembler\n(/usr/ccs/bin/as). Is the GNU assembler installed? 
\n\nGavin\n\nOn Mon, 10 Dec 2001, Doug Royer wrote:\n\n> \n> I just compiled a version on Solaris 2.6. I used the Sun\n> compilers (not gcc). No errors (or nothing I remember).\n> The version I used is the one I got off of the RedHat 7.2 CD.\n> Not sure what version it is, but it compiles. Does it help :-)\n> \n> Kelby Zorgdrager wrote:\n> > \n> > I am trying to compile postgres on solaris 7. After running the configure\n> > script and then the make I get the following error...\n> > \n> > .....\n\n\n\n", "msg_date": "Tue, 11 Dec 2001 12:56:54 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Help Building PostgreSQL on Solaris" }, { "msg_contents": "Hi all,\n\nHave you checked the Solaris specific installation guide for PostgreSQL?\n\nhttp://techdocs.postgresql.org/installguides.php#solaris\n\nI know it works with gcc 2.95.3, but I haven't downloaded the Sun Forte\nnor Sun Workshop compilers and tried with them. If you're using gcc,\nthis guide is known to work.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nGavin Sherry wrote:\n> \n> The problem seems to be that GCC is using the sun assembler\n> (/usr/ccs/bin/as). Is the GNU assembler installed?\n> \n> Gavin\n> \n> On Mon, 10 Dec 2001, Doug Royer wrote:\n> \n> >\n> > I just compiled a version on Solaris 2.6. I used the Sun\n> > compilers (not gcc). No errors (or nothing I remember).\n> > The version I used is the one I got off of the RedHat 7.2 CD.\n> > Not sure what version it is, but it compiles. Does it help :-)\n> >\n> > Kelby Zorgdrager wrote:\n> > >\n> > > I am trying to compile postgres on solaris 7. 
After running the configure\n> > > script and then the make I get the following error...\n> > >\n> > > .....\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 11 Dec 2001 19:08:18 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Help Building PostgreSQL on Solaris" } ]
[ { "msg_contents": "\nMorning all ...\n\n\tWell, just spend the past few days banging my head against a brick\nwall trying to figure out why OpenACS 4.x won't work with PgSQL v7.2b3,\nand just figured it out, or, at least, figured out part of it ...\n\n\tv7.2b3 no longer has an OID on pg_attribute?\n\n\tThe following works great in v7.1.3, but fails in v7.b3:\n\n select upper(c.relname) as table_name,\n upper(a.attname) as column_name,\n d.description as comments\n from pg_class c,\n pg_attribute a\n left outer join pg_description d on (a.oid = d.objoid)\n where c.oid = a.attrelid\n and a.attnum > 0;\n\n\tIn v7.1.3, it retuns:\n\n table_name | column_name | comments\n---------------------------------+-----------------+----------\n PG_TYPE | TYPNAME |\n PG_TYPE | TYPOWNER |\n PG_TYPE | TYPLEN |\n PG_TYPE | TYPPRTLEN |\n PG_TYPE | TYPBYVAL |\n PG_TYPE | TYPTYPE |\n PG_TYPE | TYPISDEFINED |\n PG_TYPE | TYPDELIM |\n\n\tIn v7.2b3, it returns:\n\nERROR: No such attribute or function 'oid'\narthur_acs=#\n\n\tIs this intentional? :(\n\n", "msg_date": "Thu, 6 Dec 2001 21:47:22 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "OIDs missing in pg_attribute?" }, { "msg_contents": "\nOn Thu, 6 Dec 2001, Marc G. Fournier wrote:\n\n> \tWell, just spend the past few days banging my head against a brick\n> wall trying to figure out why OpenACS 4.x won't work with PgSQL v7.2b3,\n> and just figured it out, or, at least, figured out part of it ...\n>\n> \tv7.2b3 no longer has an OID on pg_attribute?\n\nI believe so. 
My guess would be that it cut down the OID usage per\ntable greatly.\n\n> \tThe following works great in v7.1.3, but fails in v7.b3:\n>\n> select upper(c.relname) as table_name,\n> upper(a.attname) as column_name,\n> d.description as comments\n> from pg_class c,\n> pg_attribute a\n> left outer join pg_description d on (a.oid = d.objoid)\n> where c.oid = a.attrelid\n> and a.attnum > 0;\n\nI think the test would now be d.objoid=c.oid and d.objsubid=a.attnum\nSo,\nselect upper(c.relname) as table_name,\n upper(a.attname) as column_name,\n d.description as comments\nfrom (pg_class c join pg_attribute a on (c.oid=a.attrelid) left outer join\npg_description d on (d.objsubid=a.attnum and d.objoid=c.\\\noid)) where a.attnum>0;\n\n\n\n", "msg_date": "Thu, 6 Dec 2001 22:19:20 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: OIDs missing in pg_attribute?" }, { "msg_contents": "[2001-12-06 21:47] Marc G. Fournier said:\n| \n| Morning all ...\n| \n| \tWell, just spend the past few days banging my head against a brick\n| wall trying to figure out why OpenACS 4.x won't work with PgSQL v7.2b3,\n| and just figured it out, or, at least, figured out part of it ...\n| \n| \tv7.2b3 no longer has an OID on pg_attribute?\n\nnope. It appears to have been removed around 10 Aug 2001.\n\n| \tThe following works great in v7.1.3, but fails in v7.b3:\n| \n| select upper(c.relname) as table_name,\n| upper(a.attname) as column_name,\n| d.description as comments\n| from pg_class c,\n| pg_attribute a\n| left outer join pg_description d on (a.oid = d.objoid)\n| where c.oid = a.attrelid\n| and a.attnum > 0;\n\nsee if this does what you need. 
Notice the col_description() function\nthat obviates the need for pg_attribute.oid...\n\nSELECT upper(c.relname) as table_name, \n upper(a.attname) as column_name, \n col_description(a.attrelid, a.attnum) as comments\nFROM pg_class c\n LEFT JOIN pg_attribute a \n ON a.attrelid = c.oid \nWHERE a.attnum > 0;\n\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Fri, 7 Dec 2001 01:40:28 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: OIDs missing in pg_attribute?" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> \tv7.2b3 no longer has an OID on pg_attribute?\n\nYup.\n\n> \tIs this intentional? :(\n\nYup.\n\n> \tThe following works great in v7.1.3, but fails in v7.b3:\n\n> select upper(c.relname) as table_name,\n> upper(a.attname) as column_name,\n> d.description as comments\n> from pg_class c,\n> pg_attribute a\n> left outer join pg_description d on (a.oid = d.objoid)\n> where c.oid = a.attrelid\n> and a.attnum > 0;\n\nThis would not work anyway in 7.2, since the primary key of\npg_description is now (objoid,classoid,objsubid) not just (objoid).\nI'd recommend using col_description(a.attrelid, a.attnum) rather\nthan the explicit join against pg_description.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 10:42:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OIDs missing in pg_attribute? 
" }, { "msg_contents": "On Fri, Dec 07, 2001 at 10:42:24AM -0500,\n Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'd recommend using col_description(a.attrelid, a.attnum) rather\n> than the explicit join against pg_description.\n\nI couldn't find any documentation for this function in the function\nsection of the development docs or using a search with google.\n", "msg_date": "Fri, 7 Dec 2001 14:09:19 -0600", "msg_from": "Bruno Wolff III <bruno@[66.92.219.49]>", "msg_from_op": false, "msg_subject": "Re: OIDs missing in pg_attribute?" } ]
[ { "msg_contents": "In my attempts of trying to increase performance and redundancy, I\nhave trying to get rServ replication to work.\n\nI have successfully been able to replicate between two databases on\nlocalhost.\n\n test -> Main db\n test_slave -> Slave db\n\nThe 'test' database is located in PGDATA (/var/lib/pgsql/data), and\n'test_slave' in PGDATA2 (/var/lib/pgsql/data2). Works fine (although\nI'm a little unhappy about the replication speed).\n\nNow, I'd like to have PGDATA in a ram disk (we're only expecting a\nmaximum of 10-15Mb of data). The problem is if the machine is being\nreset (hardware vice) or if it crashes. Then the ram disk is\nlost. This is where PGDATA2 comes into play...\n\nI've been experimenting with renitialize the PGDATA directory\nstructure, and doing a\n\n insert into pg_database values\n ('test_slave', 26, 0, 'f', 't', 18539, 'PGDATA2');\n\nAlso, a link from PGDATA2/base/709432 -> PGDATA/base/709432 is made\n(this is what the 'original' PGDATA directory/pg_database table have).\n\nUnfortunatly this give me:\n\nFATAL 1: Database \"test_slave\" does not exist.\n The database subdirectory '/var/lib/pgsql/data/base/18720' is missing.\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@bayour.com\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Gothenburg/Sweden\n\ndomestic disruption Albanian [Hello to all my fans in domestic\nsurveillance] Iran toluene Noriega Nazi 767 plutonium colonel Serbian\nPanama cryptographic radar subway\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "07 Dec 2001 11:32:46 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "restoring a shadow" }, { "msg_contents": "Sorry, forgot the vacuum :)\n\nWorks like charm now!\n\n-- \n Turbo __ _ 
Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@bayour.com\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Gothenburg/Sweden\n\nSouth Africa nitrate Marxist colonel attack ammonium NSA SEAL Team 6\nPanama Albanian North Korea Iran killed jihad genetic\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "07 Dec 2001 12:43:29 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Re: restoring a shadow" }, { "msg_contents": ">>>>> \"Turbo\" == Turbo Fredriksson <turbo@bayour.com> writes:\n\n Turbo> Sorry, forgot the vacuum :) Works like charm now!\n\nA _LITTLE_ premature though... I'd really like to know what the\ndatabase is/was named, so I can automate the restore.. There might\nbe many db's that's replicated...\n\nIs there such a way, or am I overlooking something?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@bayour.com\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Gothenburg/Sweden\n\nkibo Kennedy Honduras Nazi Delta Force Cuba pits PLO SDI subway iodine\nspy [Hello to all my fans in domestic surveillance] nitrate attack\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "07 Dec 2001 12:53:01 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Re: restoring a shadow" }, { "msg_contents": "Turbo Fredriksson wrote:\n> \n> In my attempts of trying to increase performance and redundancy, I\n> have trying to get rServ replication to work.\n> \n> I have successfully been able to replicate between two databases on\n> localhost.\n> \n> test -> 
Main db\n> test_slave -> Slave db\n> \n> The 'test' database is located in PGDATA (/var/lib/pgsql/data), and\n> 'test_slave' in PGDATA2 (/var/lib/pgsql/data2). Works fine (although\n> I'm a little unhappy about the replication speed).\n> \n> Now, I'd like to have PGDATA in a ram disk (we're only expecting a\n> maximum of 10-15Mb of data). The problem is if the machine is being\n> reset (hardware vice) or if it crashes. Then the ram disk is\n> lost. This is where PGDATA2 comes into play...\n\nWhy bother with a RAM disk? If you only have a few megabytes, why not just\nallocate a large number of buffers to PostgreSQL. Most, if not everything\nshould end up in RAM. Up your shared memory limites and give tones to\nPostgreSQL. We do that where I work, and I have seen 100% cache hit rate on\nsome queries.\n", "msg_date": "Fri, 07 Dec 2001 09:04:57 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: restoring a shadow" }, { "msg_contents": ">>>>> \"mlw\" == mlw <markw@mohawksoft.com> writes:\n\n mlw> Turbo Fredriksson wrote:\n >> Now, I'd like to have PGDATA in a ram disk (we're only\n >> expecting a maximum of 10-15Mb of data). The problem is if the\n >> machine is being reset (hardware vice) or if it crashes. Then\n >> the ram disk is lost. This is where PGDATA2 comes into play...\n\n mlw> Why bother with a RAM disk? If you only have a few megabytes,\n mlw> why not just allocate a large number of buffers to\n mlw> PostgreSQL.\n\nSeems like I have to. I couldn't reproduce the success, so :)\n\nMust have been having a stale process or something with the original\ndb in place...\n\n\nI'll try this out and see if we can speed things up that way. 
Thanx.\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@bayour.com\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Gothenburg/Sweden\n\niodine BATF FSF plutonium genetic Ortega critical AK-47 congress\nAlbanian Panama radar Uzi Treasury Iran\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "07 Dec 2001 15:29:51 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Re: restoring a shadow" } ]
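mlw's advice above (skip the RAM disk and give PostgreSQL enough shared buffers to hold the whole database) can be sketched as a postgresql.conf fragment. The numbers are illustrative assumptions, not recommendations: 7.2-era `shared_buffers` counts 8 kB pages, and the kernel's shared-memory limits (SHMMAX/SHMALL) must be raised to match.

```
# Illustrative only: enough buffer space to cache a ~10-15 MB database.
shared_buffers = 4096        # 4096 x 8 kB pages = 32 MB of shared buffers
```

With the working set fully resident, reads hit shared buffers instead of disk — the "100% cache hit rate" mlw describes — while writes still reach durable storage, unlike the RAM-disk setup that loses PGDATA on a reset.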
[ { "msg_contents": "I've updated the National Language Support status page, at\n\nhttp://www.ca.postgresql.org/~petere/nls.php\n\nNot only does it show all kinds of progress numbers, it also allows you to\ndownload .po files that are baked freshly every day for your working\npleasure. Thus, it's no longer necessary to keep an up-to-date source\ntree and all the tools around. Furthermore, any errors in the translation\nfiles will pop up automatically as well.\n\nIn somewhat related news, I've made a new release of my \"BSD Gettext\"\ndistribution:\n\nhttp://www.ca.postgresql.org/~petere/gettext.html\n\nThe msgfmt program is now a full replacement for the GNU equivalent for\nall relevant functionality, so all the tools required for the user side\nare now available.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 7 Dec 2001 13:05:21 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "New NLS status page" }, { "msg_contents": "----- Original Message ----- \nFrom: Peter Eisentraut <peter_e@gmx.net>\nSent: Friday, December 07, 2001 7:05 AM\n\n> I've updated the National Language Support status page, at\n> \n> http://www.ca.postgresql.org/~petere/nls.php\n\nThat's very cool!\n\nHowever the status table format isn't the best one.\nIt'll keep growing sidewise with every new language\nadded and will force to scroll horizontally the browser's\nwindow. 
Plus, the older stats you used to have of \nhow many messages translated, fuzzy translations, and\nuntranslated I also find useful.\n\nMay I suggest the following table format:\n\n-----------------------------------------------------------------------------\n| Lang/Component| libpg | pg_dump | postgres | pgsql | Lang Total |\n+---------------+-------+---------+---------------+------------+-------------\n+ cs + 0 + 0 + 0 + 89% + 1 (89) |\n+ + + + + t99\\f5\\u25 | |\n+---------------+-------+---------+---------------+------------+------------+\n+ de + 100 + 100 + 20 + 100 + 4 (80) |\n+ + + + t15\\f25\\u2000 + + |\n.....\n+ 7 langs + 4(100)+ ........\n+---------------+-------+---------+---------------+------------+------------+\n\n\n(Sorry for being such an @n@l :))\n\n--\nSerguei A. Mokhov\n\n", "msg_date": "Sat, 8 Dec 2001 14:27:59 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: New NLS status page" }, { "msg_contents": "> I've updated the National Language Support status page, at\n>\n> http://www.ca.postgresql.org/~petere/nls.php\n>\n> Not only does it show all kinds of progress numbers, it also allows you to\n> download .po files that are baked freshly every day for your working\n> pleasure. Thus, it's no longer necessary to keep an up-to-date source\n> tree and all the tools around. 
Furthermore, any errors in the translation\n> files will pop up automatically as well.\n\nLooks like you might have to transpose that table at the top of the page\nonce there's a few more translations :)\n\nChris\n\n", "msg_date": "Mon, 10 Dec 2001 10:20:46 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: New NLS status page" }, { "msg_contents": "Christopher Kings-Lynne writes:\n\n> Looks like you might have to transpose that table at the top of the page\n> once there's a few more translations :)\n\nThere will be a few more translated programs and libraries down the left,\ntoo.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 10 Dec 2001 19:43:21 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: New NLS status page" }, { "msg_contents": "----- Original Message ----- \nFrom: Peter Eisentraut <peter_e@gmx.net>\nSent: Monday, December 10, 2001 1:43 PM\n\n> Christopher Kings-Lynne writes:\n> \n> > Looks like you might have to transpose that table at the top of the page\n> > once there's a few more translations :)\n> \n> There will be a few more translated programs and libraries down the left,\n> too.\n\nBut these are more or less fixed, whereas the number of languages\nwill grow more rapidly (hopefully).\n\n--\nSerguei A. Mokhov\n\n", "msg_date": "Tue, 11 Dec 2001 01:22:07 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: New NLS status page" } ]
[ { "msg_contents": "We've got most platforms ironed out, with just a few left to get a\ndefinitive report. It looks like we'll end up dropping a few platforms\nfor this release (the first time in several years that the number of\nsupported platforms decreased!).\n\nThe problem platforms with comments and questions are:\n\nLinux/arm Mark Knox\n Obsolete platform? gcc no longer supported?\nLinux/s390 Neale Ferguson\n Likely small user base.\n Anyone actively running on S390?\nNetBSD/arm32 Patrick Welche\nNetBSD/m68k Bill Studenmund\n Bill, you thought you might get the old iron tested.\n Any luck?\nNetBSD/VAX Tom I. Helbekkmo\n Any VAXen out there nowadays?\nQNX Bernd Tegge, Igor Kovalenko\n Anyone tested 4.x with 7.2,\n or are we stuck with QNX6 needing patches?\nSunOS Tatsuo Ishii\n Are we giving up on this one? Still relevant?\nWindows/Cygwin Daniel Horak\n OK in serial test, trouble with parallel test?\nShowstopper??\nWindows/native Magnus Hagander (clients only)\n Any reports?\n\n\nAnd those reported as successful:\n\nAIX Andreas Zeugswetter (Tatsuo working on 5L?)\nBeOS Cyril Velter\nBSD/OS Bruce\nFreeBSD Chris Kings-Lynne\nHPUX Tom (anyone tested 11.0 or higher?)\nIRIX Luis Amigo\nLinux/Alpha Tom\nLinux/MIPS Hisao Shibuya\nLinux/PPC Tom\nLinux/sparc Doug McNaught\nLinux/x86 Thomas (and many others ;)\nMacOS-X Gavin Sherry\nNetBSD/Alpha Thomas Thai\nNetBSD/PPC Bill Studenmund\nNetBSD/sparc Matthew Green\nNetBSD/x86 Bill Studenmund\nOpenBSD/sparc Brandon Palmer\nOpenBSD/x86 Brandon Palmer\nSCO OpenUnix Larry Rosenman\nSolaris/sparc Andrew Sullivan\nSolaris/x86 Martin Renters\nTru64 Alessio Bragadini (trouble with 5.1?)\n", "msg_date": "Fri, 07 Dec 2001 16:14:52 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Third call for platform testing" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> HPUX Tom (anyone tested 11.0 or higher?)\n\nI thought we had a success report from someone for 
HPUX 11.\n\nBTW, I'm hoping to help Tatsuo look into the reported instability\non AIX 5L. I'm guessing it's some unportable assumption in the\nnew LWLock code about behavior of semaphores. If we're really\nlucky this might also extend to the reported Cygwin problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 12:09:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing " }, { "msg_contents": "> > HPUX Tom (anyone tested 11.0 or higher?)\n> I thought we had a success report from someone for HPUX 11.\n\nYup. I have it in my (uncommitted) sgml list, but have been cutting and\npasting from previous emails and forgot to update this one.\n\n> BTW, I'm hoping to help Tatsuo look into the reported instability\n> on AIX 5L. I'm guessing it's some unportable assumption in the\n> new LWLock code about behavior of semaphores. If we're really\n> lucky this might also extend to the reported Cygwin problem...\n\nGreat. Do we have other folks looking at cygwin too, or is it dead in\nthe water unless you come up with something?\n\n - Thomas\n", "msg_date": "Fri, 07 Dec 2001 17:12:53 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "Thomas Lockhart writes:\n\n> QNX Bernd Tegge, Igor Kovalenko\n> Anyone tested 4.x with 7.2,\n> or are we stuck with QNX6 needing patches?\n\nPlease note that QNX 4 and QNX 6 are completely different operating\nsystems that happen to come from the same company. 
So you don't want to\nhave one entry for both of them.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 10 Dec 2001 14:08:07 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "...\n> Please note that QNX 4 and QNX 6 are completely different operating\n> systems that happen to come from the same company. So you don't want to\n> have one entry for both of them.\n\nOh, of course. Don't know why one would assume that they are versions of\nthe *same* OS ;)\n\n From recent emails, it looks like QNX 4 should be listed as supported,\nand QNX 6 could/should be listed as \"supported with patches\". Or should\nthe status for 6 be something different?\n\n - Thomas\n", "msg_date": "Mon, 10 Dec 2001 16:30:52 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> SunOS Tatsuo Ishii\n> Are we giving up on this one? Still relevant?\n\nWhat should we do? The only remaining issue is a non-8-bit-clean\nmemcmp, which seems pretty easy to fix it.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 11 Dec 2001 10:10:35 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> ...\n> > Please note that QNX 4 and QNX 6 are completely different operating\n> > systems that happen to come from the same company. So you don't want to\n> > have one entry for both of them.\n> \n> Oh, of course. Don't know why one would assume that they are versions of\n> the *same* OS ;)\n> \n> >From recent emails, it looks like QNX 4 should be listed as supported,\n> and QNX 6 could/should be listed as \"supported with patches\". 
Or should\n> the status for 6 be something different?\n\nThat seems right to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Dec 2001 03:38:22 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> > SunOS Tatsuo Ishii\n> > Are we giving up on this one? Still relevant?\n> \n> What should we do? The only remaining issue is a non-8-bit-clean\n> memcmp, which seems pretty easy to fix it.\n\nYes, seems we could go a few directions with SunOS:\n\n\tLeave bit types broken on that platform, document it\n\tHard-code in a memcmp() in C for just that platform in varbit.c\n\tAdd configure test and real memcmp() function for bad platforms\n\nAnyone want to vote on these? Personally, SunOS seems like the\ngranddaddy of ports and I would hate to see it leave, especially when we\nare so close.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Dec 2001 03:41:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> NetBSD/arm32 Patrick Welche\n\nNot too good:\n\n CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n+ ERROR: cannot read block 3 of hash_i4_index: Bad address\n CREATE INDEX hash_name_index ON hash_name_heap USING hash (random name_ops);\n+ ERROR: cannot read block 3 of hash_name_index: Bad address\n CREATE INDEX hash_txt_index ON hash_txt_heap USING hash (random text_ops);\n+ ERROR: cannot read block 3 of hash_txt_index: Bad address\n CREATE INDEX hash_f8_index ON hash_f8_heap USING hash (random float8_ops);\n+ ERROR: cannot read block 3 of hash_f8_index: Bad address\n -- CREATE INDEX hash_ovfl_index ON hash_ovfl_heap USING hash (x int4_ops);\n\nso create_index failed in a runcheck. (leading to sanity_check failing too)\n\nPatrick\n", "msg_date": "Thu, 13 Dec 2001 18:57:51 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n>> NetBSD/arm32 Patrick Welche\n> Not too good:\n\n> CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n> + ERROR: cannot read block 3 of hash_i4_index: Bad address\n\nIIRC there was a similar report awhile back --- try searching the\narchives. 
I don't recall the resolution, and since I'm on a very slow\ndialup link at the moment, I'm not eager to do the search myself.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Dec 2001 18:11:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing " }, { "msg_contents": "On Thu, Dec 13, 2001 at 06:11:26PM -0500, Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> >> NetBSD/arm32 Patrick Welche\n> > Not too good:\n> \n> > CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n> > + ERROR: cannot read block 3 of hash_i4_index: Bad address\n> \n> IIRC there was a similar report awhile back --- try searching the\n> archives. I don't recall the resolution, and since I'm on a very slow\n> dialup link at the moment, I'm not eager to do the search myself.\n\nYes - 13 Apr 2001 - that was reported against NetBSD-1.5T/vax. I was trying\nNetBSD-1.5U/arm32 - I'll try upgrading to NetBSD-1.5Z/arm32 and see if it\nchanges anything.. (Will take a few days though..)\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 13 Dec 2001 23:42:10 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> > SunOS Tatsuo Ishii\n> > Are we giving up on this one? Still relevant?\n> \n> What should we do? The only remaining issue is a non-8-bit-clean\n> memcmp, which seems pretty easy to fix it.\n\nOK, here is a patch to allow SunOS to pass the regression tests. One\nattachment is the patch, which is quite small except that the line\nnumbers in configure changed with autoconf, and that bloated the patch. \nThe second attachment is the file memcmp.c which should be placed in\nsrc/utils. I got this from NetBSD.\n\nI tested the patch several ways. First I tested the regression tests\nwithout the new memcmp(). Then I enabled the memcmp(). 
Thirdly, I had\nthe memcmp() always return '1' on non-equals rather than +/- 1, and saw\nthe same failures Tatuso saw.\n\nOne interesting item is that I had to compile with\nbackend/utils/adt/varbit.c with -fno-builtin because my gcc 2.X manual\nsays:\n\n -fno-builtin\n Don't recognize built-in functions that do not be-\n gin with two leading underscores. Currently, the\n functions affected include _exit, abort, abs, allo-\n ca, cos, exit, fabs, labs, memcmp, memcpy, sin,\n sqrt, strcmp, strcpy, and strlen.\n\nSo if you don't give that flag on BSD/OS, the memcmp() is inlined and\nnever called. Nice feature. Tatsuo, is there a newer compiler for\nSunOS that will do this and bypass the broken libc memcmp() on that\nplatform?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.158\ndiff -c -r1.158 configure.in\n*** configure.in\t2001/12/13 22:00:22\t1.158\n--- configure.in\t2001/12/17 05:22:05\n***************\n*** 809,814 ****\n--- 809,817 ----\n AC_FUNC_ACCEPT_ARGTYPES\n PGAC_FUNC_GETTIMEOFDAY_1ARG\n \n+ # SunOS doesn't handle negative byte comparisons properly with +/- return\n+ PGAC_FUNC_MEMCMP\n+ \n AC_CHECK_FUNCS([fcvt getopt_long memmove pstat setproctitle setsid sigprocmask sysconf waitpid dlopen fdatasync])\n \n dnl Check whether <unistd.h> declares fdatasync().\nIndex: config/c-library.m4\n===================================================================\nRCS file: /cvsroot/pgsql/config/c-library.m4,v\nretrieving revision 1.9\ndiff -c -r1.9 c-library.m4\n*** config/c-library.m4\t2001/09/07 19:52:53\t1.9\n--- config/c-library.m4\t2001/12/17 05:22:05\n***************\n*** 36,41 ****\n--- 36,65 ----\n fi])# PGAC_FUNC_GETTIMEOFDAY_1ARG\n \n \n+ # 
PGAC_FUNC_MEMCMP\n+ # -----------\n+ # Check if memcmp() properly handles negative bytes and returns +/-.\n+ # SunOS does not.\n+ # AC_FUNC_MEMCMP\n+ AC_DEFUN(PGAC_FUNC_MEMCMP,\n+ [AC_CACHE_CHECK(for 8-bit clean memcmp, pgac_cv_func_memcmp_clean,\n+ [AC_TRY_RUN([\n+ main()\n+ {\n+ char c0 = 0x40, c1 = 0x80, c2 = 0x81;\n+ exit(memcmp(&c0, &c2, 1) < 0 && memcmp(&c1, &c2, 1) < 0 ? 0 : 1);\n+ }\n+ ], pgac_cv_func_memcmp_clean=yes, pgac_cv_func_memcmp_clean=no,\n+ pgac_cv_func_memcmp_clean=no)])\n+ if test $pgac_cv_func_memcmp_clean = no ; then\n+ MEMCMP=memcmp.o\n+ else\n+ MEMCMP=\n+ fi\n+ AC_SUBST(MEMCMP)dnl\n+ ])\n+ \n+ \n # PGAC_UNION_SEMUN\n # ----------------\n # Check if `union semun' exists. Define HAVE_UNION_SEMUN if so.\nIndex: src/Makefile.global.in\n===================================================================\nRCS file: /cvsroot/pgsql/src/Makefile.global.in,v\nretrieving revision 1.140\ndiff -c -r1.140 Makefile.global.in\n*** src/Makefile.global.in\t2001/10/13 15:24:23\t1.140\n--- src/Makefile.global.in\t2001/12/17 05:22:06\n***************\n*** 328,333 ****\n--- 328,334 ----\n STRERROR = @STRERROR@\n SNPRINTF = @SNPRINTF@\n STRDUP = @STRDUP@\n+ MEMCMP = @MEMCMP@\n STRTOUL = @STRTOUL@\n \n \nIndex: src/backend/port/Makefile.in\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/port/Makefile.in,v\nretrieving revision 1.29\ndiff -c -r1.29 Makefile.in\n*** src/backend/port/Makefile.in\t2001/05/08 19:38:57\t1.29\n--- src/backend/port/Makefile.in\t2001/12/17 05:22:06\n***************\n*** 26,32 ****\n--- 26,39 ----\n OBJS+= @STRTOL@ @STRTOUL@ @SNPRINTF@\n ifdef STRDUP\n OBJS += $(top_builddir)/src/utils/strdup.o\n+ $(top_builddir)/src/utils/strdup.o:\n+ \t$(MAKE) -C $(top_builddir)/src/utils strdup.o\n endif\n+ ifdef MEMCMP\n+ OBJS += $(top_builddir)/src/utils/memcmp.o\n+ $(top_builddir)/src/utils/memcmp.o:\n+ \t$(MAKE) -C $(top_builddir)/src/utils memcmp.o\n+ endif\n ifeq ($(PORTNAME), qnx4)\n OBJS 
+= getrusage.o qnx4/SUBSYS.o\n endif\n***************\n*** 59,68 ****\n \n tas.o: tas.s\n \t$(CC) $(CFLAGS) -c $<\n- \n- $(top_builddir)/src/utils/strdup.o:\n- \t$(MAKE) -C $(top_builddir)/src/utils strdup.o\n- \n \n distclean clean:\n \trm -f SUBSYS.o $(OBJS)\n--- 66,71 ----\nIndex: src/utils/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/utils/Makefile,v\nretrieving revision 1.9\ndiff -c -r1.9 Makefile\n*** src/utils/Makefile\t2000/08/31 16:12:35\t1.9\n--- src/utils/Makefile\t2001/12/17 05:22:08\n***************\n*** 24,30 ****\n all:\n \n clean distclean maintainer-clean:\n! \trm -f dllinit.o getopt.o strdup.o\n \n depend dep:\n \t$(CC) $(CFLAGS) -MM *.c >depend\n--- 24,30 ----\n all:\n \n clean distclean maintainer-clean:\n! \trm -f dllinit.o getopt.o strdup.o memcmp.o\n \n depend dep:\n \t$(CC) $(CFLAGS) -MM *.c >depend\nIndex: configure\n===================================================================\nRCS file: /cvsroot/pgsql/configure,v\nretrieving revision 1.166\ndiff -c -r1.166 configure\n*** configure\t2001/12/13 22:00:22\t1.166\n--- configure\t2001/12/17 05:22:04\n***************\n*** 6149,6163 ****\n \n fi\n \n for ac_func in fcvt getopt_long memmove pstat setproctitle setsid sigprocmask sysconf waitpid dlopen fdatasync\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:6156: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6161 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n--- 6149,6205 ----\n \n fi\n \n+ # SunOS doesn't handle negative byte comparisons properly with +/- return\n+ echo $ac_n \"checking for 8-bit clean memcmp\"\"... 
$ac_c\" 1>&6\n+ echo \"configure:6155: checking for 8-bit clean memcmp\" >&5\n+ if eval \"test \\\"`echo '$''{'pgac_cv_func_memcmp_clean'+set}'`\\\" = set\"; then\n+ echo $ac_n \"(cached) $ac_c\" 1>&6\n+ else\n+ if test \"$cross_compiling\" = yes; then\n+ pgac_cv_func_memcmp_clean=no\n+ else\n+ cat > conftest.$ac_ext <<EOF\n+ #line 6163 \"configure\"\n+ #include \"confdefs.h\"\n+ \n+ main()\n+ {\n+ char c0 = 0x40, c1 = 0x80, c2 = 0x81;\n+ exit(memcmp(&c0, &c2, 1) < 0 && memcmp(&c1, &c2, 1) < 0 ? 0 : 1);\n+ }\n+ \n+ EOF\n+ if { (eval echo configure:6173: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n+ then\n+ pgac_cv_func_memcmp_clean=yes\n+ else\n+ echo \"configure: failed program was:\" >&5\n+ cat conftest.$ac_ext >&5\n+ rm -fr conftest*\n+ pgac_cv_func_memcmp_clean=no\n+ fi\n+ rm -fr conftest*\n+ fi\n+ \n+ fi\n+ \n+ echo \"$ac_t\"\"$pgac_cv_func_memcmp_clean\" 1>&6\n+ if test $pgac_cv_func_memcmp_clean = no ; then\n+ MEMCMP=memcmp.o\n+ else\n+ MEMCMP=\n+ fi\n+ \n+ \n for ac_func in fcvt getopt_long memmove pstat setproctitle setsid sigprocmask sysconf waitpid dlopen fdatasync\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:6198: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6203 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n***************\n*** 6180,6186 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6184: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n--- 6222,6228 ----\n \n ; return 0; }\n EOF\n! 
if { (eval echo configure:6226: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n***************\n*** 6206,6212 ****\n \n \n cat > conftest.$ac_ext <<EOF\n! #line 6210 \"configure\"\n #include \"confdefs.h\"\n #include <unistd.h>\n EOF\n--- 6248,6254 ----\n \n \n cat > conftest.$ac_ext <<EOF\n! #line 6252 \"configure\"\n #include \"confdefs.h\"\n #include <unistd.h>\n EOF\n***************\n*** 6222,6233 ****\n \n \n echo $ac_n \"checking for PS_STRINGS\"\"... $ac_c\" 1>&6\n! echo \"configure:6226: checking for PS_STRINGS\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_var_PS_STRINGS'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6231 \"configure\"\n #include \"confdefs.h\"\n #include <machine/vmparam.h>\n #include <sys/exec.h>\n--- 6264,6275 ----\n \n \n echo $ac_n \"checking for PS_STRINGS\"\"... $ac_c\" 1>&6\n! echo \"configure:6268: checking for PS_STRINGS\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_var_PS_STRINGS'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6273 \"configure\"\n #include \"confdefs.h\"\n #include <machine/vmparam.h>\n #include <sys/exec.h>\n***************\n*** 6237,6243 ****\n PS_STRINGS->ps_argvstr = \"foo\";\n ; return 0; }\n EOF\n! if { (eval echo configure:6241: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n pgac_cv_var_PS_STRINGS=yes\n else\n--- 6279,6285 ----\n PS_STRINGS->ps_argvstr = \"foo\";\n ; return 0; }\n EOF\n! if { (eval echo configure:6283: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n pgac_cv_var_PS_STRINGS=yes\n else\n***************\n*** 6259,6270 ****\n \n SNPRINTF=''\n echo $ac_n \"checking for snprintf\"\"... $ac_c\" 1>&6\n! 
echo \"configure:6263: checking for snprintf\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_snprintf'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6268 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char snprintf(); below. */\n--- 6301,6312 ----\n \n SNPRINTF=''\n echo $ac_n \"checking for snprintf\"\"... $ac_c\" 1>&6\n! echo \"configure:6305: checking for snprintf\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_snprintf'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6310 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char snprintf(); below. */\n***************\n*** 6287,6293 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6291: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_snprintf=yes\"\n else\n--- 6329,6335 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6333: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_snprintf=yes\"\n else\n***************\n*** 6311,6322 ****\n fi\n \n echo $ac_n \"checking for vsnprintf\"\"... $ac_c\" 1>&6\n! echo \"configure:6315: checking for vsnprintf\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_vsnprintf'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6320 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char vsnprintf(); below. */\n--- 6353,6364 ----\n fi\n \n echo $ac_n \"checking for vsnprintf\"\"... $ac_c\" 1>&6\n! 
echo \"configure:6357: checking for vsnprintf\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_vsnprintf'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6362 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char vsnprintf(); below. */\n***************\n*** 6339,6345 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6343: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_vsnprintf=yes\"\n else\n--- 6381,6387 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6385: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_vsnprintf=yes\"\n else\n***************\n*** 6364,6370 ****\n \n \n cat > conftest.$ac_ext <<EOF\n! #line 6368 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n EOF\n--- 6406,6412 ----\n \n \n cat > conftest.$ac_ext <<EOF\n! #line 6410 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n EOF\n***************\n*** 6379,6385 ****\n rm -f conftest*\n \n cat > conftest.$ac_ext <<EOF\n! #line 6383 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n EOF\n--- 6421,6427 ----\n rm -f conftest*\n \n cat > conftest.$ac_ext <<EOF\n! #line 6425 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n EOF\n***************\n*** 6396,6407 ****\n \n # do this one the hard way in case isinf() is a macro\n echo $ac_n \"checking for isinf\"\"... $ac_c\" 1>&6\n! echo \"configure:6400: checking for isinf\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_isinf'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 6405 \"configure\"\n #include \"confdefs.h\"\n #include <math.h>\n \n--- 6438,6449 ----\n \n # do this one the hard way in case isinf() is a macro\n echo $ac_n \"checking for isinf\"\"... $ac_c\" 1>&6\n! echo \"configure:6442: checking for isinf\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_isinf'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6447 \"configure\"\n #include \"confdefs.h\"\n #include <math.h>\n \n***************\n*** 6409,6415 ****\n double x = 0.0; int res = isinf(x);\n ; return 0; }\n EOF\n! if { (eval echo configure:6413: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n ac_cv_func_isinf=yes\n else\n--- 6451,6457 ----\n double x = 0.0; int res = isinf(x);\n ; return 0; }\n EOF\n! if { (eval echo configure:6455: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n ac_cv_func_isinf=yes\n else\n***************\n*** 6435,6446 ****\n for ac_func in fpclass fp_class fp_class_d class\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:6439: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6444 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n--- 6477,6488 ----\n for ac_func in fpclass fp_class fp_class_d class\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:6481: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 6486 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n***************\n*** 6463,6469 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6467: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n--- 6505,6511 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6509: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n***************\n*** 6491,6502 ****\n \n \n echo $ac_n \"checking for getrusage\"\"... $ac_c\" 1>&6\n! echo \"configure:6495: checking for getrusage\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_getrusage'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6500 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char getrusage(); below. */\n--- 6533,6544 ----\n \n \n echo $ac_n \"checking for getrusage\"\"... $ac_c\" 1>&6\n! echo \"configure:6537: checking for getrusage\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_getrusage'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6542 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char getrusage(); below. */\n***************\n*** 6519,6525 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6523: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_getrusage=yes\"\n else\n--- 6561,6567 ----\n \n ; return 0; }\n EOF\n! 
if { (eval echo configure:6565: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_getrusage=yes\"\n else\n***************\n*** 6544,6555 ****\n \n \n echo $ac_n \"checking for srandom\"\"... $ac_c\" 1>&6\n! echo \"configure:6548: checking for srandom\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_srandom'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6553 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char srandom(); below. */\n--- 6586,6597 ----\n \n \n echo $ac_n \"checking for srandom\"\"... $ac_c\" 1>&6\n! echo \"configure:6590: checking for srandom\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_srandom'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6595 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char srandom(); below. */\n***************\n*** 6572,6578 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6576: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_srandom=yes\"\n else\n--- 6614,6620 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6618: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_srandom=yes\"\n else\n***************\n*** 6597,6608 ****\n \n \n echo $ac_n \"checking for gethostname\"\"... $ac_c\" 1>&6\n! echo \"configure:6601: checking for gethostname\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_gethostname'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 6606 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char gethostname(); below. */\n--- 6639,6650 ----\n \n \n echo $ac_n \"checking for gethostname\"\"... $ac_c\" 1>&6\n! echo \"configure:6643: checking for gethostname\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_gethostname'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6648 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char gethostname(); below. */\n***************\n*** 6625,6631 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6629: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_gethostname=yes\"\n else\n--- 6667,6673 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6671: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_gethostname=yes\"\n else\n***************\n*** 6650,6661 ****\n \n \n echo $ac_n \"checking for random\"\"... $ac_c\" 1>&6\n! echo \"configure:6654: checking for random\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_random'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6659 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char random(); below. */\n--- 6692,6703 ----\n \n \n echo $ac_n \"checking for random\"\"... $ac_c\" 1>&6\n! echo \"configure:6696: checking for random\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_random'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 6701 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char random(); below. */\n***************\n*** 6678,6684 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6682: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_random=yes\"\n else\n--- 6720,6726 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6724: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_random=yes\"\n else\n***************\n*** 6698,6714 ****\n \n else\n echo \"$ac_t\"\"no\" 1>&6\n! MISSING_RANDOM='random.o'\n fi\n \n \n echo $ac_n \"checking for inet_aton\"\"... $ac_c\" 1>&6\n! echo \"configure:6707: checking for inet_aton\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_inet_aton'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6712 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char inet_aton(); below. */\n--- 6740,6756 ----\n \n else\n echo \"$ac_t\"\"no\" 1>&6\n! RANDOM='random.o'\n fi\n \n \n echo $ac_n \"checking for inet_aton\"\"... $ac_c\" 1>&6\n! echo \"configure:6749: checking for inet_aton\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_inet_aton'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6754 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char inet_aton(); below. */\n***************\n*** 6731,6737 ****\n \n ; return 0; }\n EOF\n! 
if { (eval echo configure:6735: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_inet_aton=yes\"\n else\n--- 6773,6779 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6777: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_inet_aton=yes\"\n else\n***************\n*** 6756,6767 ****\n \n \n echo $ac_n \"checking for strerror\"\"... $ac_c\" 1>&6\n! echo \"configure:6760: checking for strerror\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strerror'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6765 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strerror(); below. */\n--- 6798,6809 ----\n \n \n echo $ac_n \"checking for strerror\"\"... $ac_c\" 1>&6\n! echo \"configure:6802: checking for strerror\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strerror'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6807 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strerror(); below. */\n***************\n*** 6784,6790 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6788: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strerror=yes\"\n else\n--- 6826,6832 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6830: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strerror=yes\"\n else\n***************\n*** 6809,6820 ****\n \n \n echo $ac_n \"checking for strdup\"\"... $ac_c\" 1>&6\n! 
echo \"configure:6813: checking for strdup\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strdup'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6818 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strdup(); below. */\n--- 6851,6862 ----\n \n \n echo $ac_n \"checking for strdup\"\"... $ac_c\" 1>&6\n! echo \"configure:6855: checking for strdup\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strdup'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6860 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strdup(); below. */\n***************\n*** 6837,6843 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6841: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strdup=yes\"\n else\n--- 6879,6885 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6883: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strdup=yes\"\n else\n***************\n*** 6862,6873 ****\n \n \n echo $ac_n \"checking for strtol\"\"... $ac_c\" 1>&6\n! echo \"configure:6866: checking for strtol\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strtol'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6871 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strtol(); below. */\n--- 6904,6915 ----\n \n \n echo $ac_n \"checking for strtol\"\"... $ac_c\" 1>&6\n! 
echo \"configure:6908: checking for strtol\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strtol'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6913 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strtol(); below. */\n***************\n*** 6890,6896 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6894: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strtol=yes\"\n else\n--- 6932,6938 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6936: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strtol=yes\"\n else\n***************\n*** 6915,6926 ****\n \n \n echo $ac_n \"checking for strtoul\"\"... $ac_c\" 1>&6\n! echo \"configure:6919: checking for strtoul\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strtoul'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6924 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strtoul(); below. */\n--- 6957,6968 ----\n \n \n echo $ac_n \"checking for strtoul\"\"... $ac_c\" 1>&6\n! echo \"configure:6961: checking for strtoul\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strtoul'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6966 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strtoul(); below. */\n***************\n*** 6943,6949 ****\n \n ; return 0; }\n EOF\n! 
if { (eval echo configure:6947: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strtoul=yes\"\n else\n--- 6985,6991 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:6989: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strtoul=yes\"\n else\n***************\n*** 6968,6979 ****\n \n \n echo $ac_n \"checking for strcasecmp\"\"... $ac_c\" 1>&6\n! echo \"configure:6972: checking for strcasecmp\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strcasecmp'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 6977 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strcasecmp(); below. */\n--- 7010,7021 ----\n \n \n echo $ac_n \"checking for strcasecmp\"\"... $ac_c\" 1>&6\n! echo \"configure:7014: checking for strcasecmp\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_strcasecmp'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7019 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char strcasecmp(); below. */\n***************\n*** 6996,7002 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7000: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strcasecmp=yes\"\n else\n--- 7038,7044 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7042: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_strcasecmp=yes\"\n else\n***************\n*** 7021,7032 ****\n \n \n echo $ac_n \"checking for cbrt\"\"... $ac_c\" 1>&6\n! 
echo \"configure:7025: checking for cbrt\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_cbrt'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7030 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char cbrt(); below. */\n--- 7063,7074 ----\n \n \n echo $ac_n \"checking for cbrt\"\"... $ac_c\" 1>&6\n! echo \"configure:7067: checking for cbrt\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_cbrt'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7072 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char cbrt(); below. */\n***************\n*** 7049,7055 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7053: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_cbrt=yes\"\n else\n--- 7091,7097 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7095: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_cbrt=yes\"\n else\n***************\n*** 7070,7076 ****\n else\n echo \"$ac_t\"\"no\" 1>&6\n echo $ac_n \"checking for cbrt in -lm\"\"... $ac_c\" 1>&6\n! echo \"configure:7074: checking for cbrt in -lm\" >&5\n ac_lib_var=`echo m'_'cbrt | sed 'y%./+-%__p_%'`\n if eval \"test \\\"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n--- 7112,7118 ----\n else\n echo \"$ac_t\"\"no\" 1>&6\n echo $ac_n \"checking for cbrt in -lm\"\"... $ac_c\" 1>&6\n! 
echo \"configure:7116: checking for cbrt in -lm\" >&5\n ac_lib_var=`echo m'_'cbrt | sed 'y%./+-%__p_%'`\n if eval \"test \\\"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n***************\n*** 7078,7084 ****\n ac_save_LIBS=\"$LIBS\"\n LIBS=\"-lm $LIBS\"\n cat > conftest.$ac_ext <<EOF\n! #line 7082 \"configure\"\n #include \"confdefs.h\"\n /* Override any gcc2 internal prototype to avoid an error. */\n /* We use char because int might match the return type of a gcc2\n--- 7120,7126 ----\n ac_save_LIBS=\"$LIBS\"\n LIBS=\"-lm $LIBS\"\n cat > conftest.$ac_ext <<EOF\n! #line 7124 \"configure\"\n #include \"confdefs.h\"\n /* Override any gcc2 internal prototype to avoid an error. */\n /* We use char because int might match the return type of a gcc2\n***************\n*** 7089,7095 ****\n cbrt()\n ; return 0; }\n EOF\n! if { (eval echo configure:7093: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_lib_$ac_lib_var=yes\"\n else\n--- 7131,7137 ----\n cbrt()\n ; return 0; }\n EOF\n! if { (eval echo configure:7135: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_lib_$ac_lib_var=yes\"\n else\n***************\n*** 7127,7138 ****\n \n \n echo $ac_n \"checking for rint\"\"... $ac_c\" 1>&6\n! echo \"configure:7131: checking for rint\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_rint'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7136 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char rint(); below. */\n--- 7169,7180 ----\n \n \n echo $ac_n \"checking for rint\"\"... $ac_c\" 1>&6\n! 
echo \"configure:7173: checking for rint\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_rint'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7178 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char rint(); below. */\n***************\n*** 7155,7161 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7159: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_rint=yes\"\n else\n--- 7197,7203 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7201: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_rint=yes\"\n else\n***************\n*** 7176,7182 ****\n else\n echo \"$ac_t\"\"no\" 1>&6\n echo $ac_n \"checking for rint in -lm\"\"... $ac_c\" 1>&6\n! echo \"configure:7180: checking for rint in -lm\" >&5\n ac_lib_var=`echo m'_'rint | sed 'y%./+-%__p_%'`\n if eval \"test \\\"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n--- 7218,7224 ----\n else\n echo \"$ac_t\"\"no\" 1>&6\n echo $ac_n \"checking for rint in -lm\"\"... $ac_c\" 1>&6\n! echo \"configure:7222: checking for rint in -lm\" >&5\n ac_lib_var=`echo m'_'rint | sed 'y%./+-%__p_%'`\n if eval \"test \\\"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n***************\n*** 7184,7190 ****\n ac_save_LIBS=\"$LIBS\"\n LIBS=\"-lm $HPUXMATHLIB $LIBS\"\n cat > conftest.$ac_ext <<EOF\n! #line 7188 \"configure\"\n #include \"confdefs.h\"\n /* Override any gcc2 internal prototype to avoid an error. */\n /* We use char because int might match the return type of a gcc2\n--- 7226,7232 ----\n ac_save_LIBS=\"$LIBS\"\n LIBS=\"-lm $HPUXMATHLIB $LIBS\"\n cat > conftest.$ac_ext <<EOF\n! 
#line 7230 \"configure\"\n #include \"confdefs.h\"\n /* Override any gcc2 internal prototype to avoid an error. */\n /* We use char because int might match the return type of a gcc2\n***************\n*** 7195,7201 ****\n rint()\n ; return 0; }\n EOF\n! if { (eval echo configure:7199: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_lib_$ac_lib_var=yes\"\n else\n--- 7237,7243 ----\n rint()\n ; return 0; }\n EOF\n! if { (eval echo configure:7241: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_lib_$ac_lib_var=yes\"\n else\n***************\n*** 7224,7232 ****\n \n # Readline versions < 2.1 don't have rl_completion_append_character\n echo $ac_n \"checking for rl_completion_append_character\"\"... $ac_c\" 1>&6\n! echo \"configure:7228: checking for rl_completion_append_character\" >&5\n cat > conftest.$ac_ext <<EOF\n! #line 7230 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n #ifdef HAVE_READLINE_READLINE_H\n--- 7266,7274 ----\n \n # Readline versions < 2.1 don't have rl_completion_append_character\n echo $ac_n \"checking for rl_completion_append_character\"\"... $ac_c\" 1>&6\n! echo \"configure:7270: checking for rl_completion_append_character\" >&5\n cat > conftest.$ac_ext <<EOF\n! #line 7272 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n #ifdef HAVE_READLINE_READLINE_H\n***************\n*** 7239,7245 ****\n rl_completion_append_character = 'x';\n ; return 0; }\n EOF\n! if { (eval echo configure:7243: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n echo \"$ac_t\"\"yes\" 1>&6\n cat >> confdefs.h <<\\EOF\n--- 7281,7287 ----\n rl_completion_append_character = 'x';\n ; return 0; }\n EOF\n! 
if { (eval echo configure:7285: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n echo \"$ac_t\"\"yes\" 1>&6\n cat >> confdefs.h <<\\EOF\n***************\n*** 7257,7268 ****\n for ac_func in rl_completion_matches rl_filename_completion_function\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:7261: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7266 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n--- 7299,7310 ----\n for ac_func in rl_completion_matches rl_filename_completion_function\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:7303: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7308 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n***************\n*** 7285,7291 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7289: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n--- 7327,7333 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7331: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n***************\n*** 7312,7327 ****\n \n \n echo $ac_n \"checking for finite\"\"... $ac_c\" 1>&6\n! echo \"configure:7316: checking for finite\" >&5\n cat > conftest.$ac_ext <<EOF\n! 
#line 7318 \"configure\"\n #include \"confdefs.h\"\n #include <math.h>\n int main() {\n int dummy=finite(1.0);\n ; return 0; }\n EOF\n! if { (eval echo configure:7325: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n cat >> confdefs.h <<\\EOF\n #define HAVE_FINITE 1\n--- 7354,7369 ----\n \n \n echo $ac_n \"checking for finite\"\"... $ac_c\" 1>&6\n! echo \"configure:7358: checking for finite\" >&5\n cat > conftest.$ac_ext <<EOF\n! #line 7360 \"configure\"\n #include \"confdefs.h\"\n #include <math.h>\n int main() {\n int dummy=finite(1.0);\n ; return 0; }\n EOF\n! if { (eval echo configure:7367: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n cat >> confdefs.h <<\\EOF\n #define HAVE_FINITE 1\n***************\n*** 7336,7351 ****\n rm -f conftest*\n \n echo $ac_n \"checking for sigsetjmp\"\"... $ac_c\" 1>&6\n! echo \"configure:7340: checking for sigsetjmp\" >&5\n cat > conftest.$ac_ext <<EOF\n! #line 7342 \"configure\"\n #include \"confdefs.h\"\n #include <setjmp.h>\n int main() {\n sigjmp_buf x; sigsetjmp(x, 1);\n ; return 0; }\n EOF\n! if { (eval echo configure:7349: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n cat >> confdefs.h <<\\EOF\n #define HAVE_SIGSETJMP 1\n--- 7378,7393 ----\n rm -f conftest*\n \n echo $ac_n \"checking for sigsetjmp\"\"... $ac_c\" 1>&6\n! echo \"configure:7382: checking for sigsetjmp\" >&5\n cat > conftest.$ac_ext <<EOF\n! #line 7384 \"configure\"\n #include \"confdefs.h\"\n #include <setjmp.h>\n int main() {\n sigjmp_buf x; sigsetjmp(x, 1);\n ; return 0; }\n EOF\n! if { (eval echo configure:7391: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n cat >> confdefs.h <<\\EOF\n #define HAVE_SIGSETJMP 1\n***************\n*** 7365,7376 ****\n case $enable_syslog in\n yes)\n echo $ac_n \"checking for syslog\"\"... 
$ac_c\" 1>&6\n! echo \"configure:7369: checking for syslog\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_syslog'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7374 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char syslog(); below. */\n--- 7407,7418 ----\n case $enable_syslog in\n yes)\n echo $ac_n \"checking for syslog\"\"... $ac_c\" 1>&6\n! echo \"configure:7411: checking for syslog\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_syslog'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7416 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char syslog(); below. */\n***************\n*** 7393,7399 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7397: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_syslog=yes\"\n else\n--- 7435,7441 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7439: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_syslog=yes\"\n else\n***************\n*** 7432,7450 ****\n \n \n echo $ac_n \"checking for optreset\"\"... $ac_c\" 1>&6\n! echo \"configure:7436: checking for optreset\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_var_int_optreset'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7441 \"configure\"\n #include \"confdefs.h\"\n #include <unistd.h>\n int main() {\n extern int optreset; optreset = 1;\n ; return 0; }\n EOF\n! 
if { (eval echo configure:7448: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n pgac_cv_var_int_optreset=yes\n else\n--- 7474,7492 ----\n \n \n echo $ac_n \"checking for optreset\"\"... $ac_c\" 1>&6\n! echo \"configure:7478: checking for optreset\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_var_int_optreset'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7483 \"configure\"\n #include \"confdefs.h\"\n #include <unistd.h>\n int main() {\n extern int optreset; optreset = 1;\n ; return 0; }\n EOF\n! if { (eval echo configure:7490: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n pgac_cv_var_int_optreset=yes\n else\n***************\n*** 7470,7485 ****\n # This check should come after all modifications of compiler or linker\n # variables, and before any other run tests.\n echo $ac_n \"checking test program\"\"... $ac_c\" 1>&6\n! echo \"configure:7474: checking test program\" >&5\n if test \"$cross_compiling\" = yes; then\n echo \"$ac_t\"\"cross-compiling\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7479 \"configure\"\n #include \"confdefs.h\"\n int main() { return 0; }\n EOF\n! if { (eval echo configure:7483: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n echo \"$ac_t\"\"ok\" 1>&6\n else\n--- 7512,7527 ----\n # This check should come after all modifications of compiler or linker\n # variables, and before any other run tests.\n echo $ac_n \"checking test program\"\"... $ac_c\" 1>&6\n! echo \"configure:7516: checking test program\" >&5\n if test \"$cross_compiling\" = yes; then\n echo \"$ac_t\"\"cross-compiling\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7521 \"configure\"\n #include \"confdefs.h\"\n int main() { return 0; }\n EOF\n! 
if { (eval echo configure:7525: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n echo \"$ac_t\"\"ok\" 1>&6\n else\n***************\n*** 7499,7505 ****\n \n \n echo $ac_n \"checking whether long int is 64 bits\"\"... $ac_c\" 1>&6\n! echo \"configure:7503: checking whether long int is 64 bits\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_type_long_int_64'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 7541,7547 ----\n \n \n echo $ac_n \"checking whether long int is 64 bits\"\"... $ac_c\" 1>&6\n! echo \"configure:7545: checking whether long int is 64 bits\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_type_long_int_64'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 7508,7514 ****\n echo \"configure: warning: 64 bit arithmetic disabled when cross-compiling\" 1>&2\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7512 \"configure\"\n #include \"confdefs.h\"\n typedef long int int64;\n \n--- 7550,7556 ----\n echo \"configure: warning: 64 bit arithmetic disabled when cross-compiling\" 1>&2\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7554 \"configure\"\n #include \"confdefs.h\"\n typedef long int int64;\n \n***************\n*** 7537,7543 ****\n exit(! does_int64_work());\n }\n EOF\n! if { (eval echo configure:7541: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_type_long_int_64=yes\n else\n--- 7579,7585 ----\n exit(! does_int64_work());\n }\n EOF\n! if { (eval echo configure:7583: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_type_long_int_64=yes\n else\n***************\n*** 7564,7570 ****\n \n if test x\"$HAVE_LONG_INT_64\" = x\"no\" ; then\n echo $ac_n \"checking whether long long int is 64 bits\"\"... $ac_c\" 1>&6\n! 
echo \"configure:7568: checking whether long long int is 64 bits\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_type_long_long_int_64'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 7606,7612 ----\n \n if test x\"$HAVE_LONG_INT_64\" = x\"no\" ; then\n echo $ac_n \"checking whether long long int is 64 bits\"\"... $ac_c\" 1>&6\n! echo \"configure:7610: checking whether long long int is 64 bits\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_type_long_long_int_64'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 7573,7579 ****\n echo \"configure: warning: 64 bit arithmetic disabled when cross-compiling\" 1>&2\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7577 \"configure\"\n #include \"confdefs.h\"\n typedef long long int int64;\n \n--- 7615,7621 ----\n echo \"configure: warning: 64 bit arithmetic disabled when cross-compiling\" 1>&2\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7619 \"configure\"\n #include \"confdefs.h\"\n typedef long long int int64;\n \n***************\n*** 7602,7608 ****\n exit(! does_int64_work());\n }\n EOF\n! if { (eval echo configure:7606: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_type_long_long_int_64=yes\n else\n--- 7644,7650 ----\n exit(! does_int64_work());\n }\n EOF\n! if { (eval echo configure:7648: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_type_long_long_int_64=yes\n else\n***************\n*** 7632,7638 ****\n \n if [ x\"$HAVE_LONG_LONG_INT_64\" = xyes ] ; then\n cat > conftest.$ac_ext <<EOF\n! #line 7636 \"configure\"\n #include \"confdefs.h\"\n \n #define INT64CONST(x) x##LL\n--- 7674,7680 ----\n \n if [ x\"$HAVE_LONG_LONG_INT_64\" = xyes ] ; then\n cat > conftest.$ac_ext <<EOF\n! 
#line 7678 \"configure\"\n #include \"confdefs.h\"\n \n #define INT64CONST(x) x##LL\n***************\n*** 7642,7648 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7646: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n cat >> confdefs.h <<\\EOF\n #define HAVE_LL_CONSTANTS 1\n--- 7684,7690 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7688: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n cat >> confdefs.h <<\\EOF\n #define HAVE_LL_CONSTANTS 1\n***************\n*** 7660,7666 ****\n if [ x\"$HAVE_LONG_LONG_INT_64\" = xyes ] ; then\n if [ x$SNPRINTF = x ] ; then\n echo $ac_n \"checking whether snprintf handles 'long long int' as %lld\"\"... $ac_c\" 1>&6\n! echo \"configure:7664: checking whether snprintf handles 'long long int' as %lld\" >&5\n if test \"$cross_compiling\" = yes; then\n echo \"$ac_t\"\"assuming not on target machine\" 1>&6\n \t# Force usage of our own snprintf, since we cannot test foreign snprintf\n--- 7702,7708 ----\n if [ x\"$HAVE_LONG_LONG_INT_64\" = xyes ] ; then\n if [ x$SNPRINTF = x ] ; then\n echo $ac_n \"checking whether snprintf handles 'long long int' as %lld\"\"... $ac_c\" 1>&6\n! echo \"configure:7706: checking whether snprintf handles 'long long int' as %lld\" >&5\n if test \"$cross_compiling\" = yes; then\n echo \"$ac_t\"\"assuming not on target machine\" 1>&6\n \t# Force usage of our own snprintf, since we cannot test foreign snprintf\n***************\n*** 7669,7675 ****\n \n else\n cat > conftest.$ac_ext <<EOF\n! #line 7673 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n typedef long long int int64;\n--- 7711,7717 ----\n \n else\n cat > conftest.$ac_ext <<EOF\n! #line 7715 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n typedef long long int int64;\n***************\n*** 7696,7702 ****\n exit(! does_int64_snprintf_work());\n }\n EOF\n! 
if { (eval echo configure:7700: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n echo \"$ac_t\"\"yes\" 1>&6\n \t INT64_FORMAT='\"%lld\"'\n--- 7738,7744 ----\n exit(! does_int64_snprintf_work());\n }\n EOF\n! if { (eval echo configure:7742: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n echo \"$ac_t\"\"yes\" 1>&6\n \t INT64_FORMAT='\"%lld\"'\n***************\n*** 7707,7713 ****\n rm -fr conftest*\n echo \"$ac_t\"\"no\" 1>&6\n echo $ac_n \"checking whether snprintf handles 'long long int' as %qd\"\"... $ac_c\" 1>&6\n! echo \"configure:7711: checking whether snprintf handles 'long long int' as %qd\" >&5 \n if test \"$cross_compiling\" = yes; then\n echo \"$ac_t\"\"assuming not on target machine\" 1>&6\n \t# Force usage of our own snprintf, since we cannot test foreign snprintf\n--- 7749,7755 ----\n rm -fr conftest*\n echo \"$ac_t\"\"no\" 1>&6\n echo $ac_n \"checking whether snprintf handles 'long long int' as %qd\"\"... $ac_c\" 1>&6\n! echo \"configure:7753: checking whether snprintf handles 'long long int' as %qd\" >&5 \n if test \"$cross_compiling\" = yes; then\n echo \"$ac_t\"\"assuming not on target machine\" 1>&6\n \t# Force usage of our own snprintf, since we cannot test foreign snprintf\n***************\n*** 7716,7722 ****\n \n else\n cat > conftest.$ac_ext <<EOF\n! #line 7720 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n typedef long long int int64;\n--- 7758,7764 ----\n \n else\n cat > conftest.$ac_ext <<EOF\n! #line 7762 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n typedef long long int int64;\n***************\n*** 7743,7749 ****\n exit(! does_int64_snprintf_work());\n }\n EOF\n! 
if { (eval echo configure:7747: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n echo \"$ac_t\"\"yes\" 1>&6\n INT64_FORMAT='\"%qd\"'\n--- 7785,7791 ----\n exit(! does_int64_snprintf_work());\n }\n EOF\n! if { (eval echo configure:7789: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n echo \"$ac_t\"\"yes\" 1>&6\n INT64_FORMAT='\"%qd\"'\n***************\n*** 7783,7794 ****\n for ac_func in strtoll strtoq\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:7787: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7792 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n--- 7825,7836 ----\n for ac_func in strtoll strtoq\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:7829: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7834 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n***************\n*** 7811,7817 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7815: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n--- 7853,7859 ----\n \n ; return 0; }\n EOF\n! 
if { (eval echo configure:7857: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n***************\n*** 7838,7849 ****\n for ac_func in strtoull strtouq\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:7842: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7847 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n--- 7880,7891 ----\n for ac_func in strtoull strtouq\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:7884: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7889 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n***************\n*** 7866,7872 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7870: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n--- 7908,7914 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7912: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n***************\n*** 7892,7903 ****\n \n \n echo $ac_n \"checking for atexit\"\"... $ac_c\" 1>&6\n! echo \"configure:7896: checking for atexit\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_atexit'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 7901 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char atexit(); below. */\n--- 7934,7945 ----\n \n \n echo $ac_n \"checking for atexit\"\"... $ac_c\" 1>&6\n! echo \"configure:7938: checking for atexit\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_atexit'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7943 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char atexit(); below. */\n***************\n*** 7920,7926 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7924: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_atexit=yes\"\n else\n--- 7962,7968 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7966: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_atexit=yes\"\n else\n***************\n*** 7943,7954 ****\n for ac_func in on_exit\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:7947: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 7952 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n--- 7985,7996 ----\n for ac_func in on_exit\n do\n echo $ac_n \"checking for $ac_func\"\"... $ac_c\" 1>&6\n! echo \"configure:7989: checking for $ac_func\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_func_$ac_func'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 7994 \"configure\"\n #include \"confdefs.h\"\n /* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char $ac_func(); below. */\n***************\n*** 7971,7977 ****\n \n ; return 0; }\n EOF\n! if { (eval echo configure:7975: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n--- 8013,8019 ----\n \n ; return 0; }\n EOF\n! if { (eval echo configure:8017: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n eval \"ac_cv_func_$ac_func=yes\"\n else\n***************\n*** 8004,8010 ****\n \n \n echo $ac_n \"checking size of unsigned long\"\"... $ac_c\" 1>&6\n! echo \"configure:8008: checking size of unsigned long\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_sizeof_unsigned_long'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8046,8052 ----\n \n \n echo $ac_n \"checking size of unsigned long\"\"... $ac_c\" 1>&6\n! echo \"configure:8050: checking size of unsigned long\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_sizeof_unsigned_long'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8012,8018 ****\n ac_cv_sizeof_unsigned_long=4\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8016 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n main()\n--- 8054,8060 ----\n ac_cv_sizeof_unsigned_long=4\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8058 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n main()\n***************\n*** 8023,8029 ****\n exit(0);\n }\n EOF\n! if { (eval echo configure:8027: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n ac_cv_sizeof_unsigned_long=`cat conftestval`\n else\n--- 8065,8071 ----\n exit(0);\n }\n EOF\n! 
if { (eval echo configure:8069: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n ac_cv_sizeof_unsigned_long=`cat conftestval`\n else\n***************\n*** 8049,8055 ****\n \n \n echo $ac_n \"checking alignment of short\"\"... $ac_c\" 1>&6\n! echo \"configure:8053: checking alignment of short\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_short'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8091,8097 ----\n \n \n echo $ac_n \"checking alignment of short\"\"... $ac_c\" 1>&6\n! echo \"configure:8095: checking alignment of short\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_short'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8057,8063 ****\n pgac_cv_alignof_short='sizeof(short)'\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8061 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; short field; } mystruct;\n--- 8099,8105 ----\n pgac_cv_alignof_short='sizeof(short)'\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8103 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; short field; } mystruct;\n***************\n*** 8069,8075 ****\n exit(0);\n }\n EOF\n! if { (eval echo configure:8073: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_short=`cat conftestval`\n else\n--- 8111,8117 ----\n exit(0);\n }\n EOF\n! if { (eval echo configure:8115: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_short=`cat conftestval`\n else\n***************\n*** 8089,8095 ****\n \n \n echo $ac_n \"checking alignment of int\"\"... $ac_c\" 1>&6\n! 
echo \"configure:8093: checking alignment of int\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_int'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8131,8137 ----\n \n \n echo $ac_n \"checking alignment of int\"\"... $ac_c\" 1>&6\n! echo \"configure:8135: checking alignment of int\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_int'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8097,8103 ****\n pgac_cv_alignof_int='sizeof(int)'\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8101 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; int field; } mystruct;\n--- 8139,8145 ----\n pgac_cv_alignof_int='sizeof(int)'\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8143 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; int field; } mystruct;\n***************\n*** 8109,8115 ****\n exit(0);\n }\n EOF\n! if { (eval echo configure:8113: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_int=`cat conftestval`\n else\n--- 8151,8157 ----\n exit(0);\n }\n EOF\n! if { (eval echo configure:8155: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_int=`cat conftestval`\n else\n***************\n*** 8129,8135 ****\n \n \n echo $ac_n \"checking alignment of long\"\"... $ac_c\" 1>&6\n! echo \"configure:8133: checking alignment of long\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_long'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8171,8177 ----\n \n \n echo $ac_n \"checking alignment of long\"\"... $ac_c\" 1>&6\n! 
echo \"configure:8175: checking alignment of long\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_long'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8137,8143 ****\n pgac_cv_alignof_long='sizeof(long)'\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8141 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; long field; } mystruct;\n--- 8179,8185 ----\n pgac_cv_alignof_long='sizeof(long)'\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8183 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; long field; } mystruct;\n***************\n*** 8149,8155 ****\n exit(0);\n }\n EOF\n! if { (eval echo configure:8153: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_long=`cat conftestval`\n else\n--- 8191,8197 ----\n exit(0);\n }\n EOF\n! if { (eval echo configure:8195: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_long=`cat conftestval`\n else\n***************\n*** 8170,8176 ****\n \n if [ x\"$HAVE_LONG_LONG_INT_64\" = xyes ] ; then\n echo $ac_n \"checking alignment of long long int\"\"... $ac_c\" 1>&6\n! echo \"configure:8174: checking alignment of long long int\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_long_long_int'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8212,8218 ----\n \n if [ x\"$HAVE_LONG_LONG_INT_64\" = xyes ] ; then\n echo $ac_n \"checking alignment of long long int\"\"... $ac_c\" 1>&6\n! echo \"configure:8216: checking alignment of long long int\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_long_long_int'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8178,8184 ****\n pgac_cv_alignof_long_long_int='sizeof(long long int)'\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 8182 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; long long int field; } mystruct;\n--- 8220,8226 ----\n pgac_cv_alignof_long_long_int='sizeof(long long int)'\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8224 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; long long int field; } mystruct;\n***************\n*** 8190,8196 ****\n exit(0);\n }\n EOF\n! if { (eval echo configure:8194: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_long_long_int=`cat conftestval`\n else\n--- 8232,8238 ----\n exit(0);\n }\n EOF\n! if { (eval echo configure:8236: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_long_long_int=`cat conftestval`\n else\n***************\n*** 8211,8217 ****\n \n fi\n echo $ac_n \"checking alignment of double\"\"... $ac_c\" 1>&6\n! echo \"configure:8215: checking alignment of double\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_double'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8253,8259 ----\n \n fi\n echo $ac_n \"checking alignment of double\"\"... $ac_c\" 1>&6\n! echo \"configure:8257: checking alignment of double\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_alignof_double'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8219,8225 ****\n pgac_cv_alignof_double='sizeof(double)'\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8223 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; double field; } mystruct;\n--- 8261,8267 ----\n pgac_cv_alignof_double='sizeof(double)'\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 8265 \"configure\"\n #include \"confdefs.h\"\n #include <stdio.h>\n struct { char filler; double field; } mystruct;\n***************\n*** 8231,8237 ****\n exit(0);\n }\n EOF\n! if { (eval echo configure:8235: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_double=`cat conftestval`\n else\n--- 8273,8279 ----\n exit(0);\n }\n EOF\n! if { (eval echo configure:8277: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null\n then\n pgac_cv_alignof_double=`cat conftestval`\n else\n***************\n*** 8281,8292 ****\n #endif\"\n \n echo $ac_n \"checking for int8\"\"... $ac_c\" 1>&6\n! echo \"configure:8285: checking for int8\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_int8'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8290 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n--- 8323,8334 ----\n #endif\"\n \n echo $ac_n \"checking for int8\"\"... $ac_c\" 1>&6\n! echo \"configure:8327: checking for int8\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_int8'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8332 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n***************\n*** 8296,8302 ****\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8300: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_int8=yes\n else\n--- 8338,8344 ----\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8342: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_int8=yes\n else\n***************\n*** 8317,8328 ****\n fi\n \n echo $ac_n \"checking for uint8\"\"... $ac_c\" 1>&6\n! 
echo \"configure:8321: checking for uint8\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_uint8'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8326 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n--- 8359,8370 ----\n fi\n \n echo $ac_n \"checking for uint8\"\"... $ac_c\" 1>&6\n! echo \"configure:8363: checking for uint8\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_uint8'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8368 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n***************\n*** 8332,8338 ****\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8336: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_uint8=yes\n else\n--- 8374,8380 ----\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8378: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_uint8=yes\n else\n***************\n*** 8353,8364 ****\n fi\n \n echo $ac_n \"checking for int64\"\"... $ac_c\" 1>&6\n! echo \"configure:8357: checking for int64\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_int64'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8362 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n--- 8395,8406 ----\n fi\n \n echo $ac_n \"checking for int64\"\"... $ac_c\" 1>&6\n! echo \"configure:8399: checking for int64\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_int64'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8404 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n***************\n*** 8368,8374 ****\n return 0;\n ; return 0; }\n EOF\n! 
if { (eval echo configure:8372: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_int64=yes\n else\n--- 8410,8416 ----\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8414: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_int64=yes\n else\n***************\n*** 8389,8400 ****\n fi\n \n echo $ac_n \"checking for uint64\"\"... $ac_c\" 1>&6\n! echo \"configure:8393: checking for uint64\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_uint64'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8398 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n--- 8431,8442 ----\n fi\n \n echo $ac_n \"checking for uint64\"\"... $ac_c\" 1>&6\n! echo \"configure:8435: checking for uint64\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_uint64'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8440 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n***************\n*** 8404,8410 ****\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8408: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_uint64=yes\n else\n--- 8446,8452 ----\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8450: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_uint64=yes\n else\n***************\n*** 8425,8436 ****\n fi\n \n echo $ac_n \"checking for sig_atomic_t\"\"... $ac_c\" 1>&6\n! echo \"configure:8429: checking for sig_atomic_t\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_sig_atomic_t'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! 
#line 8434 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n--- 8467,8478 ----\n fi\n \n echo $ac_n \"checking for sig_atomic_t\"\"... $ac_c\" 1>&6\n! echo \"configure:8471: checking for sig_atomic_t\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_have_sig_atomic_t'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8476 \"configure\"\n #include \"confdefs.h\"\n $pgac_type_includes\n int main() {\n***************\n*** 8440,8446 ****\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8444: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_sig_atomic_t=yes\n else\n--- 8482,8488 ----\n return 0;\n ; return 0; }\n EOF\n! if { (eval echo configure:8486: \\\"$ac_compile\\\") 1>&5; (eval $ac_compile) 2>&5; }; then\n rm -rf conftest*\n pgac_cv_have_sig_atomic_t=yes\n else\n***************\n*** 8463,8474 ****\n \n \n echo $ac_n \"checking for POSIX signal interface\"\"... $ac_c\" 1>&6\n! echo \"configure:8467: checking for POSIX signal interface\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_func_posix_signals'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8472 \"configure\"\n #include \"confdefs.h\"\n #include <signal.h>\n \n--- 8505,8516 ----\n \n \n echo $ac_n \"checking for POSIX signal interface\"\"... $ac_c\" 1>&6\n! echo \"configure:8509: checking for POSIX signal interface\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_func_posix_signals'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n cat > conftest.$ac_ext <<EOF\n! #line 8514 \"configure\"\n #include \"confdefs.h\"\n #include <signal.h>\n \n***************\n*** 8479,8485 ****\n sigaction(0, &act, &oact);\n ; return 0; }\n EOF\n! 
if { (eval echo configure:8483: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n pgac_cv_func_posix_signals=yes\n else\n--- 8521,8527 ----\n sigaction(0, &act, &oact);\n ; return 0; }\n EOF\n! if { (eval echo configure:8525: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n rm -rf conftest*\n pgac_cv_func_posix_signals=yes\n else\n***************\n*** 8509,8515 ****\n # Extract the first word of \"$ac_prog\", so it can be a program name with args.\n set dummy $ac_prog; ac_word=$2\n echo $ac_n \"checking for $ac_word\"\"... $ac_c\" 1>&6\n! echo \"configure:8513: checking for $ac_word\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_path_TCLSH'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8551,8557 ----\n # Extract the first word of \"$ac_prog\", so it can be a program name with args.\n set dummy $ac_prog; ac_word=$2\n echo $ac_n \"checking for $ac_word\"\"... $ac_c\" 1>&6\n! echo \"configure:8555: checking for $ac_word\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_path_TCLSH'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8545,8551 ****\n done\n \n echo $ac_n \"checking for tclConfig.sh\"\"... $ac_c\" 1>&6\n! echo \"configure:8549: checking for tclConfig.sh\" >&5\n # Let user override test\n if test -z \"$TCL_CONFIG_SH\"; then\n pgac_test_dirs=\"$with_tclconfig\"\n--- 8587,8593 ----\n done\n \n echo $ac_n \"checking for tclConfig.sh\"\"... $ac_c\" 1>&6\n! echo \"configure:8591: checking for tclConfig.sh\" >&5\n # Let user override test\n if test -z \"$TCL_CONFIG_SH\"; then\n pgac_test_dirs=\"$with_tclconfig\"\n***************\n*** 8578,8584 ****\n # Check for Tk configuration script tkConfig.sh\n if test \"$with_tk\" = yes; then\n echo $ac_n \"checking for tkConfig.sh\"\"... $ac_c\" 1>&6\n! 
echo \"configure:8582: checking for tkConfig.sh\" >&5\n # Let user override test\n if test -z \"$TK_CONFIG_SH\"; then\n pgac_test_dirs=\"$with_tkconfig $with_tclconfig\"\n--- 8620,8626 ----\n # Check for Tk configuration script tkConfig.sh\n if test \"$with_tk\" = yes; then\n echo $ac_n \"checking for tkConfig.sh\"\"... $ac_c\" 1>&6\n! echo \"configure:8624: checking for tkConfig.sh\" >&5\n # Let user override test\n if test -z \"$TK_CONFIG_SH\"; then\n pgac_test_dirs=\"$with_tkconfig $with_tclconfig\"\n***************\n*** 8617,8623 ****\n # Extract the first word of \"$ac_prog\", so it can be a program name with args.\n set dummy $ac_prog; ac_word=$2\n echo $ac_n \"checking for $ac_word\"\"... $ac_c\" 1>&6\n! echo \"configure:8621: checking for $ac_word\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_prog_NSGMLS'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8659,8665 ----\n # Extract the first word of \"$ac_prog\", so it can be a program name with args.\n set dummy $ac_prog; ac_word=$2\n echo $ac_n \"checking for $ac_word\"\"... $ac_c\" 1>&6\n! echo \"configure:8663: checking for $ac_word\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_prog_NSGMLS'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8653,8659 ****\n # Extract the first word of \"$ac_prog\", so it can be a program name with args.\n set dummy $ac_prog; ac_word=$2\n echo $ac_n \"checking for $ac_word\"\"... $ac_c\" 1>&6\n! echo \"configure:8657: checking for $ac_word\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_prog_JADE'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8695,8701 ----\n # Extract the first word of \"$ac_prog\", so it can be a program name with args.\n set dummy $ac_prog; ac_word=$2\n echo $ac_n \"checking for $ac_word\"\"... $ac_c\" 1>&6\n! 
echo \"configure:8699: checking for $ac_word\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_prog_JADE'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8684,8690 ****\n \n \n echo $ac_n \"checking for DocBook V3.1\"\"... $ac_c\" 1>&6\n! echo \"configure:8688: checking for DocBook V3.1\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_check_docbook'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8726,8732 ----\n \n \n echo $ac_n \"checking for DocBook V3.1\"\"... $ac_c\" 1>&6\n! echo \"configure:8730: checking for DocBook V3.1\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_check_docbook'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8717,8723 ****\n \n \n echo $ac_n \"checking for DocBook stylesheets\"\"... $ac_c\" 1>&6\n! echo \"configure:8721: checking for DocBook stylesheets\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_path_stylesheets'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8759,8765 ----\n \n \n echo $ac_n \"checking for DocBook stylesheets\"\"... $ac_c\" 1>&6\n! echo \"configure:8763: checking for DocBook stylesheets\" >&5\n if eval \"test \\\"`echo '$''{'pgac_cv_path_stylesheets'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 8756,8762 ****\n # Extract the first word of \"$ac_prog\", so it can be a program name with args.\n set dummy $ac_prog; ac_word=$2\n echo $ac_n \"checking for $ac_word\"\"... $ac_c\" 1>&6\n! echo \"configure:8760: checking for $ac_word\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_prog_SGMLSPL'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n--- 8798,8804 ----\n # Extract the first word of \"$ac_prog\", so it can be a program name with args.\n set dummy $ac_prog; ac_word=$2\n echo $ac_n \"checking for $ac_word\"\"... $ac_c\" 1>&6\n! 
echo \"configure:8802: checking for $ac_word\" >&5\n if eval \"test \\\"`echo '$''{'ac_cv_prog_SGMLSPL'+set}'`\\\" = set\"; then\n echo $ac_n \"(cached) $ac_c\" 1>&6\n else\n***************\n*** 9011,9022 ****\n s%@MSGMERGE@%$MSGMERGE%g\n s%@XGETTEXT@%$XGETTEXT%g\n s%@localedir@%$localedir%g\n s%@SNPRINTF@%$SNPRINTF%g\n s%@ISINF@%$ISINF%g\n s%@GETRUSAGE@%$GETRUSAGE%g\n s%@SRANDOM@%$SRANDOM%g\n s%@GETHOSTNAME@%$GETHOSTNAME%g\n! s%@MISSING_RANDOM@%$MISSING_RANDOM%g\n s%@INET_ATON@%$INET_ATON%g\n s%@STRERROR@%$STRERROR%g\n s%@STRDUP@%$STRDUP%g\n--- 9053,9065 ----\n s%@MSGMERGE@%$MSGMERGE%g\n s%@XGETTEXT@%$XGETTEXT%g\n s%@localedir@%$localedir%g\n+ s%@MEMCMP@%$MEMCMP%g\n s%@SNPRINTF@%$SNPRINTF%g\n s%@ISINF@%$ISINF%g\n s%@GETRUSAGE@%$GETRUSAGE%g\n s%@SRANDOM@%$SRANDOM%g\n s%@GETHOSTNAME@%$GETHOSTNAME%g\n! s%@RANDOM@%$RANDOM%g\n s%@INET_ATON@%$INET_ATON%g\n s%@STRERROR@%$STRERROR%g\n s%@STRDUP@%$STRDUP%g\n\n\n/*-------------------------------------------------------------------------\n *\n * memcmp.c\n *\t compares memory bytes\n *\n * Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group\n * Portions Copyright (c) 1994, Regents of the University of California\n *\n *\n * IDENTIFICATION\n *\t $Header: /cvsroot/pgsql/src/utils/memcmp.c,v 1.8 2001/01/24 19:43:33 momjian Exp $\n *\n * This file was taken from NetBSD and is used by SunOS because memcmp\n * on that platform does not properly compare negative bytes.\n *\n *-------------------------------------------------------------------------\n */\n\n#include <string.h>\n\n/*\n * Compare memory regions.\n */\nint\nmemcmp(const void *s1, const void *s2, size_t n)\n{\n\tif (n != 0) {\n\t\tconst unsigned char *p1 = s1, *p2 = s2;\n\n\t\tdo {\n\t\t\tif (*p1++ != *p2++)\n\t\t\t\treturn (*--p1 - *--p2);\n\t\t} while (--n != 0);\n\t}\n\treturn 0;\n}", "msg_date": "Mon, 17 Dec 2001 00:30:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "SunOS patch for 
memcmp()" }, { "msg_contents": "Bruce Momjian writes:\n\n> > What should we do? The only remaining issue is a non-8-bit-clean\n> > memcmp, which seems pretty easy to fix it.\n>\n> OK, here is a patch to allow SunOS to pass the regression tests.\n\nAt a glance, this patch looks okay. However, since this issue does not\nrepresent a regression from 7.1 I'm not exactly in favour of installing it\nnow. We want to get a release out, so I think we need to get stricter in\nthose matters. And this patch is not exactly trivial.\n\n> One interesting item is that I had to compile with\n> backend/utils/adt/varbit.c with -fno-builtin because my gcc 2.X manual\n> says:\n\nYeah, if you use GCC with optimization on SunOS 4 then the issue should be\nmoot because the GCC version is used. However, I don't think that's the\nsetup in question.\n\nThis is actually a good situation to show that configuring with one kind\nof compiler flag and building with another is not reliable, even if it's\nonly the optimization level.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 19 Dec 2001 19:46:17 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp()" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > What should we do? The only remaining issue is a non-8-bit-clean\n> > > memcmp, which seems pretty easy to fix it.\n> >\n> > OK, here is a patch to allow SunOS to pass the regression tests.\n> \n> At a glance, this patch looks okay. However, since this issue does not\n> represent a regression from 7.1 I'm not exactly in favor of installing it\n> now. We want to get a release out, so I think we need to get stricter in\n> those matters. And this patch is not exactly trivial.\n\nI have attached the patch without the autoconf run and without the new\nmemcmp.c file. Looks pretty small to me, but I understand.\n\nIf we don't want the patch, I will put it on an FTP site somewhere. 
\nHowever, the configure line numbers changes will be tough to keep\nworking if a subrelease changes configure.\n\nIt seemed worth it to me because it was to enable another port, and I\nthought that is what beta was for. Maybe I can put it in a minor\nrelease.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.158\ndiff -c -r1.158 configure.in\n*** configure.in\t2001/12/13 22:00:22\t1.158\n--- configure.in\t2001/12/17 19:05:47\n***************\n*** 809,814 ****\n--- 809,817 ----\n AC_FUNC_ACCEPT_ARGTYPES\n PGAC_FUNC_GETTIMEOFDAY_1ARG\n \n+ # SunOS doesn't handle negative byte comparisons properly with +/- return\n+ PGAC_FUNC_MEMCMP\n+ \n AC_CHECK_FUNCS([fcvt getopt_long memmove pstat setproctitle setsid sigprocmask sysconf waitpid dlopen fdatasync])\n \n dnl Check whether <unistd.h> declares fdatasync().\nIndex: config/c-library.m4\n===================================================================\nRCS file: /cvsroot/pgsql/config/c-library.m4,v\nretrieving revision 1.9\ndiff -c -r1.9 c-library.m4\n*** config/c-library.m4\t2001/09/07 19:52:53\t1.9\n--- config/c-library.m4\t2001/12/17 19:05:47\n***************\n*** 36,41 ****\n--- 36,65 ----\n fi])# PGAC_FUNC_GETTIMEOFDAY_1ARG\n \n \n+ # PGAC_FUNC_MEMCMP\n+ # -----------\n+ # Check if memcmp() properly handles negative bytes and returns +/-.\n+ # SunOS does not.\n+ # AC_FUNC_MEMCMP\n+ AC_DEFUN(PGAC_FUNC_MEMCMP,\n+ [AC_CACHE_CHECK(for 8-bit clean memcmp, pgac_cv_func_memcmp_clean,\n+ [AC_TRY_RUN([\n+ main()\n+ {\n+ char c0 = 0x40, c1 = 0x80, c2 = 0x81;\n+ exit(memcmp(&c0, &c2, 1) < 0 && memcmp(&c1, &c2, 1) < 0 ? 
0 : 1);\n+ }\n+ ], pgac_cv_func_memcmp_clean=yes, pgac_cv_func_memcmp_clean=no,\n+ pgac_cv_func_memcmp_clean=no)])\n+ if test $pgac_cv_func_memcmp_clean = no ; then\n+ MEMCMP=memcmp.o\n+ else\n+ MEMCMP=\n+ fi\n+ AC_SUBST(MEMCMP)dnl\n+ ])\n+ \n+ \n # PGAC_UNION_SEMUN\n # ----------------\n # Check if `union semun' exists. Define HAVE_UNION_SEMUN if so.\nIndex: src/Makefile.global.in\n===================================================================\nRCS file: /cvsroot/pgsql/src/Makefile.global.in,v\nretrieving revision 1.140\ndiff -c -r1.140 Makefile.global.in\n*** src/Makefile.global.in\t2001/10/13 15:24:23\t1.140\n--- src/Makefile.global.in\t2001/12/17 19:05:48\n***************\n*** 328,333 ****\n--- 328,334 ----\n STRERROR = @STRERROR@\n SNPRINTF = @SNPRINTF@\n STRDUP = @STRDUP@\n+ MEMCMP = @MEMCMP@\n STRTOUL = @STRTOUL@\n \n \nIndex: src/backend/port/Makefile.in\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/port/Makefile.in,v\nretrieving revision 1.29\ndiff -c -r1.29 Makefile.in\n*** src/backend/port/Makefile.in\t2001/05/08 19:38:57\t1.29\n--- src/backend/port/Makefile.in\t2001/12/17 19:05:48\n***************\n*** 22,28 ****\n include $(top_builddir)/src/Makefile.global\n \n OBJS = dynloader.o @INET_ATON@ @STRERROR@ @MISSING_RANDOM@ @SRANDOM@\n! OBJS+= @GETHOSTNAME@ @GETRUSAGE@ @STRCASECMP@ @TAS@ @ISINF@\n OBJS+= @STRTOL@ @STRTOUL@ @SNPRINTF@\n ifdef STRDUP\n OBJS += $(top_builddir)/src/utils/strdup.o\n--- 22,28 ----\n include $(top_builddir)/src/Makefile.global\n \n OBJS = dynloader.o @INET_ATON@ @STRERROR@ @MISSING_RANDOM@ @SRANDOM@\n! 
OBJS+= @GETHOSTNAME@ @GETRUSAGE@ @MEMCMP@ @STRCASECMP@ @TAS@ @ISINF@\n OBJS+= @STRTOL@ @STRTOUL@ @SNPRINTF@\n ifdef STRDUP\n OBJS += $(top_builddir)/src/utils/strdup.o", "msg_date": "Wed, 19 Dec 2001 14:01:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp()" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> At a glance, this patch looks okay. However, since this issue does not\n> represent a regression from 7.1 I'm not exactly in favour of installing it\n> now. We want to get a release out, so I think we need to get stricter in\n> those matters. And this patch is not exactly trivial.\n\nYou're being way too harsh on it. The configure test is exactly the\nstandard AC_FUNC_MEMCMP test, tweaked to output its result the same way\nour other port inclusions do. The memcmp implementation is also well\ntested, being lifted from NetBSD. Where's the problem?\n\nClearly it should be tested, and I presume Tatsuo will do that,\nbut my vote is to apply it. Why should we drop SunOS off the list\nof supported ports?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Dec 2001 14:58:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp() " }, { "msg_contents": "> Clearly it should be tested, and I presume Tatsuo will do that,\n> but my vote is to apply it. Why should we drop SunOS off the list\n> of supported ports?\n\nOk. I have tested patches from Bruce. Now tests for bit passed.\nRemaining issues seem that strtol() is broken on SunOS4, not detecting\nan overflow, which causes int4 and some of other tests failure. 
Should\nwe use our own strtol()?\n--\nTatsuo Ishii\n\n*** ./expected/int4.out\tWed Mar 15 08:06:56 2000\n--- ./results/int4.out\tThu Dec 20 10:22:52 2001\n***************\n*** 14,20 ****\n INSERT INTO INT4_TBL(f1) VALUES ('-2147483647');\n -- bad input values -- should give warnings \n INSERT INTO INT4_TBL(f1) VALUES ('1000000000000');\n- ERROR: pg_atoi: error reading \"1000000000000\": Numerical result out of range\n INSERT INTO INT4_TBL(f1) VALUES ('asdf');\n ERROR: pg_atoi: error in \"asdf\": can't parse \"asdf\"\n SELECT '' AS five, INT4_TBL.*;\n--- 14,19 ----\n***************\n*** 25,31 ****\n | -123456\n | 2147483647\n | -2147483647\n! (5 rows)\n \n SELECT '' AS four, i.* FROM INT4_TBL i WHERE i.f1 <> int2 '0';\n four | f1 \n--- 24,31 ----\n | -123456\n | 2147483647\n | -2147483647\n! | -727379968\n! (6 rows)\n \n SELECT '' AS four, i.* FROM INT4_TBL i WHERE i.f1 <> int2 '0';\n four | f1 \n***************\n*** 34,40 ****\n | -123456\n | 2147483647\n | -2147483647\n! (4 rows)\n \n SELECT '' AS four, i.* FROM INT4_TBL i WHERE i.f1 <> int4 '0';\n four | f1 \n--- 34,41 ----\n | -123456\n | 2147483647\n | -2147483647\n! | -727379968\n! (5 rows)\n \n SELECT '' AS four, i.* FROM INT4_TBL i WHERE i.f1 <> int4 '0';\n four | f1 \n***************\n*** 43,49 ****\n | -123456\n | 2147483647\n | -2147483647\n! (4 rows)\n \n SELECT '' AS one, i.* FROM INT4_TBL i WHERE i.f1 = int2 '0';\n one | f1 \n--- 44,51 ----\n | -123456\n | 2147483647\n | -2147483647\n! | -727379968\n! (5 rows)\n \n SELECT '' AS one, i.* FROM INT4_TBL i WHERE i.f1 = int2 '0';\n one | f1 \n***************\n*** 62,75 ****\n -----+-------------\n | -123456\n | -2147483647\n! (2 rows)\n \n SELECT '' AS two, i.* FROM INT4_TBL i WHERE i.f1 < int4 '0';\n two | f1 \n -----+-------------\n | -123456\n | -2147483647\n! (2 rows)\n \n SELECT '' AS three, i.* FROM INT4_TBL i WHERE i.f1 <= int2 '0';\n three | f1 \n--- 64,79 ----\n -----+-------------\n | -123456\n | -2147483647\n! | -727379968\n! 
(3 rows)\n \n SELECT '' AS two, i.* FROM INT4_TBL i WHERE i.f1 < int4 '0';\n two | f1 \n -----+-------------\n | -123456\n | -2147483647\n! | -727379968\n! (3 rows)\n \n SELECT '' AS three, i.* FROM INT4_TBL i WHERE i.f1 <= int2 '0';\n three | f1 \n***************\n*** 77,83 ****\n | 0\n | -123456\n | -2147483647\n! (3 rows)\n \n SELECT '' AS three, i.* FROM INT4_TBL i WHERE i.f1 <= int4 '0';\n three | f1 \n--- 81,88 ----\n | 0\n | -123456\n | -2147483647\n! | -727379968\n! (4 rows)\n \n SELECT '' AS three, i.* FROM INT4_TBL i WHERE i.f1 <= int4 '0';\n three | f1 \n***************\n*** 85,91 ****\n | 0\n | -123456\n | -2147483647\n! (3 rows)\n \n SELECT '' AS two, i.* FROM INT4_TBL i WHERE i.f1 > int2 '0';\n two | f1 \n--- 90,97 ----\n | 0\n | -123456\n | -2147483647\n! | -727379968\n! (4 rows)\n \n SELECT '' AS two, i.* FROM INT4_TBL i WHERE i.f1 > int2 '0';\n two | f1 \n***************\n*** 127,157 ****\n -- any evens \n SELECT '' AS three, i.* FROM INT4_TBL i WHERE (i.f1 % int4 '2') = int2 '0';\n three | f1 \n! -------+---------\n | 0\n | 123456\n | -123456\n! (3 rows)\n \n SELECT '' AS five, i.f1, i.f1 * int2 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n! ------+-------------+---------\n | 0 | 0\n | 123456 | 246912\n | -123456 | -246912\n | 2147483647 | -2\n | -2147483647 | 2\n! (5 rows)\n \n SELECT '' AS five, i.f1, i.f1 * int4 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n! ------+-------------+---------\n | 0 | 0\n | 123456 | 246912\n | -123456 | -246912\n | 2147483647 | -2\n | -2147483647 | 2\n! (5 rows)\n \n SELECT '' AS five, i.f1, i.f1 + int2 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n--- 133,166 ----\n -- any evens \n SELECT '' AS three, i.* FROM INT4_TBL i WHERE (i.f1 % int4 '2') = int2 '0';\n three | f1 \n! -------+------------\n | 0\n | 123456\n | -123456\n! | -727379968\n! (4 rows)\n \n SELECT '' AS five, i.f1, i.f1 * int2 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n! 
------+-------------+-------------\n | 0 | 0\n | 123456 | 246912\n | -123456 | -246912\n | 2147483647 | -2\n | -2147483647 | 2\n! | -727379968 | -1454759936\n! (6 rows)\n \n SELECT '' AS five, i.f1, i.f1 * int4 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n! ------+-------------+-------------\n | 0 | 0\n | 123456 | 246912\n | -123456 | -246912\n | 2147483647 | -2\n | -2147483647 | 2\n! | -727379968 | -1454759936\n! (6 rows)\n \n SELECT '' AS five, i.f1, i.f1 + int2 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n***************\n*** 161,167 ****\n | -123456 | -123454\n | 2147483647 | -2147483647\n | -2147483647 | -2147483645\n! (5 rows)\n \n SELECT '' AS five, i.f1, i.f1 + int4 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n--- 170,177 ----\n | -123456 | -123454\n | 2147483647 | -2147483647\n | -2147483647 | -2147483645\n! | -727379968 | -727379966\n! (6 rows)\n \n SELECT '' AS five, i.f1, i.f1 + int4 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n***************\n*** 171,177 ****\n | -123456 | -123454\n | 2147483647 | -2147483647\n | -2147483647 | -2147483645\n! (5 rows)\n \n SELECT '' AS five, i.f1, i.f1 - int2 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n--- 181,188 ----\n | -123456 | -123454\n | 2147483647 | -2147483647\n | -2147483647 | -2147483645\n! | -727379968 | -727379966\n! (6 rows)\n \n SELECT '' AS five, i.f1, i.f1 - int2 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n***************\n*** 181,187 ****\n | -123456 | -123458\n | 2147483647 | 2147483645\n | -2147483647 | 2147483647\n! (5 rows)\n \n SELECT '' AS five, i.f1, i.f1 - int4 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n--- 192,199 ----\n | -123456 | -123458\n | 2147483647 | 2147483645\n | -2147483647 | 2147483647\n! | -727379968 | -727379970\n! (6 rows)\n \n SELECT '' AS five, i.f1, i.f1 - int4 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n***************\n*** 191,197 ****\n | -123456 | -123458\n | 2147483647 | 2147483645\n | -2147483647 | 2147483647\n! 
(5 rows)\n \n SELECT '' AS five, i.f1, i.f1 / int2 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n--- 203,210 ----\n | -123456 | -123458\n | 2147483647 | 2147483645\n | -2147483647 | 2147483647\n! | -727379968 | -727379970\n! (6 rows)\n \n SELECT '' AS five, i.f1, i.f1 / int2 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n***************\n*** 201,207 ****\n | -123456 | -61728\n | 2147483647 | 1073741823\n | -2147483647 | -1073741823\n! (5 rows)\n \n SELECT '' AS five, i.f1, i.f1 / int4 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n--- 214,221 ----\n | -123456 | -61728\n | 2147483647 | 1073741823\n | -2147483647 | -1073741823\n! | -727379968 | -363689984\n! (6 rows)\n \n SELECT '' AS five, i.f1, i.f1 / int4 '2' AS x FROM INT4_TBL i;\n five | f1 | x \n***************\n*** 211,217 ****\n | -123456 | -61728\n | 2147483647 | 1073741823\n | -2147483647 | -1073741823\n! (5 rows)\n \n --\n -- more complex expressions\n--- 225,232 ----\n | -123456 | -61728\n | 2147483647 | 1073741823\n | -2147483647 | -1073741823\n! | -727379968 | -363689984\n! (6 rows)\n \n --\n -- more complex expressions\n\n======================================================================\n\n*** ./expected/numerology.out\tThu Mar 16 08:31:06 2000\n--- ./results/numerology.out\tThu Dec 20 10:25:54 2001\n***************\n*** 17,22 ****\n--- 17,23 ----\n ten | f1 \n -----+-------------\n | -2147483647\n+ | -727379968\n | -123456\n | -32767\n | -1234\n***************\n*** 26,32 ****\n | 32767\n | 123456\n | 2147483647\n! (10 rows)\n \n -- int4\n CREATE TABLE TEMP_INT4 (f1 INT4);\n--- 27,33 ----\n | 32767\n | 123456\n | 2147483647\n! 
(11 rows)\n \n -- int4\n CREATE TABLE TEMP_INT4 (f1 INT4);\n\n======================================================================\n\n*** ./expected/geometry.out\tFri Nov 30 03:57:31 2001\n--- ./results/geometry.out\tThu Dec 20 10:26:46 2001\n***************\n*** 150,160 ****\n six | box \n -----+----------------------------------------------------------------------------\n | (2.12132034355964,2.12132034355964),(-2.12132034355964,-2.12132034355964)\n! | (71.7106781186548,72.7106781186548),(-69.7106781186548,-68.7106781186548)\n! | (4.53553390593274,6.53553390593274),(-2.53553390593274,-0.535533905932738)\n! | (3.12132034355964,4.12132034355964),(-1.12132034355964,-0.121320343559643)\n | (107.071067811865,207.071067811865),(92.9289321881345,192.928932188135)\n! | (170.710678118655,70.7106781186548),(29.2893218813452,-70.7106781186548)\n (6 rows)\n \n -- translation\n--- 150,160 ----\n six | box \n -----+----------------------------------------------------------------------------\n | (2.12132034355964,2.12132034355964),(-2.12132034355964,-2.12132034355964)\n! | (71.7106781186547,72.7106781186547),(-69.7106781186547,-68.7106781186547)\n! | (4.53553390593274,6.53553390593274),(-2.53553390593274,-0.535533905932737)\n! | (3.12132034355964,4.12132034355964),(-1.12132034355964,-0.121320343559642)\n | (107.071067811865,207.071067811865),(92.9289321881345,192.928932188135)\n! 
| (170.710678118655,70.7106781186547),(29.2893218813453,-70.7106781186547)\n (6 rows)\n \n -- translation\n***************\n*** 445,452 ****\n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359078377e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718156754e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077235131e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n! | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081028))\n! 
| ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.59807621137373),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048617))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239385585e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n--- 445,452 ----\n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359078377e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718156754e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077235131e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n | 
((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n! | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887967),(-3.33012701896897,0.500000000081028))\n! | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239385585e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n***************\n*** 457,467 ****\n six | polygon \n 
-----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359078377e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718156754e-11),(2.12132034353258,-2.12132034358671),(-4.59307077235131e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181134),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181134),(200,-1.02068239385585e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 457,467 ----\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359078377e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718156754e-11),(2.12132034353258,-2.12132034358671),(-4.59307077235131e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181135),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239385585e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n***************\n*** 503,513 ****\n WHERE (p1.f1 <-> c1.f1) > 0\n ORDER BY distance, circle, point using <<;\n twentyfour | circle | point | distance \n! ------------+----------------+------------+-------------------\n! | <(100,0),100> | (5.1,34.5) | 0.976531926977964\n | <(1,2),3> | (-3,4) | 1.47213595499958\n | <(0,0),3> | (-3,4) | 2\n! | <(100,0),100> | (-3,4) | 3.07764064044151\n | <(100,0),100> | (-5,-12) | 5.68348972285122\n | <(1,3),5> | (-10,0) | 6.40175425099138\n | <(1,3),5> | (10,10) | 6.40175425099138\n--- 503,513 ----\n WHERE (p1.f1 <-> c1.f1) > 0\n ORDER BY distance, circle, point using <<;\n twentyfour | circle | point | distance \n! ------------+----------------+------------+------------------\n! | <(100,0),100> | (5.1,34.5) | 0.97653192697797\n | <(1,2),3> | (-3,4) | 1.47213595499958\n | <(0,0),3> | (-3,4) | 2\n! | <(100,0),100> | (-3,4) | 3.07764064044152\n | <(100,0),100> | (-5,-12) | 5.68348972285122\n | <(1,3),5> | (-10,0) | 6.40175425099138\n | <(1,3),5> | (10,10) | 6.40175425099138\n***************\n*** 519,525 ****\n | <(0,0),3> | (10,10) | 11.142135623731\n | <(1,3),5> | (-5,-12) | 11.1554944214035\n | <(1,2),3> | (-5,-12) | 12.2315462117278\n! | <(1,3),5> | (5.1,34.5) | 26.7657047773224\n | <(1,2),3> | (5.1,34.5) | 29.757594539282\n | <(0,0),3> | (5.1,34.5) | 31.8749193547455\n | <(100,200),10> | (5.1,34.5) | 180.778038568384\n--- 519,525 ----\n | <(0,0),3> | (10,10) | 11.142135623731\n | <(1,3),5> | (-5,-12) | 11.1554944214035\n | <(1,2),3> | (-5,-12) | 12.2315462117278\n! 
| <(1,3),5> | (5.1,34.5) | 26.7657047773223\n | <(1,2),3> | (5.1,34.5) | 29.757594539282\n | <(0,0),3> | (5.1,34.5) | 31.8749193547455\n | <(100,200),10> | (5.1,34.5) | 180.778038568384\n\n======================================================================\n\n*** ./expected/horology.out\tThu Nov 22 03:27:25 2001\n--- ./results/horology.out\tThu Dec 20 10:26:49 2001\n***************\n*** 1499,1508 ****\n | Wed Mar 15 13:14:02 2000 PST | @ 34 years | Tue Mar 15 13:14:02 1966 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 34 years | Sat Dec 31 17:32:01 1966 PST\n | Mon Jan 01 17:32:01 2001 PST | @ 34 years | Sun Jan 01 17:32:01 1967 PST\n! | Sat Sep 22 18:19:20 2001 PDT | @ 34 years | Fri Sep 22 18:19:20 1967 PDT\n! | Thu Jan 01 00:00:00 1970 PST | @ 5 mons 12 hours | Thu Jul 31 13:00:00 1969 PDT\n! | Thu Jan 01 00:00:00 1970 PST | @ 5 mons | Fri Aug 01 01:00:00 1969 PDT\n! | Thu Jan 01 00:00:00 1970 PST | @ 3 mons | Wed Oct 01 01:00:00 1969 PDT\n | Thu Jan 01 00:00:00 1970 PST | @ 10 days | Mon Dec 22 00:00:00 1969 PST\n | Thu Jan 01 00:00:00 1970 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Dec 30 21:56:56 1969 PST\n | Thu Jan 01 00:00:00 1970 PST | @ 5 hours | Wed Dec 31 19:00:00 1969 PST\n--- 1499,1508 ----\n | Wed Mar 15 13:14:02 2000 PST | @ 34 years | Tue Mar 15 13:14:02 1966 PST\n | Sun Dec 31 17:32:01 2000 PST | @ 34 years | Sat Dec 31 17:32:01 1966 PST\n | Mon Jan 01 17:32:01 2001 PST | @ 34 years | Sun Jan 01 17:32:01 1967 PST\n! | Sat Sep 22 18:19:20 2001 PDT | @ 34 years | Fri Sep 22 17:19:20 1967 PST\n! | Thu Jan 01 00:00:00 1970 PST | @ 5 mons 12 hours | Thu Jul 31 12:00:00 1969 PST\n! | Thu Jan 01 00:00:00 1970 PST | @ 5 mons | Fri Aug 01 00:00:00 1969 PST\n! 
| Thu Jan 01 00:00:00 1970 PST | @ 3 mons | Wed Oct 01 00:00:00 1969 PST\n | Thu Jan 01 00:00:00 1970 PST | @ 10 days | Mon Dec 22 00:00:00 1969 PST\n | Thu Jan 01 00:00:00 1970 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Dec 30 21:56:56 1969 PST\n | Thu Jan 01 00:00:00 1970 PST | @ 5 hours | Wed Dec 31 19:00:00 1969 PST\n\n======================================================================\n\n*** ./expected/union.out\tThu Nov 9 11:47:49 2000\n--- ./results/union.out\tThu Dec 20 10:28:50 2001\n***************\n*** 163,168 ****\n--- 163,169 ----\n -----------------------\n -1.2345678901234e+200\n -2147483647\n+ -727379968\n -123456\n -1004.3\n -34.84\n***************\n*** 170,176 ****\n 0\n 123456\n 2147483647\n! (9 rows)\n \n SELECT f1 AS ten FROM FLOAT8_TBL\n UNION ALL\n--- 171,177 ----\n 0\n 123456\n 2147483647\n! (10 rows)\n \n SELECT f1 AS ten FROM FLOAT8_TBL\n UNION ALL\n***************\n*** 187,193 ****\n -123456\n 2147483647\n -2147483647\n! (10 rows)\n \n SELECT f1 AS five FROM FLOAT8_TBL\n WHERE f1 BETWEEN -1e6 AND 1e6\n--- 188,195 ----\n -123456\n 2147483647\n -2147483647\n! -727379968\n! (11 rows)\n \n SELECT f1 AS five FROM FLOAT8_TBL\n WHERE f1 BETWEEN -1e6 AND 1e6\n\n======================================================================\n\n*** ./expected/random.out\tThu Jan 6 15:40:54 2000\n--- ./results/random.out\tThu Dec 20 10:28:58 2001\n***************\n*** 25,31 ****\n GROUP BY random HAVING count(random) > 1;\n random | count \n --------+-------\n! (0 rows)\n \n SELECT random FROM RANDOM_TBL\n WHERE random NOT BETWEEN 80 AND 120;\n--- 25,32 ----\n GROUP BY random HAVING count(random) > 1;\n random | count \n --------+-------\n! 105 | 2\n! 
(1 row)\n \n SELECT random FROM RANDOM_TBL\n WHERE random NOT BETWEEN 80 AND 120;\n\n======================================================================\n\n", "msg_date": "Thu, 20 Dec 2001 10:45:53 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp() " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Ok. I have tested patches from Bruce. Now tests for bit passed.\n> Remaining issues seem that strtol() is broken on SunOS4, not detecting\n> an overflow, which causes int4 and some of other tests failure.\n\nLooks like you might also want to use horology-no-DST-before-1970.out.\n\n> Should we use our own strtol()?\n\nMumble. The memcmp fix was all from well-tested parts, but I don't\nthink we have a canned test for strtol breakage available. Also,\nI believe that the SunOS strtol lossage has been known and tolerated\nin previous Postgres releases. Since it's not a regression, I'm going\nto change sides and vote with Peter: let's not fix this one now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Dec 2001 22:29:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp() " }, { "msg_contents": "> Looks like you might also want to use horology-no-DST-before-1970.out.\n\nDone.\n\n> > Should we use our own strtol()?\n> \n> Mumble. The memcmp fix was all from well-tested parts, but I don't\n> think we have a canned test for strtol breakage available. Also,\n> I believe that the SunOS strtol lossage has been known and tolerated\n> in previous Postgres releases. 
Since it's not a regression, I'm going\n> to change sides and vote with Peter: let's not fix this one now.\n\nOk.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 20 Dec 2001 13:25:17 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp() " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Ok. I have tested patches from Bruce. Now tests for bit passed.\n> > Remaining issues seem that strtol() is broken on SunOS4, not detecting\n> > an overflow, which causes int4 and some of other tests failure.\n> \n> Looks like you might also want to use horology-no-DST-before-1970.out.\n> \n> > Should we use our own strtol()?\n> \n> Mumble. The memcmp fix was all from well-tested parts, but I don't\n> think we have a canned test for strtol breakage available. Also,\n> I believe that the SunOS strtol lossage has been known and tolerated\n> in previous Postgres releases. Since it's not a regression, I'm going\n> to change sides and vote with Peter: let's not fix this one now.\n\nOK, what do people want with the memcmp() fix? Tatsuo and I say apply,\nTom is yes, or was, and Peter is probably no. Can I have more votes\neither way? I have already posted the patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Dec 2001 00:02:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, what do people want with the memcmp() fix? Tatsuo and I say apply,\n> Tom is yes, or was,\n\nStill is. 
I don't want to gin up a strtol fix from scratch at this\nlate date in our cycle, but I think that the memcmp fix is safe.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Dec 2001 00:06:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp() " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, what do people want with the memcmp() fix? Tatsuo and I say apply,\n> > Tom is yes, or was,\n> \n> Still is. I don't want to gin up a strtol fix from scratch at this\n> late date in our cycle, but I think that the memcmp fix is safe.\n\nOK, good. I will put the strtol on my list for 7.3. The memcmp is much\nmore significant. Overflow is minor for most uses.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Dec 2001 00:07:57 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp()" }, { "msg_contents": "> > > OK, what do people want with the memcmp() fix? Tatsuo and I say apply,\n> > > Tom is yes, or was,\n> > Still is. I don't want to gin up a strtol fix from scratch at this\n> > late date in our cycle, but I think that the memcmp fix is safe.\n> OK, good. I will put the strtol on my list for 7.3. The memcmp is much\n> more significant. Overflow is minor for most uses.\n\nRight. I'll plop SunOS back into the list of supported platforms for\nthis release. Thanks for the work Tatsuo and Bruce!\n\n - Thomas\n", "msg_date": "Thu, 20 Dec 2001 05:17:18 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SunOS patch for memcmp()" }, { "msg_contents": "> > > > OK, what do people want with the memcmp() fix? Tatsuo and I say apply,\n> > > > Tom is yes, or was,\n> > > Still is. 
I don't want to gin up a strtol fix from scratch at this\n> > > late date in our cycle, but I think that the memcmp fix is safe.\n> > OK, good. I will put the strtol on my list for 7.3. The memcmp is much\n> > more significant. Overflow is minor for most uses.\n> \n> Right. I'll plop SunOS back into the list of supported platforms for\n> this release. Thanks for the work Tatsuo and Bruce!\n\nOK, I have four votes for the patch, and one against. I will apply it\nnow. We can consider SunOS supported. There is the problem that\noverflow is not detected by strtol but that is not a critical feature:\n\n INSERT INTO INT4_TBL(f1) VALUES ('1000000000000');\n- ERROR: pg_atoi: error reading \"1000000000000\": Numerical result out of range\n\nI will try to get that fixed for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Dec 2001 16:21:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp()" }, { "msg_contents": "> > Right. I'll plop SunOS back into the list of supported platforms for\n> > this release. Thanks for the work Tatsuo and Bruce!\n> \n> OK, I have four votes for the patch, and one against. I will apply it\n> now. We can consider SunOS supported. There is the problem that\n> overflow is not detected by strtol but that is not a critical feature:\n> \n> INSERT INTO INT4_TBL(f1) VALUES ('1000000000000');\n> - ERROR: pg_atoi: error reading \"1000000000000\": Numerical result out of range\n> \n> I will try to get that fixed for 7.3.\n\nPlease note that (as I said before), I only tested the serial\nregression test. 
I did not execute the parallel test.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 21 Dec 2001 10:44:59 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: SunOS patch for memcmp()" } ]
[ { "msg_contents": "I'm looking at gram.y to munge the precision support for date/time per\nrecent discussions, and am noticing once again the extensions added to\nsupport ODBC by allowing empty parens after some SQL-defined \"constants\"\n(e.g. CURRENT_TIMESTAMP, CURRENT_USER, etc etc). Currently, these are\ndone by replicating code and by altering the allowed grammar, and it\nhappens to be the same area of code I need to be looking at.\n\nThere are existing mechanisms for supporting ODBC function mappings\nand function extensions which do not require altering gram.y. I would\npropose that we use these mechanisms for this case.\n\nThe mechanisms are:\n\n 1) remap function names in the ODBC driver (odbc/convert.c; trivial).\n 2) define (some) new functions as SQL functions (in odbc/odbc.sql;\ntrivial).\n\nThe remapped function names from (1) either already exist in PostgreSQL,\nor are newly defined in odbc.sql. For the particular cases gram.y was\naltered to support, the ODBC function names conflict in our grammar, so\nthey should be mapped to a non-conflicting name. For example, to support\n*ODBC* syntax CURRENT_TIMESTAMP() (illegal in SQL99), we would add a\nmapping in convert.c as\n\n \"CURRENT_TIMESTAMP\", odbc_timestamp\n\nand then augment odbc.sql to include\n\n create or replace function odbc_timestamp() returns timestamp as '\n select current_timestamp;\n ' language 'sql';\n\nThis can be done with the other half dozen or so functions now in\ngram.y.\n\nComments?\n\n - Thomas\n", "msg_date": "Fri, 07 Dec 2001 16:43:14 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "ODBC functions in gram.y" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I'm looking at gram.y to munge the precision support for date/time per\n> recent discussions, and am noticing once again the extensions added to\n> support ODBC by allowing empty parens after some SQL-defined \"constants\"\n> (e.g. 
CURRENT_TIMESTAMP, CURRENT_USER, etc etc). Currently, these are\n> done by replicating code and by altering the allowed grammar, and it\n> happens to be the same area of code I need to be looking at.\n\nIt's not apparent to me that doing this in the ODBC support is simpler\nor better than the hack Peter put into gram.y. But more importantly,\nat this point in the 7.2 cycle we ought not be making any changes that\nare not *essential* bug fixes. If you want to reorganize the support\nfor CURRENT_TIMESTAMP() like that, let's leave it for 7.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 12:20:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ODBC functions in gram.y " }, { "msg_contents": "> It's not apparent to me that doing this in the ODBC support is simpler\n> or better than the hack Peter put into gram.y.\n\n?? We have a long standing design policy to not pollute SQL syntax with\nODBC cruft. And we have existing mechanisms to easily enable that,\ndelegating that compatibility layer to the ODBC driver where it belongs.\nIf we phrase this instead as an SQL syntax bug report it becomes even\nclearer.\n\n> But more importantly,\n> at this point in the 7.2 cycle we ought not be making any changes that\n> are not *essential* bug fixes. If you want to reorganize the support\n> for CURRENT_TIMESTAMP() like that, let's leave it for 7.3.\n\n*grumble* The support for this is already organized. 
I'm just suggesting\nthat we avoid arbitrarily damaging our design and implementation with an\narbitrary reorganization done in the 7.2 development cycle, and that we\nfix it before it becomes embedded in a release.\n\n - Thomas\n", "msg_date": "Fri, 07 Dec 2001 17:41:13 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: ODBC functions in gram.y" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> But more importantly,\n>> at this point in the 7.2 cycle we ought not be making any changes that\n>> are not *essential* bug fixes. If you want to reorganize the support\n>> for CURRENT_TIMESTAMP() like that, let's leave it for 7.3.\n\n> *grumble* The support for this is already organized. I'm just suggesting\n> that we avoid arbitrarily damaging our design and implementation with an\n> arbitrary reorganization done in the 7.2 development cycle, and that we\n> fix it before it becomes embedded in a release.\n\nI hear you ... but I think the time to have complained was when Peter\nput it in, which was nigh two months ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 12:46:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ODBC functions in gram.y " }, { "msg_contents": "...\n> I hear you ... but I think the time to have complained was when Peter\n> put it in, which was nigh two months ago.\n\nYup. 
I did complain then (see archives) but apparently others did not\ncare to have an opinion at that time, and it was not fixed.\n\n - Thomas\n", "msg_date": "Fri, 07 Dec 2001 18:01:56 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: ODBC functions in gram.y" }, { "msg_contents": "\nHi ppl,\n\nActually I am trying to locate the physical memory\nlocation where a computation of a query is done e.g.\nif I am trying to find out the sum of 1000 record\nvalues of salaries and I want to know the actual\nphysical location of the memory where the SPI (server\nprogramming Interface) allocates it through the Upper\nMemory Executer by the \"palloc\" function of 'C'.\n\n\nThe whole idea is to retrieve the partial computation\nof the queries very fast through this memory location.\n\nregards,\nJ. James\n\n__________________________________________________\nDo You Yahoo!?\nSend your FREE holiday greetings online!\nhttp://greetings.yahoo.com\n", "msg_date": "Sat, 8 Dec 2001 01:41:01 -0800 (PST)", "msg_from": "John James <john0012001us@yahoo.com>", "msg_from_op": false, "msg_subject": "" }, { "msg_contents": "\n--- John James <john0012001us@yahoo.com> wrote: \n> \n> Hi ppl,\n> \n> Actually I am trying to locate the physical memory\n> location where a computation of a query is done e.g.\n> if I am trying to find out the sum of 1000 record\n> values of salaries and I want to know the actual\n> physical location of the memory where the SPI\n> (server\n> programming Interface) allocates it through the\n> Upper\n> Memory Executer by the \"palloc\" function of 'C'.\n> \n> \n> The whole idea is to retrieve the partial\n> computation\n> of the queries very fast through this memory\n> location.\n> \n> regards,\n> J. 
James\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Send your FREE holiday greetings online!\n> http://greetings.yahoo.com\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the\n> unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n__________________________________________________\nDo You Yahoo!?\nSend your FREE holiday greetings online!\nhttp://greetings.yahoo.com\n", "msg_date": "Sat, 8 Dec 2001 04:07:04 -0800 (PST)", "msg_from": "John James <john0012001us@yahoo.com>", "msg_from_op": false, "msg_subject": "mapping physical memeory space" }, { "msg_contents": "> ...\n> > I hear you ... but I think the time to have complained was when Peter\n> > put it in, which was nigh two months ago.\n> \n> Yup. I did complain then (see archives) but apparently others did not\n> care to have an opinion at that time, and it was not fixed.\n\nI think I complained too, but was told it was only a few lines of code,\nand that was OK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Dec 2001 05:50:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ODBC functions in gram.y" }, { "msg_contents": "Thomas Lockhart writes:\n\n> > It's not apparent to me that doing this in the ODBC support is simpler\n> > or better than the hack Peter put into gram.y.\n>\n> ?? We have a long standing design policy to not pollute SQL syntax with\n> ODBC cruft.\n\nBut that doesn't apply in this case because we're augmenting SQL cruft\nwith ODBC syntax. 
;-)\n\n> And we have existing mechanisms to easily enable that, delegating that\n> compatibility layer to the ODBC driver where it belongs.\n\nI'm not a great fan of rewriting SQL code behind the scenes, especially\nnot when it's a trivial case such as this. Moreover, developers of ODBC\napplications might wish to test their code in, say, psql. It's not like\nwe're adding lisp syntax, we're only allowing parentheses after a function\ncall -- like after all other function calls.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 10 Dec 2001 14:09:19 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: ODBC functions in gram.y" }, { "msg_contents": "OK, it is clear that you don't think it is important. But some do, and\nare willing to do the work to maintain the ODBC driver and its features\nin a consistent way. So it shouldn't be a problem, eh?\n\n - Thomas\n", "msg_date": "Mon, 10 Dec 2001 16:33:27 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: ODBC functions in gram.y" }, { "msg_contents": "Thomas Lockhart writes:\n\n> OK, it is clear that you don't think it is important. But some do, and\n> are willing to do the work to maintain the ODBC driver and its features\n> in a consistent way. So it shouldn't be a problem, eh?\n\nAs long as it works, more power to you. ;)\n\nBtw., the odbc.sql file still has some doubled-up lines from your\ninterrupted commit.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 11 Dec 2001 12:22:48 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: ODBC functions in gram.y" }, { "msg_contents": "> As long as it works, more power to you. ;)\n\nOK. btw, can you recall the use case for the CURRENT_xxx() syntax (with\nparens)? ODBC officially defines synonyms for these functions which were\nalready supported by the driver. 
So some other application was\ngenerating these function calls?\n\n> Btw., the odbc.sql file still has some doubled-up lines from your\n> interrupted commit.\n\nThanks. I'm fixing it now (but may have to wait a bit to get it\ncommitted; the DSL line is still trouble).\n\n - Thomas\n", "msg_date": "Tue, 11 Dec 2001 14:41:46 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: ODBC functions in gram.y" } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\tthomas@postgresql.org\t01/12/07 22:24:40\n\nModified files:\n\tdoc/src/sgml : Makefile cvs.sgml datatype.sgml docguide.sgml \n\t installation.sgml syntax.sgml \n\nLog message:\n\tUpdate list of currently supported platforms.\n\tMention SQL9x precision syntax for date/time types.\n\tUse PostgreSQL consistantly throughout docs. Before, usage was split evenly\n\tbetween Postgres and PostgreSQL.\n\nModified files:\n\tdoc/src/sgml/ref: abort.sgml allfiles.sgml alter_group.sgml \n\t alter_table.sgml alter_user.sgml analyze.sgml \n\t begin.sgml close.sgml cluster.sgml \n\t comment.sgml commit.sgml copy.sgml \n\t create_aggregate.sgml create_constraint.sgml \n\t create_database.sgml create_function.sgml \n\t create_group.sgml create_index.sgml \n\t create_language.sgml create_operator.sgml \n\t create_rule.sgml create_sequence.sgml \n\t create_table.sgml create_table_as.sgml \n\t create_trigger.sgml create_type.sgml \n\t create_user.sgml create_view.sgml \n\t createdb.sgml createlang.sgml createuser.sgml \n\t current_time.sgml current_timestamp.sgml \n\t current_user.sgml declare.sgml delete.sgml \n\t drop_aggregate.sgml drop_database.sgml \n\t drop_function.sgml drop_group.sgml \n\t drop_index.sgml drop_language.sgml \n\t drop_operator.sgml drop_rule.sgml \n\t drop_sequence.sgml drop_table.sgml \n\t drop_trigger.sgml drop_type.sgml \n\t drop_user.sgml drop_view.sgml dropdb.sgml \n\t droplang.sgml dropuser.sgml ecpg-ref.sgml \n\t end.sgml explain.sgml fetch.sgml grant.sgml \n\t initdb.sgml initlocation.sgml insert.sgml \n\t ipcclean.sgml listen.sgml lock.sgml move.sgml \n\t notify.sgml pg_ctl-ref.sgml pg_dump.sgml \n\t pg_dumpall.sgml pg_passwd.sgml pg_upgrade.sgml \n\t pgaccess-ref.sgml pgtclsh.sgml pgtksh.sgml \n\t postgres-ref.sgml postmaster.sgml \n\t psql-ref.sgml reindex.sgml reset.sgml \n\t revoke.sgml rollback.sgml select.sgml \n\t select_into.sgml set.sgml set_transaction.sgml \n\t 
show.sgml truncate.sgml unlisten.sgml \n\t update.sgml vacuum.sgml vacuumdb.sgml \n\nLog message:\n\tUse PostgreSQL consistantly throughout docs. Before, usage was split evenly\n\tbetween Postgres and PostgreSQL.\n\n", "msg_date": "Fri, 7 Dec 2001 22:24:41 -0500 (EST)", "msg_from": "thomas@postgresql.org", "msg_from_op": true, "msg_subject": "pgsql/doc/src/sgml Makefile cvs.sgml datatype. ..." }, { "msg_contents": "\nCan I ask why a full path of /usr/bin/ was added to collateindex.pl with\nno discussion and no mention in the cvs logs? This broke my automatic\nbuild because my binary is in the sgml binary directory. Can it be put\nback?\n\nThanks.\n\n> CVSROOT:\t/cvsroot\n> Module name:\tpgsql\n> Changes by:\tthomas@postgresql.org\t01/12/07 22:24:40\n> \n> Modified files:\n> \tdoc/src/sgml : Makefile cvs.sgml datatype.sgml docguide.sgml \n> \t installation.sgml syntax.sgml \n> \n> Log message:\n> \tUpdate list of currently supported platforms.\n> \tMention SQL9x precision syntax for date/time types.\n> \tUse PostgreSQL consistantly throughout docs. 
Before, usage was split evenly\n> \tbetween Postgres and PostgreSQL.\n> \n> Modified files:\n> \tdoc/src/sgml/ref: abort.sgml allfiles.sgml alter_group.sgml \n> \t alter_table.sgml alter_user.sgml analyze.sgml \n> \t begin.sgml close.sgml cluster.sgml \n> \t comment.sgml commit.sgml copy.sgml \n> \t create_aggregate.sgml create_constraint.sgml \n> \t create_database.sgml create_function.sgml \n> \t create_group.sgml create_index.sgml \n> \t create_language.sgml create_operator.sgml \n> \t create_rule.sgml create_sequence.sgml \n> \t create_table.sgml create_table_as.sgml \n> \t create_trigger.sgml create_type.sgml \n> \t create_user.sgml create_view.sgml \n> \t createdb.sgml createlang.sgml createuser.sgml \n> \t current_time.sgml current_timestamp.sgml \n> \t current_user.sgml declare.sgml delete.sgml \n> \t drop_aggregate.sgml drop_database.sgml \n> \t drop_function.sgml drop_group.sgml \n> \t drop_index.sgml drop_language.sgml \n> \t drop_operator.sgml drop_rule.sgml \n> \t drop_sequence.sgml drop_table.sgml \n> \t drop_trigger.sgml drop_type.sgml \n> \t drop_user.sgml drop_view.sgml dropdb.sgml \n> \t droplang.sgml dropuser.sgml ecpg-ref.sgml \n> \t end.sgml explain.sgml fetch.sgml grant.sgml \n> \t initdb.sgml initlocation.sgml insert.sgml \n> \t ipcclean.sgml listen.sgml lock.sgml move.sgml \n> \t notify.sgml pg_ctl-ref.sgml pg_dump.sgml \n> \t pg_dumpall.sgml pg_passwd.sgml pg_upgrade.sgml \n> \t pgaccess-ref.sgml pgtclsh.sgml pgtksh.sgml \n> \t postgres-ref.sgml postmaster.sgml \n> \t psql-ref.sgml reindex.sgml reset.sgml \n> \t revoke.sgml rollback.sgml select.sgml \n> \t select_into.sgml set.sgml set_transaction.sgml \n> \t show.sgml truncate.sgml unlisten.sgml \n> \t update.sgml vacuum.sgml vacuumdb.sgml \n> \n> Log message:\n> \tUse PostgreSQL consistantly throughout docs. 
Before, usage was split evenly\n> \tbetween Postgres and PostgreSQL.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Dec 2001 04:24:39 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql/doc/src/sgml Makefile cvs.sgml datatype. ..." }, { "msg_contents": "> Can I ask why a full path of /usr/bin/ was added to collateindex.pl with\n> no discussion and no mention in the cvs logs? This broke my automatic\n> build because my binary is in the sgml binary directory. Can it be put\n> back?\n\nWell, because it was a mistake on my part ;)\n\nBy default, *my* docs build does not work without an altered Makefile,\nsince the binaries are not found (being located in /usr/bin, not in some\nhardcoded directory elsewhere).\n\nThat would seem to be an undesirable feature (hardcoding paths to\nbinaries, wherever they are located). However, I did not intend to\ncommit that Makefile; it snuck through with the commit of the 50 other\nfiles in that same docs \"commitball\". Will back out the change (or you\ncan if you prefer). Of course, no hardcoded paths would be even\nbetter...\n\n - Thomas\n", "msg_date": "Mon, 10 Dec 2001 16:18:06 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: pgsql/doc/src/sgml Makefile cvs.sgml datatype. ..." }, { "msg_contents": "(cc'd to -hackers; the topic is my accidentally committing a modified\nMakefile, since recovered by Peter E.)\n\n> ... Of course, no hardcoded paths would be even\n> better...\n\nIt actually is a perl script, not a full-out standalone program. 
It\nseems like we would need this to be a config item or at least capable of\nbeing overridden with Makefile.custom.\n\nPeter, how would you suggest we think about generalizing this?\n\n - Thomas\n", "msg_date": "Tue, 11 Dec 2001 02:47:09 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: pgsql/doc/src/sgml Makefile cvs.sgml datatype. ..." }, { "msg_contents": "\nI understand the accidental commit problem. Happens to me too at times.\n\nThanks. That fixed my build. My SGML files are on the web again.\n\n---------------------------------------------------------------------------\n\n> > Can I ask why a full path of /usr/bin/ was added to collateindex.pl with\n> > no discussion and no mention in the cvs logs? This broke my automatic\n> > build because my binary is in the sgml binary directory. Can it be put\n> > back?\n> \n> Well, because it was a mistake on my part ;)\n> \n> By default, *my* docs build does not work without an altered Makefile,\n> since the binaries are not found (being located in /usr/bin, not in some\n> hardcoded directory elsewhere).\n> \n> That would seem to be an undesirable feature (hardcoding paths to\n> binaries, wherever they are located). However, I did not intend to\n> commit that Makefile; it snuck through with the commit of the 50 other\n> files in that same docs \"commitball\". Will back out the change (or you\n> can if you prefer). Of course, no hardcoded paths would be even\n> better...\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Dec 2001 05:40:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql/doc/src/sgml Makefile cvs.sgml datatype. ..." }, { "msg_contents": "Thomas Lockhart writes:\n\n> > ... Of course, no hardcoded paths would be even\n> > better...\n>\n> It actually is a perl script, not a full-out standalone program. It\n> seems like we would need this to be a config item or at least capable of\n> being overridden with Makefile.custom.\n\nProbably the latter, by putting ifdef's around the existing assignment.\n\nI like to keep several stylesheet releases around, so I prefer the current\nmethod of referring to the program via the stylesheet distribution's\ntop-level directory, because it makes it easier to switch everything by\nchanging a line in Makefile.custom. This is especially important because\nthere were critical bug fixes in collateindex in the latest stylesheet\nrelease (1.74b, I think), so it's good to know which one you're using\nexactly and you probably don't want to use the thing in /usr/bin anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 12 Dec 2001 23:24:33 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/doc/src/sgml Makefile cvs.sgml datatype. ..." }, { "msg_contents": "> Probably the latter, by putting ifdef's around the existing assignment.\n\nShould I commit an ifdef'd (actually, ifndef'd) version?\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 05:43:00 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/doc/src/sgml Makefile cvs.sgml datatype. ..." } ]
[ { "msg_contents": "I have added the option of explicitly specifying a postgresql.conf (-C) file on\nthe command line. I have also added two config file entries:\n\npghbaconfig, and pgdatadir.\n\n\"pghbaconfig\" allows multiple different databases running on the same machine\nto use the same hba file, rather than the hard coded $PGDATA/pg_hba.conf.\n\n\"pgdatadir\" works with the explicit configuration file so that the data\ndirectory can be specified in the configuration file, not on the command line\nor environment.\n\nOne can start postgres as:\n\npostmaster -C /etc/pgsql/mydb1.conf \n\nPostgres will get all its required information from \"mydb1.conf.\"\n\nDoes anyone see any value to these mods? I wanted to be able to run multiple\nPostgreSQL instances on the same machine, and having the ability to keep these\ncontrol files in a central location and share the HBA control files between\ndatabases may be helpful for admins. It will certainly make my life easier.", "msg_date": "Sat, 08 Dec 2001 10:58:42 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Explicit configuration file" }, { "msg_contents": "> Does anyone see any value to these mods? I wanted to be able to run multiple\n> PostgreSQL instances on the same machine, and having the ability to keep these\n> control files in a central location and share the HBA control files between\n> databases may be helpful for admins. It will certainly make my life\neasier.\n\nIsn't it easier to just use symlinks?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Dec 2001 06:04:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Does anyone see any value to these mods? I wanted to be able to run multiple\n> > PostgreSQL instances on the same machine, and having the ability to keep these\n> > control files in a central location and share the HBA control files between\n> > databases may be helpful for admins. It will certainly make my life\n> easier.\n> \n> Isn't it easier to just use symlinks?\n> \n\nThere is a sort of chicken and egg problem with PostgreSQL administration. Since\nthe settings are not in the \"etc\" directory, you need to know where PostgreSQL\nis installed before you can administer it. If you have multiple systems,\nconfigured differently, you have to hunt around to find your PostgreSQL\ndirectory on the machine on which you are working.\n\nWe use a \"push\" system to push out a whole PostgreSQL data directory to\nmultiple boxes. It would be good to be able to specify default configuration\ndifferences between the master and the slaves without having to edit the snapshot.\n\nAt our site, all the configuration files are in a centralized directory and\nunder CVS, except one. 
\n\nI could run: \n\tinitdb -D/u01/pgsql \n\tsu pgsql -c \"postgresql -C /etc/pgsql/mydb.conf\"\n\nAnd get all the settings I specify without having to copy files.\n\nI could go on, and they are all just nit-picks to be sure, but it just seems\n\"cleaner\" to be able to put the configuration in a separate place than the\ndata.\n", "msg_date": "Mon, 10 Dec 2001 07:29:09 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "mlw wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > > Does anyone see any value to these mods? I wanted to be able to run multiple\n> > > PostgreSQL instances on the same machine, and having the ability to keep these\n> > > control files in a central location and share the HBA control files between\n> > > databases may be helpful for admins. It will certainly make my life\n> > easier.\n> >\n> > Isn't it easier to just use symlinks?\n> >\n> \n> There is a sort of chicken and egg poblem with PostgreSQL administration. Since\n> the settings are not in the \"etc\" directory, you need to know where PostgreSQL\n> is installed before you can administer it. If you have multiple systems,\n> configured differently, you have to hunt around to find your PostgreSQL\n> directory on the machine on which you are working.\n> \n> We use a \"push\" system to push out a whole PostgreSQL data directory to\n> multiple boxes. It would be good to be able to specify default configuration\n> differences between the master the slaves without having to edit the snap shot.\n> \n> At our site, all the configuration files are in a centralized directory and\n> under CVS, except one. 
Guess which.\n> \n> Symlinks don't copy well via ssh.\n> \n> Having the configuration files outside a standard directory, ala \"/etc\" is not\n> very UNIX like.\n> \n> I could run:\n> initdb -D/u01/pgsql\n> su pgsql -c \"postgresql -C /etc/pgsql/mydb.conf\"\n> \n> And get all the settings I specify without having to copy files.\n> \n> I could go on, and they are all just nit-picks to be sure, but it just seems\n> \"cleaner\" to be able to put the configuration in a separate place than the\n> data.\n\nPerhaps, even use the standard GNU configure \"sysconfigdir\" setting to point to\npostgresql.conf, as well as pg_hba.conf. i.e. \"$sysconfigdir/postgresql.conf\".\nThat should be easy enough, and that would bring PostgreSQL in line with many of\nthe common practices.\n", "msg_date": "Mon, 10 Dec 2001 08:05:35 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "mlw writes:\n\n> I have added the option of explicitly specifying a postgresql.conf (-C) file on\n> the command line. I have also added two config file entries:\n>\n> pghbaconfig, and pgdatadir.\n>\n> \"pghbaconfig\" allows multiple different databases running on the same machine\n> to use the same hba file, rather than the hard coded $PGDATA/pg_hba.conf.\n\nThat could be mildly useful, although symlinks make this already possible,\nand a bit clearer, IMHO.\n\n> \"pgdatadir\" works with the explicit configuration file so that the data\n> directory can be specified in the configuration file, not on the command line\n> or environment.\n\nSo you exchange having to specify the data directory with having to\nspecify the configuration file which specifies the data directory. 
This\ndoesn't add any functionality, it only adds one more indirection.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 10 Dec 2001 14:07:44 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> mlw writes:\n> \n> > I have added the option of explicitly specifying a postgresql.conf (-C) file on\n> > the command line. I have also added two config file entries:\n> >\n> > pghbaconfig, and pgdatadir.\n> >\n> > \"pghbaconfig\" allows multiple different databases running on the same machine\n> > to use the same hba file, rather than the hard coded $PGDATA/pg_hba.conf.\n> \n> That could be mildly useful, although symlinks make this already possible,\n> and a bit clearer, IMHO.\n\nOn systems which support symlinks, yes. \n\nAlso, a system should be \"self documenting\" i.e. one should be able to put\nclues to why certain things were done. Symlinks allow one to do something, yes,\nbut if I leave the company, someone besides me should be able to administrate\nthe system I leave behind. \n\nDon't you consider symlinks as a kind of a hack around a basic flaw in the\nconfiguration process? Shouldn't the configuration file let you completely\nspecify how your system is configured? \n\n\n> \n> > \"pgdatadir\" works with the explicit configuration file so that the data\n> > directory can be specified in the configuration file, not on the command line\n> > or environment.\n> \n> So you exchange having to specify the data directory with having to\n> specify the configuration file which specifies the data directory. This\n> doesn't add any functionality, it only adds one more indirection.\n\nYes and no. 
There is an underlying methodology to most unix server programs:\nthe configuration information goes in one place, and the data is in another.\nFor people used to PostgreSQL, I don't think they see how alien it is to people\nthat know how to admin UNIX, but not Postgres.\n\nThink about named, sendmail, apache, dhcpd, sshd, etc. All these programs have\nthe notion that the configuration is separate from the working data. To see how\nthey are configured, you just go to the \"/etc\" or \"/etc/pgsql\" directory and\nread the configuration file(s).\n\nWith postgres, you need to know where, and go to the data directory, as ROOT\n(or the pg user) because you can't enter it otherwise, look at postgresql.conf,\ndo an \"ls -l\" to see which parts are symlinked and which are not. If you have\nmultiple PostgreSQL installations, you have to go to each directory and repeat\nthe process. (hyperbole, I know)\n\nI just posted a reply to a message from Bruce, and in it I theorized that,\nmaybe, even \"sysconfigdir\" could point to where postgresql.conf would be\nlocated by default.\n\nI am not suggesting we change the default behavior of PostgreSQL, I am merely\nsuggesting that adding these features may make it more comfortable for the UNIX\nadmin.\n", "msg_date": "Mon, 10 Dec 2001 08:31:59 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "...\n> I am not suggesting we change the default behavior of PostgreSQL, I am merely\n> suggesting that adding these features may make it more comfortable for the UNIX\n> admin.\n\nistm to be a useful addition. 
Symlinks are not a substitute for a\nconfigurable system, for the reasons Mark brought up.\n\n - Thomas\n", "msg_date": "Mon, 10 Dec 2001 16:24:25 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "mlw writes:\n\n> > That could be mildly useful, although symlinks make this already possible,\n> > and a bit clearer, IMHO.\n>\n> On systems which support symlinks, yes.\n\nAll systems that are able to run PostgreSQL support symlinks.\n\nReally.\n\n> Also, a system should be \"self documenting\" i.e. one should be able to put\n> clues to why certain things were done. Symlinks allow one to do something, yes,\n> but if I leave the company, someone besides me should be able to administrate\n> the system I leave behind.\n>\n> Don't you consider symlinks as a kind of a hack around a basic flaw in the\n> configuration process?\n\nElsewhere you stated that a certain aspect of the current situation is not\nvery Unix-like. Well, symlinks are very Unix-like. They have been\ninvented exactly for the purpose of sharing files for several purposes,\nwhile maintaining the identity of the \"original\" (else use hard links).\nThey are not hacks, they can be nested and used recursively, they are\nself-documenting and they don't prevent you from adding your own\ndocumentation either. I don't want to be adding a new feature anytime\nsomeone doesn't like the tools the operating system already provides.\n\n> Shouldn't the configuration file let you completely specify how your\n> system is configured?\n\nSure, it specifies how the PostgreSQL server behaves. It is not concerned\nwith telling your operating system how to behave.\n\n> Think about named, sendmail, apache, dhcpd, sshd, etc. All these programs have\n> the notion that he configuration is separate from the working data. 
To see how\n> they are configured, you just go to the \"/etc\" or \"/etc/pgsql\" directory and\n> read the configuration file(s).\n\nnamed: Configuration and data files are both under /var/named.\n\nsendmail: is not a suitable example for a well-designed, easy-to-use\nsoftware system.\n\napache: Does not easily allow running multiple instances on one host.\nThe virtual host setups I've seen put both configuration and data files\nunder the same per-host directory. Imagine what a mess it would be\notherwise.\n\ndhcpd, sshd: Only one instance per host supported/necessary.\n\nIronically, if sendmail hadn't been the bear it is, my former employer would\nnever have switched to a certain alternative MTA and I would never have\ngotten the inspiration for creating the postgresql.conf file the way it is\ntoday.\n\n> With postgres, you need to know where, and go to the data directory, as ROOT\n> (or the pg user) because you can't enter it otherwise, look at postgresql.conf,\n\nOther users don't have a business reading that file. That's a feature.\n\n> do an \"ls -l\" to see which parts are symlinked and which are not. If you have\n> multiple PostgreSQL installations, you have to go to each directory and repeat\n> the process. (hyperbole, I know)\n\nThe alternative is that you have to scan the startup code for each server\n(where would that be?) and check whether it has the -C option or not or\nwhether it has been overridden somewhere to see which configuration it\nwill end up using.\n\nAll of this wouldn't be a problem if we only allowed at most one server\nper host. Then we could standardize on fixed locations for everything.\nBut we do allow and many people do use more than one server instance per\nhost, and there it can be a great mess finding out where everything\nbelongs. Putting everything under one directory by default is undoubtedly\nthe cleanest solution. 
If you want to spread things around, use the tools\nthat the system gives you.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 11 Dec 2001 12:22:30 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "\nPeter Eisentraut wrote:\n> \n> mlw writes:\n> \n> > > That could be mildly useful, although symlinks make this already possible,\n> > > and a bit clearer, IMHO.\n> >\n> > On systems which support symlinks, yes.\n> \n> All systems that are able to run PostgreSQL support symlinks.\n> \n> Really.\n\nWindows does not support symlinks.\n\n> \n> > Also, a system should be \"self documenting\" i.e. one should be able to put\n> > clues to why certain things were done. Symlinks allow one to do something, yes,\n> > but if I leave the company, someone besides me should be able to administrate\n> > the system I leave behind.\n> >\n> > Don't you consider symlinks as a kind of a hack around a basic flaw in the\n> > configuration process?\n> \n> Elsewhere you stated that a certain aspect of the current situation is not\n> very Unix-like. Well, symlinks are very Unix-like. They have been\n> invented exactly for the purpose of sharing files for several purposes,\n> while maintaining the identity of the \"original\" (else use hard links).\n> They are not hacks, they can be nested and used recursively, they are\n> self-documenting and they don't prevent you from adding your own\n> documentation either. I don't want to be adding a new feature anytime\n> someone doesn't like the tools the operating system already provides.\n\nThis is a bogus argument, you must know that. Do you really believe this? \n\n> \n> > Shouldn't the configuration file let you completely specify how your\n> > system is configured?\n> \n> Sure, it specifies how the PostgreSQL server behaves. It is not concerned\n> with telling your operating system how to behave.\n\nWhat are you talking about? 
I am saying that the configuration file should be\nable to specify how PostgreSQL works. This has nothing to do with configuring\nthe OS.\n\n> \n> > Think about named, sendmail, apache, dhcpd, sshd, etc. All these programs have\n> > the notion that he configuration is separate from the working data. To see how\n> > they are configured, you just go to the \"/etc\" or \"/etc/pgsql\" directory and\n> > read the configuration file(s).\n> \n> named: Configuration and data files are both under /var/named.\n\nWrong, named uses named.conf (typically in the /etc directory) to point to\nwhere its files are. And named has the \"-c\" option to specify which config file\nto use.\n\n> \n> sendmail: is not a suitable example for a well-designed, easy-to-use\n> software system.\n\nSendmail works great, it is difficult to configure, sure, but it does work quite\nwell. And yes, sendmail has a \"-C\" option to use a different configuration file.\n\n\n> \n> apache: Does not easily allow running multiple instances on one host.\n> The virtual host setups I've seen put both configuration and data files\n> under the same per-host directory. Imagine what a mess it would be\n> otherwise.\n\nAgain, wrong. Apache is VERY easy to make run multiple instances. As a server,\nhowever, multiple instances must run on different IP ports. One Apache process\ncan listen on multiple ports, or you can use different configuration files to\nspecify how this works.\n\nIn apache, the option for the configuration file is \"-f\".\n\n> \n> dhcpd, sshd: Only one instance per host supported/necessary.\n\nAhh, again, wrong. dhcpd can be started to work on all the ethernet interfaces,\nor run one process for each, dhcpd uses the \"-cf\" option to specify which\nconfiguration file to use. 
sshd uses the \"-f\" option to specify which\nconfiguration to use, and more than one instance can be run on different ports.\n\n> \n> Ironically, if sendmail hadn't the bear it is, my former employer would\n> never have switched to a certain alternative MTA and I would never have\n> gotten the inspiration for creating the postgresql.conf file the way it is\n> today.\n\nThe postgresql.conf file is a great start. It should just be more inclusive of\nconfiguration details. Why not be able to specify the configuration file? Why\nnot be able to specify the hba config file? Why not be able to specify where\nthe data directory is?\n\nThese are things that UNIX admins EXPECT to be in a configuration file, why NOT\nput them in?\n\n> \n> > With postgres, you need to know where, and go to the data directory, as ROOT\n> > (or the pg user) because you can't enter it otherwise, look at postgresql.conf,\n> \n> Other users don't have a business reading that file. That's a feature.\n\nThat's not completely true either. One can take away all group read rights if\nthey want it to be secret; however, security through obscurity is stupid and\nfoolish.\n\n> \n> > do an \"ls -l\" to see which parts are symlinked and which are not. If you have\n> > multiple PostgreSQL installations, you have to go to each directory and repeat\n> > the process. (hyperbole, I know)\n> \n> The alternative is that you have to scan the startup code for each server\n> (where would that be?) and check whether it has the -C option or not or\n> whether it has been overridden somewhere to see which configuration it\n> will end up using.\n> \n> All of this wouldn't be a problem if we only allowed at most one server\n> per host. Then we could standardize on fixed locations for everything.\n> But we do allow and many people do use more than one server instance per\n> host, and there it can be a great mess finding out where everything\n> belongs. 
Putting everything under one directory by default is undoubtedly\n> the cleanest solution. If you want to spread things around, use the tools\n> that the system gives you.\n\nI find your arguments irrational. Why NOT allow postgres to behave like other\nUNIX server applications? If adding a feature neither hurts performance nor\nchanges the default behavior, but provides more flexibility for the admin, why\nNOT add it?\n\nArguing that \"Symlinks\" are clean is completely ridiculous. There are so many\nreasons why you DON'T want to use symlinks it is ridiculous. Yes symlinks are a\ntool in UNIX, one of its great features, but I think I speak for most UNIX\nadmins: if they can do something without a symlink, they would prefer to do so.\n\nArguing that PostgreSQL should be limited to one instance per host, putting\neverything under one directory as the \"cleanest\" solution, is absurd.\n\nAgain, virtually every other server on UNIX has the notion of a configuration\nfile outside of its data; why is PostgreSQL better for not having this feature?\nWhy is being different than other UNIX servers better? Why is NOT having\nconfiguration options in the configuration file cleaner?\n", "msg_date": "Tue, 11 Dec 2001 07:13:36 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "...\n> Isn't it easier to just use symlinks?\n\nMaybe. Sometimes. In some cases. But even in those cases, easier is not\nalways better.\n\nWe've had \"the symlink discussion\" from time to time in the past. Some\nfolks are very comfortable with symlinks as part of the PostgreSQL\ndesign model. But I'm *very* uncomfortable with symlinks in that role.\n\nistm that Mark's suggestion for a command line option to specify a\nconfiguration file (hmm, probably one or two more too) is well within\nthe scope of a reasonable feature for a good user interface. 
Configuring\na system should not require creating symlinks imho, especially not by\nhand.\n\n - Thomas\n", "msg_date": "Tue, 11 Dec 2001 15:05:12 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "mlw wrote:\n\n> > All systems that are able to run PostgreSQL support symlinks.\n> >\n> > Really.\n> \n> Windows does not supprt symlinks.\n\nSure it does; Windows does not call them symlinks, but they are used by\nthe desktop links and behave the same as you would expect.\nThey are sym-links, and CYGWIN does have symlinks.\n\n\n\n> Arguing that \"Symlinks\" are clean is completely rediculous. There are\n> so many reasons why you DON'T want to use symlinks it is rediculous.\n> Yes symlinks are a tool in UNIX, one of its great features, but I\n> think I speak for most UNIX admins, if they can do something without a\n> symlink, they would prefer to do so.\n\nWhy? I mean your argument to want a separate config file is one issue.\nBut I don't see your point here. (I've done unix admin for over 20\nyears).\nChasing a symlink or finding a config file - both work.", "msg_date": "Tue, 11 Dec 2001 13:04:20 -0700", "msg_from": "Doug Royer <Doug@royer.com>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "On Tuesday 11 December 2001 10:05 am, Thomas Lockhart wrote:\n> > Isn't it easier to just use symlinks?\n\n> Maybe. Sometimes. In some cases. But even in those cases, easier is not\n> always better.\n\n> We've had \"the symlink discussion\" from time to time in the past. Some\n> folks are very comfortable with symlinks as part of the PostgreSQL\n> design model. But I'm *very* uncomfortable with symlinks in that role.\n\nI most assuredly agree with Thomas and Mark on this issue. While some are \nvery comfortable with symlinks in this role, I for one am not. I would like \npostgresql.conf to not live in PGDATA. 
I would like postgresql.conf to not \nget overwritten if I have to re-initdb. I would, in fact, like to be able to \nput several postgresql.conf's, named differently, too, in /etc/pgsql for \nconsistency with things such as BIND, xinetd, sendmail, and virtually any \nother reasonable daemon.\n\nSuppose I host three databases on one box. One database is for a client \nnamed varitor, one is for ramifordistat, and one is for wgcr. I would love \nto have a /etc/pgsql with the three files varitor.conf, ramifordistat.conf, \nand wgcr.conf. Each config file would be able to specify the datadir (just \nlike a webserver's config file specifies a pageroot, or BIND's named.conf \nspecifies its datadir), as well as other unique config data such as IP \naddress and/or port to bind to. Then, postmaster could be started on these \nthree config files with, perhaps, 'postmaster \n--config-file=/etc/pgsql/wgcr.conf' and postmaster loads, with the right \nconfiguration, without specifying a datadir.\n\nTo me, this is the natural way to do things. Further, it is easily scripted, \nas well as easily packaged. And the vast majority of other daemons do it \nthis way.\n\nAnd allowing this way is not the same thing as trying to prevent it from \nbeing the existing way -- they can coexist.\n\nI just think, Peter, that you're being a mite rigid on this one.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 11 Dec 2001 22:04:49 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "> > Don't you consider symlinks as a kind of a hack around a basic flaw in the\n> > configuration process?\n> \n> Elsewhere you stated that a certain aspect of the current situation is not\n> very Unix-like. Well, symlinks are very Unix-like. 
They have been\n> invented exactly for the purpose of sharing files for several purposes,\n> while maintaining the identity of the \"original\" (else use hard links).\n> They are not hacks, they can be nested and used recursively, they are\n> self-documenting and they don't prevent you from adding your own\n> documentation either. I don't want to be adding a new feature anytime\n> someone doesn't like the tools the operating system already provides.\n\nLet me throw in my ideas. Clearly, symlinks work, and clearly, a special\n-C flag is more documenting than the symlinks.\n\nMy issue is, should we add yet another configuration flag to an already\nflag-heavy command and let people use symlinks for special cases, or\nshould we add the flag. I guess the question is whether the option will\nbe used by enough people that the extra flag is worth the added\ncomplexity.\n\nThere is added complexity. Every flag is evaluated by users to\ndetermine if the flag is of any use to them, even if they don't use it.\n\nI wonder if we should go one step further. Should we be specifying the\nconfig file on the command line _rather_ than the data directory. We\ncould then specify the data directory location in the config file. That\nseems like the direction we should be headed in, though I am not sure it\nis worth the added headache of the switch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Dec 2001 22:56:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> I wonder if we should go one step further. Should we be specifying the\n> config file on the command line _rather_ than the data directory. 
We\n> could then specify the data directory location in the config file. That\n> seems like the direction we should be headed in, though I am not sure it\n> is worth the added headache of the switch.\n\nThat's what mlw is advocating--all the startup script has to know is\nthe conf file location. I for one think it's totally worth doing, and\nit won't break any existing setups--if -C (or whatever) isn't\nspecified, postmaster expects PGDATA on the command line and gets the\nconfig file from there; if it is, PGDATA comes from the config file.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "12 Dec 2001 07:03:44 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "Bruce Momjian wrote:\n\n> I wonder if we should go one step further. Should we be specifying the\n> config file on the command line _rather_ than the data directory. We\n> could then specify the data directory location in the config file. 
That\n> seems like the direction we should be headed in, though I am not sure it\n> is worth the added headache of the switch.\n\nThat is what the patch I submitted does.\n\nIn the postgresql.conf file, you can specify where the data directory\nis, as well as where the pg_hba.conf file exists.\n\nThe purpose I had in mind was to allow sharing of pg_hba.conf files and\nkeep configuration separate from data.\n\nOne huge problem I have with symlinks is an admin has to \"notice\" that\ntwo files in two separate directories, possibly on two different\nvolumes, are the same file, so it is very likely the ramifications of\nediting one file are not obvious.\n\nIf, in the database configuration file, pghbaconfig points to\n\"/etc/pg_hba.conf\" it is likely that the global significance of the\nfile is obvious.\n\n(Note: I don't necessarily think \"pghbaconfig\" nor \"pgdatadir\" are the\nbest names for the parameters, but I couldn't think of anything else at\nthe time.)\n\nSymlinks are a perilous UNIX construct, yes, they make some things, that\nwould otherwise be a horrible kludge, elegant, but they are also no\nsubstitute for a properly configurable application.\n", "msg_date": "Wed, 12 Dec 2001 11:13:37 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "On Tue, Dec 11, 2001 at 10:56:27PM -0500, Bruce Momjian wrote:\n> \n> My issue is, should we add yet another configuration flag to an already\n> flag-heavy command and let people use symlinks for special cases, or\n> should we add the flag. I guess the question is whether the option will\n> be used by enough people that the extra flag is worth the added\n> complexity.\n\nI can tell you that the Debian (and probably RedHat) packaged binaries\nwould use this switch: It already installs pg_ident.conf, pg_hba.conf,\nand postgresql.conf in /etc/postgresql, and puts symlinks into the\nPGDATA dir pointing _back_ there, to keep the server happy. 
I'd foresee\nthe symlinks just going away.\n\n> \n> There is added complexity. Every flag is evaluated by users to\n> determine if the flag is of any use to them, even if they don't use it.\n\nMost users never look at _any_ of the flags. Most users who _compile their\nown_ read the man page and evaluate the flags, I agree.\n \n> I wonder if we should go one step further. Should we be specifying the\n> config file on the command line _rather_ than the data directory. We\n> could then specify the data directory location in the config file. That\n> seems like the direction we should be headed in, though I am not sure it\n> is worth the added headache of the switch.\n\nSeems that's what's actually been proposed, but in a backwards compatible\nway.\n\nRoss\n", "msg_date": "Wed, 12 Dec 2001 14:40:49 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "mlw writes:\n\n> One huge problem I have with symlinks is an admin has to \"notice\" that\n> two files in two separate directories, possibly on two different\n> volumes, are the same file, so it is very likely the ramifications of\n> editing one file are not obvious.\n>\n> If, in the database configuration file, pghbaconfig points to\n> \"/etc/pg_hba.conf\" it is likely that the global significance of the\n> file is obvious.\n\nHow about making the \"local\" pg_hba.conf symlinked to /etc/pg_hba.conf?\nShould be the same, no?\n\nI guess I'm losing the symlink debate, but anyway...\n\nConsider this: What if I want to share my postgresql.conf file (because\nof the clever performance tuning) but not my pg_hba.conf file (because I\nhave completely different databases and users in each server). 
I think\nthat case should be covered as long as we're moving in this direction.\n\nI think looming in the back is the answer, \"add an 'include' directive to\npostgresql.conf\".\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 12 Dec 2001 23:25:41 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> mlw writes:\n> \n> > One huge problem I have with symlinks is an admin has to \"notice\" that\n> > two files in two separate directories, possibly on two different\n> > volumes, are the same file, so it is very likely the ramifications of\n> > editing one file are not obvious.\n> >\n> > If, in the database configuration file, pghbaconfig points to\n> > \"/etc/pg_hba.conf\" it is likely, that the global significance of the\n> > file is obvious.\n> \n> How about making the \"local\" pg_hba.conf symlinked to /etc/pg_hba.conf?\n> Should be the same, no?\n> \n> I guess I'm losing the symlink debate, but anyway...\n> \n> Consider this: What if I want to share my postgresql.conf file (because\n> of the clever performance tuning) but not my pg_hba.conf file (because I\n> have completely different databases and users in each server). I think\n> that case should be covered as long as we're moving in this direction.\n\nIn my patch, if no pghbaconfig setting is made, then the default is to\nlook in the data directory, as it has always done. If no pgdatadir is\nspecified, then it will get the information the old way too.\n\npostmaster -C mysuperconfig.conf -D /u01/db\n\nShould work fine.\n\n> \n> I think looming in the back is the answer, \"add an 'include' directive to\n> postgresql.conf\".\n\nYikes. Obviously that is next, however do we need this functionality?\nWill a few changes be enough, or do we need includes? 
Do we need\nincludes within includes?\n", "msg_date": "Wed, 12 Dec 2001 17:56:59 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> >\n> > > I wonder if we should go one step further. Should we be specifying the\n> > > config file on the command line _rather_ than the data directory. We\n> > > could then specify the data directory location in the config file. That\n> > > seems like the direction we should be headed in, though I am not sure it\n> > > is worth the added headache of the switch.\n> >\n> > That is what the patch I submitted does.\n> >\n> > In the postgresql.conf file, you can specify where the data directory\n> > is, as well as the pg_hba.conf file exists.\n> >\n> > The purpose I had in mind was to allow sharing of pg_hba.conf files and\n> > keep configuration separate from data.\n> \n> My issue is that once we put the data directory location in\n> postgresql.conf, we can't share that with other installs because they\n> need different data locations, so what have we really gained _except_\n> having the *.conf file in a different location.\n> \n> Seems any solution will need to allow the *.conf file itself to be\n> shared.\n> \n> Here is an idea. Allow multiple -C parameters to be used, with the\n> files read in order, with newer parameters overriding older ones. Seems\n> this would be better than #includes.\n> \n> Now that I think of it, #include does the same thing. Instead of\n> multiple -C, we have one file per instance and #include the global one,\n> then set whatever we want.\n> \n> One major thing this does that _symlinks_ do not do is allow most\n> parameters to be set globally for postgresql.conf, and for individual\n> instances to override _part_ of the global file.\n> \n> Sorry I did not read the patch earlier. 
I was more responding to the\n> emails.\n\nThere is no reason that:\n\npostmaster -C /path/postgresql.conf -D /u01/mydb \n\nWould not work. (Just don't specify a data directory)\n", "msg_date": "Wed, 12 Dec 2001 21:35:46 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > I wonder if we should go one step further. Should we be specifying the\n> > config file on the command line _rather_ than the data directory. We\n> > could then specify the data directory location in the config file. That\n> > seems like the direction we should be headed in, though I am not sure it\n> > is worth the added headache of the switch.\n> \n> That is what the patch I submitted does.\n> \n> In the postgresql.conf file, you can specify where the data directory\n> is, as well as the pg_hba.conf file exists.\n> \n> The purpose I had in mind was to allow sharing of pg_hba.conf files and\n> keep configuration separate from data.\n\nMy issue is that once we put the data directory location in\npostgresql.conf, we can't share that with other installs because they\nneed different data locations, so what have we really gained _except_\nhaving the *.conf file in a different location.\n\nSeems any solution will need to allow the *.conf file itself to be\nshared.\n\nHere is an idea. Allow multiple -C parameters to be used, with the\nfiles read in order, with newer parameters overriding older ones. Seems\nthis would be better than #includes.\n\nNow that I think of it, #include does the same thing. Instead of\nmultiple -C, we have one file per instance and #include the global one,\nthen set whatever we want.\n\nOne major thing this does that _symlinks_ do not do is allow most\nparameters to be set globally for postgresql.conf, and for individual\ninstances to override _part_ of the global file.\n\nSorry I did not read the patch earlier. 
I was more responding to the\nemails.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Dec 2001 21:36:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" } ]
[ { "msg_contents": "Hi,\n\nwhile fixing a bug in ecpg that caused it to interpret a pointer to a\nstruct as a struct and thus use s.a syntax instead of s->a I found one\nrather annoying bug. If an argument is a pointer to a struct and the\nindicator is a struct ecpg segfaults. Since I don't think we can keep it\nthis way, I will try to fix it asap. IMO a stable release should not have\nsuch a bug.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sat, 8 Dec 2001 21:42:24 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Bug in ecpg that has to be fixed prior 7.2" } ]
[ { "msg_contents": "Stefan Hadjistoytchev (sth@hq.bsbg.net) reports a bug with a severity of 1\nThe lower the number the more severe it is.\n\nShort Description\nBLOB (lo type) objects could not be restored\n\nLong Description\nHi :)\n\nProblem appeared in POSTGRESQL 7.2b3 CVS distribution\n\nAfter creating a table containing a BLOB (lo type) column and filling it in, an error occurred restoring this table using \"pg_restore\":\n\nERROR: Unable to identify an operator '=' for types 'oid' and 'lo'\n\tYou will have to retype this query using an explicit cast\n\t\nUsing Postgres 7.1.3 - there is no such error :(, but I need 7.2\n because there are other fixes in it and I need them.\n\nPlease, see example:\n\n\nSample Code\n-- user root\n-- in postgres 7.2.b3 directory\n\n./configure\n./gmake\n./gmake install\n\nchmod -R 777 /usr/local/pgsql\n\n-- user postgres\n\ncd /usr/local/pgsql/bin\n./initdb -D /usr/local/pgsql/data\n\n-- ACTION: change access permissions in /usr/local/pgsql/data/pg_hba.conf to allow access\n\n./postmaster -D /usr/local/pgsql/data -i &\n./createdb test\n./psql -f test1.sql test\n\n>-- test1.sql contains:\n>\n>CREATE TYPE lo ( \n> internallength=4, externallength=10, \n> input=int4in, output=int4out, \n> default='', passedbyvalue \n>);\n>\n>CREATE TABLE \"tb_snimki\" (\n> \"egn\" varchar(10) NOT NULL, \n> \"img\" lo\n>-- CONSTRAINT \"snimki_pkey\" PRIMARY KEY (\"egn\")\n>);\n\n-- ACTION: After this I inserted 1 small BLOB object (10K) in \"tb_snimki\" \n-- from another PC with \"egn\" = \"1234\"\n\n./pg_dump -b -Fc test > dump1.bin\n./dropdb test\n./createdb test\n\n./pg_restore -v -Fc -d test dump1.bin\n\n-- RESULT:\npg_restore: connecting to database for restore\npg_restore: creating TYPE lo\npg_restore: creating TABLE tb_snimki\npg_restore: restoring data for table tb_snimki\npg_restore: restoring data for table BLOBS\npg_restore: connecting to database test as user postgres\npg_restore: creating table for large object 
cross-references\npg_restore: restored 1 large objects\npg_restore: fixing up large object cross-reference for tb_snimki\npg_restore: fixing large object cross-references for tb_snimki.img\npg_restore: [archiver (db)] error while updating column \"img\" of table \"tb_snimki\": \nERROR: Unable to identify an operator '=' for types 'oid' and 'lo'\n\tYou will have to retype this query using an explicit cast\npg_restore: *** aborted because of error\n\n\nNo file was uploaded with this report\n\n", "msg_date": "Sun, 9 Dec 2001 05:54:26 -0500 (EST)", "msg_from": "pgsql-bugs@postgresql.org", "msg_from_op": true, "msg_subject": "Bug #533: BLOB (lo type) objects could not be restored" }, { "msg_contents": "pgsql-bugs@postgresql.org wrote:\n> \n> Stefan Hadjistoytchev (sth@hq.bsbg.net) reports a bug with a severity of 1\n> The lower the number the more severe it is.\n> \n> Short Description\n> BLOB (lo type) objects could not be restored\n> \n> Long Description\n> Hi :)\n> \n> Problem appeared in POSTGRESQL 7.2b3 CVS distribution\n> \n> After creating a table containing BLOB (lo type) column and filling it in an error occured restoring this table using \"pg_restore\":\n> \n> ERROR: Unable to identify an operator '=' for types 'oid' and 'lo'\n> You will have to retype this query using an explicit cast\n\npg_restore in 7.2 could handle the type lo defined in\ncontrib/lo but couldn't handle the type lo you created by\n CREATE TYPE lo (\n internallength=4, externallength=10,\n input=int4in, output=int4out,\n default='', passedbyvalue\n );\nThe type is incomplete and hard to handle in pg_restore.\n\n> Using Postgres 7.1.3 - there is no such error :(,\n\npg_restore in 7.1.x couldn't restore large objects\nof type lo properly though it doesn't cause any error.\n\nOne way I can think of is to create the type lo in\ncontrib/lo in advance of pg_restore and ignore \nthe 'create type lo ..' 
commands in pg_restore but\nit seems pretty unnatural for pg_restore.\n\nComments ?\n\nI've worried about the use of type lo which has\nbeen used only(?) in ODBC. It doesn't seem wrong\nbecause PostgreSQL hasn't provided the proper\nBLOB type. However I'm not sure now how to handle\nlarge objects in ODBC in future.\n\nComments ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 10 Dec 2001 16:38:57 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Bug #533: BLOB (lo type) objects could not be restored" } ]
[ { "msg_contents": "I added several error messages and hopefully by doing this removed all\npossibilities for a segfault.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 9 Dec 2001 16:24:53 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "ECPG is okay again" }, { "msg_contents": "In current CVS I've got;\n\ngcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I./../include -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSION=9 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o descriptor.o descriptor.c\ndescriptor.c: In function `ECPGnumeric_lvalue':\ndescriptor.c:63: incompatible type for argument 2 of `mmerror'\ndescriptor.c:63: too few arguments to function `mmerror'\ndescriptor.c: In function `drop_descriptor':\ndescriptor.c:124: incompatible type for argument 2 of `mmerror'\ndescriptor.c:124: too few arguments to function `mmerror'\ndescriptor.c: In function `lookup_descriptor':\ndescriptor.c:147: incompatible type for argument 2 of `mmerror'\ndescriptor.c:147: too few arguments to function `mmerror'\ndescriptor.c: In function `output_get_descr_header':\ndescriptor.c:164: incompatible type for argument 2 of `mmerror'\ndescriptor.c:164: too few arguments to function `mmerror'\ndescriptor.c: In function `output_get_descr':\n\n\nThis is a Linux, glibc 2.1.3\n\n\tOleg\n\nOn Sun, 9 Dec 2001, Michael Meskes wrote:\n\n> I added several error messages and hopefully by doing this removed all\n> possibilities for a segfault.\n>\n> Michael\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 10 Dec 
2001 15:39:45 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: ECPG is okay again" }, { "msg_contents": "On Mon, Dec 10, 2001 at 03:39:45PM +0300, Oleg Bartunov wrote:\n> In current CVS I've got;\n> \n> gcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I./../include -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSION=9 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o descriptor.o descriptor.c\n> descriptor.c: In function `ECPGnumeric_lvalue':\n> ...\n\nOops, sorry, seems I forgot two files. Thanks for the report. Just committed\nthese two files.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 10 Dec 2001 15:56:52 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: ECPG is okay again" } ]
[ { "msg_contents": "I've committed some small changes to try to resolve the problem of SQL99-mandated\ntruncation of precision of time fields for TIMESTAMP and TIME data types.\nPer the recent discussion, I've relaxed the mandated behavior for unspecified\nprecision of time fields for both date/time literals and for schema definitions.\n\nI've also committed some small changes to the ODBC driver to convert CURRENT_TIME()\n(note the parens) to an ODBC-specific function per the usual convention for\nsupporting goofy ODBC functions.\n\nI would *like to* remove the corresponding syntax extensions in gram.y since they\nhave little to no apparent value. However, the archives are currently broken so I\ndon't know if there are other reasons for wanting them there.\n", "msg_date": "Sun, 9 Dec 2001 19:22:24 -0500 (EST)", "msg_from": "thomas@postgresql.org", "msg_from_op": true, "msg_subject": "Minor updates for date/time committed" } ]
[ { "msg_contents": "I'm trying to use the archives, and am getting the following response\nfor a query on the hackers list:\n\n An error occured! \n\n connectDBStart() -- connect() failed: Connection refused\n Is the postmaster running (with -i) at 'db.postgresql.org'\n and accepting connections on TCP/IP port 5437? \n\nDoes the postmaster need to be restarted?\n\n - Thomas\n", "msg_date": "Mon, 10 Dec 2001 00:23:46 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Dead archives?" } ]
[ { "msg_contents": "I've committed some small changes to try to resolve the problem of\nSQL99-mandated truncation of precision of time fields for TIMESTAMP and\nTIME data types. Per the recent discussion, I've relaxed the mandated\nbehavior for unspecified precision of time fields for both date/time\nliterals and for schema definitions.\n\nI've also committed some small changes to the ODBC driver to convert\nCURRENT_TIME() (note the parens) to an ODBC-specific function per the\nusual convention for supporting goofy ODBC functions.\n\nI would *like to* remove the corresponding syntax extensions in gram.y\nsince they have little to no apparent value. However, the archives are\ncurrently broken so I don't know if there are other reasons for wanting\nthem there.\n\n - Thomas\n", "msg_date": "Mon, 10 Dec 2001 04:59:07 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Minor updates for date/time committed" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Thomas Lockhart [mailto:lockhart@fourpalms.org] \n> Sent: 07 December 2001 16:15\n> To: Hackers List; Bill Studenmund; tegge@repas-aeg.de; Mark \n> Knox; Neale.Ferguson@softwareAG-usa.com; prlw1@cam.ac.uk; Tatsuo Ishii\n> Subject: Third call for platform testing\n> \n> \n> We've got most platforms ironed out, with just a few left to \n> get a definitive report. It looks like we'll end up dropping \n> a few platforms for this release (the first time in several \n> years that the number of supported platforms decreased!).\n> \n> Windows/native Magnus Hagander (clients only)\n> Any reports?\n\nCompiled OK with a few warnings using M$ VC++ 6, Service Pack 5. \n\nRandomly tested psql/libpq with 7.2b3 on Slackware Linux 8.0 - no problems\nfound.\n\nNOTE: I'm not able to test libpq++ as I don't know C++ or have any apps to\ncompile/test it with - it compiled OK though :-).\n\nRegards, Dave.\n\n", "msg_date": "Mon, 10 Dec 2001 09:28:47 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "Dave Page wrote:\n> \n> > Windows/native Magnus Hagander (clients only)\n> Compiled OK with a few warnings using M$ VC++ 6, Service Pack 5.\n> Randomly tested psql/libpq with 7.2b3 on Slackware Linux 8.0 - no problems\n> found.\n\nGreat! Thanks for testing and reporting...\n\n - Thomas\n", "msg_date": "Tue, 11 Dec 2001 06:52:41 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" } ]
[ { "msg_contents": "Hello together,\n\nI've worked out several topics so that PostgreSQL will run on NetWare in the future. \nNow I want to start to add all the stuff that is necessary for the NetWare port piece by\npiece.\n\nI need to add some modifications to the sources and I need to add another port directory.\n\nMy idea is the following:\nPG7.2 is in Beta now and I think it makes no sense to modify these sources now. \nIf PG7.2 is finished and the next release is starting I want to add NetWare support in this\nfuture release. Additionally, I want to participate in development of the database itself.\n\nIf someone can tell me what steps I have to do next to include the changes this would be helpful.\n\nregards\n\n", "msg_date": "Mon, 10 Dec 2001 11:56:00 +0200", "msg_from": "Ulrich Neumann<u_neumann@gne.de>", "msg_from_op": true, "msg_subject": "New Port targetting NetWare" } ]
[ { "msg_contents": "My nightly regression tests bailed on sparc and i386 obsd:\n\ngcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\n-I./../include -\nI../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSION=9\n-DPATCHLEVEL=0 -DI\nNCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o descriptor.o descriptor.c\ndescriptor.c: In function `ECPGnumeric_lvalue':\ndescriptor.c:63: incompatible type for argument 2 of `mmerror'\ndescriptor.c:63: too few arguments to function `mmerror'\ndescriptor.c: In function `drop_descriptor':\ndescriptor.c:124: incompatible type for argument 2 of `mmerror'\ndescriptor.c:124: too few arguments to function `mmerror'\ndescriptor.c: In function `lookup_descriptor':\ndescriptor.c:147: incompatible type for argument 2 of `mmerror'\ndescriptor.c:147: too few arguments to function `mmerror'\ndescriptor.c: In function `output_get_descr_header':\ndescriptor.c:164: incompatible type for argument 2 of `mmerror'\ndescriptor.c:164: too few arguments to function `mmerror'\ndescriptor.c: In function `output_get_descr':\ndescriptor.c:186: incompatible type for argument 2 of `mmerror'\ndescriptor.c:186: too few arguments to function `mmerror'\ndescriptor.c:189: incompatible type for argument 2 of `mmerror'\ndescriptor.c:189: too few arguments to function `mmerror'\ngmake[4]: *** [descriptor.o] Error 1\ngmake[4]: Leaving directory\n`/usr/home/bpalmer/APPS/regression/postgresql-snapsh\not/src/interfaces/ecpg/preproc'\ngmake[3]: *** [all] Error 2\ngmake[3]: Leaving directory\n`/usr/home/bpalmer/APPS/regression/postgresql-snapsh\not/src/interfaces/ecpg'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory\n`/usr/home/bpalmer/APPS/regression/postgresql-snapsh\not/src/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory\n`/usr/home/bpalmer/APPS/regression/postgresql-snapsh\not/src'\ngmake: *** [all] Error 2\n*** Error code 2\n\n\n\n\n\n\n- brandon\n\n", "msg_date": "Mon, 10 Dec 2001 09:13:12 -0500 (EST)", "msg_from": 
"merlin <merlin@nyc2600.org>", "msg_from_op": true, "msg_subject": "OpenBSD snapshot failure" }, { "msg_contents": "(sorry, this was from me, forgot to change pine aliases)\n\n- brandon\n\n\nOn Mon, 10 Dec 2001, merlin wrote:\n\n> My nightly regression tests bailed on sparc and i386 obsd\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Mon, 10 Dec 2001 12:38:01 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: OpenBSD snapshot failure" } ]
[ { "msg_contents": "\nWith the changes that have been going on, I'd like to release *something*\nthis week ... my personal feeling is to go with a Beta4, since not all\nchanges were docs related ...\n\nDoes anyone have an opinion on this? I'd like to wrap something up\ntonight/tomorrow morning, unless someone has something they are really\nsitting on?\n\nThanks ...\n\n\n", "msg_date": "Mon, 10 Dec 2001 13:53:35 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Beta4 or RC1 ... ?" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> With the changes that have been going on, I'd like to release *something*\n> this week ... my personal feeling is to go with a Beta4, since not all\n> changes were docs related ...\n> Does anyone have an opinion on this? I'd like to wrap something up\n> tonight/tomorrow morning, unless someone has something they are really\n> sitting on?\n\nI just found (I believe) the cause of Tatsuo's report of instability on\na 4-way AIX machine. I still have a couple of uncommitted patches for\nother problems, but should be ready to go by the end of this evening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Dec 2001 16:30:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta4 or RC1 ... ? " } ]
[ { "msg_contents": "Got it. The AIX compiler apparently feels free to rearrange the\nsequence\n\n\t\tproc->lwWaiting = true;\n\t\tproc->lwExclusive = (mode == LW_EXCLUSIVE);\n\t\tproc->lwWaitLink = NULL;\n\t\tif (lock->head == NULL)\n\t\t\tlock->head = proc;\n\t\telse\n\t\t\tlock->tail->lwWaitLink = proc;\n\t\tlock->tail = proc;\n\n\t\t/* Can release the mutex now */\n\t\tSpinLockRelease_NoHoldoff(&lock->mutex);\n\ninto something wherein the SpinLockRelease (which is just \"x = 0\")\noccurs before the last two assignments into the lock structure.\nBoo, hiss. Evidently, on your multiprocessor machine, there may be\nanother CPU that is able to obtain the spinlock and then read the\nun-updated lock values before the stores occur.\n\nDeclaring the lock pointer \"volatile\" seems to prevent this misbehavior.\n\nPersonally I'd call this a compiler bug; isn't it supposed to consider\nsemicolons as sequence points? I never heard that rearranging the order\nof stores into memory was considered a kosher optimization of C code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Dec 2001 16:24:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Intermediate report for AIX 5L port " }, { "msg_contents": "...\n> Declaring the lock pointer \"volatile\" seems to prevent this misbehavior.\n\nGreat. That is what it is anyway, right?\n\n> Personally I'd call this a compiler bug; isn't it supposed to consider\n> semicolons as sequence points? I never heard that rearranging the order\n> of stores into memory was considered a kosher optimization of C code.\n\nSure it is. Presumably \"-O0\" or equivalent would have kept this from\nhappening, but seemingly unrelated stores into non-overlapping memory\nare always fair game at even modest levels of optimization. 
The \"x = 0\"\nis cheaper than the other operations, though it may be reordered for\ninternal RISC-y reasons rather than \"cheapest first\" considerations.\n\n - Thomas\n", "msg_date": "Tue, 11 Dec 2001 00:53:40 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Intermediate report for AIX 5L port" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Declaring the lock pointer \"volatile\" seems to prevent this misbehavior.\n\n> Great. That is what it is anyway, right?\n\nThe reason I hadn't declared it volatile in the first place was that I\nwas assuming there'd be sequence points at the spin lock and unlock\ncalls. The order of operations *while holding the lock* is, and should\nbe, optimizable. Pushing updates outside of the range where the lock is\nheld, however, isn't cool.\n\nNow that I think a little more, I am worried about the idea that the\nsame thing could potentially happen in other modules that access shared\nmemory and use spinlocks to serialize the access. Perhaps the fix I\napplied was wrong, and the correct fix is to change S_LOCK from\n\n#define S_UNLOCK(lock)\t\t(*(lock) = 0)\n\nto\n\n#define S_UNLOCK(lock)\t\t(*((volatile slock_t *) (lock)) = 0)\n\nAssuming that the compiler does faithfully treat that as a sequence\npoint, it would solve potential related problems in other modules, not\nonly LWLock. I note that we've carefully marked all the TAS operations\nas using volatile pointers ... but we forgot about S_UNLOCK.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Dec 2001 20:06:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Intermediate report for AIX 5L port " }, { "msg_contents": "> Got it. 
The AIX compiler apparently feels free to rearrange the\n> sequence\n> \n> \t\tproc->lwWaiting = true;\n> \t\tproc->lwExclusive = (mode == LW_EXCLUSIVE);\n> \t\tproc->lwWaitLink = NULL;\n> \t\tif (lock->head == NULL)\n> \t\t\tlock->head = proc;\n> \t\telse\n> \t\t\tlock->tail->lwWaitLink = proc;\n> \t\tlock->tail = proc;\n> \n> \t\t/* Can release the mutex now */\n> \t\tSpinLockRelease_NoHoldoff(&lock->mutex);\n> \n> into something wherein the SpinLockRelease (which is just \"x = 0\")\n> occurs before the last two assignments into the lock structure.\n> Boo, hiss. Evidently, on your multiprocessor machine, there may be\n> another CPU that is able to obtain the spinlock and then read the\n> un-updated lock values before the stores occur.\n> \n> Declaring the lock pointer \"volatile\" seems to prevent this misbehavior.\n> \n> Personally I'd call this a compiler bug; isn't it supposed to consider\n> semicolons as sequence points? I never heard that rearranging the order\n> of stores into memory was considered a kosher optimization of C code.\n\nLooks funny to me too. I will let the IBM engineers know what you have\nfound. Thanks.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 11 Dec 2001 10:07:55 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Intermediate report for AIX 5L port " }, { "msg_contents": "I said:\n> Now that I think a little more, I am worried about the idea that the\n> same thing could potentially happen in other modules that access shared\n> memory and use spinlocks to serialize the access. Perhaps the fix I\n> applied was wrong, and the correct fix is to change S_LOCK from\n> #define S_UNLOCK(lock)\t\t(*(lock) = 0)\n> to\n> #define S_UNLOCK(lock)\t\t(*((volatile slock_t *) (lock)) = 0)\n\nI have applied this patch also, since on reflection it seems the clearly\nRight Thing. 
However, I find that AIX 5's compiler must have the LWLock\npointers marked volatile as well, else it still generates bad code.\n\nOriginal assembly code fragment (approximately lines 244-271 of\nlwlock.c):\n\n\tl\tr3,8(r25)\n\tstb\tr24,44(r25)\n\tcmpi\t0,r0,0\n\tstb\tr4,45(r25)\n\tst\tr23,48(r25)\n\tcal\tr5,0(r0)\n\tstx\tr23,r28,r27 <----- spinlock release\n\tbc\tBO_IF_NOT,CR0_EQ,__L834\n\tst\tr25,12(r26) <----- store into lock->head\n\tst\tr25,16(r26) <----- store into lock->tail\n\tl\tr4,12(r25)\n\tbl\t.IpcSemaphoreLock{PR}\n\nWith \"volatile\" added in S_UNLOCK:\n\n\tstb\tr24,44(r25)\n\tstb\tr3,45(r25)\n\tcmpi\t0,r0,0\n\tcal\tr5,0(r0)\n\tst\tr23,48(r25)\n\tbc\tBO_IF_NOT,CR0_EQ,__L81c\n\tst\tr25,12(r26) <----- store into lock->head\n\tstx\tr23,r28,r27 <----- spinlock release\n\tl\tr3,8(r25)\n\tst\tr25,16(r26) <----- store into lock->tail\n\tl\tr4,12(r25)\n\tbl\t.IpcSemaphoreLock{PR}\n\nWith \"volatile\" lock pointer in LWLockAcquire:\n\n\tstb\tr25,44(r23)\n\tstb\tr3,45(r23)\n\tcmpi\t0,r0,0\n\tcal\tr5,0(r0)\n\tst\tr24,48(r23)\n\tbc\tBO_IF_NOT,CR0_EQ,__L850\n\tst\tr23,12(r26) <----- store into lock->head\n\tst\tr23,16(r26) <----- store into lock->tail\n\tstx\tr24,r28,r27 <----- spinlock release\n\tl\tr3,8(r23)\n\tl\tr4,12(r23)\n\tbl\t.IpcSemaphoreLock{PR}\n\nI believe the second of these cases is inarguably a compiler bug.\nIt is moving a store (into lock->tail) across a store through a\nvolatile-qualified pointer. As I read the spec, that's not kosher.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Dec 2001 22:20:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Intermediate report for AIX 5L port " }, { "msg_contents": "> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> >> Declaring the lock pointer \"volatile\" seems to prevent this misbehavior.\n> \n> > Great. 
That is what it is anyway, right?\n> \n> The reason I hadn't declared it volatile in the first place was that I\n> was assuming there'd be sequence points at the spin lock and unlock\n> calls. The order of operations *while holding the lock* is, and should\n> be, optimizable. Pushing updates outside of the range where the lock is\n> held, however, isn't cool.\n> \n> Now that I think a little more, I am worried about the idea that the\n> same thing could potentially happen in other modules that access shared\n> memory and use spinlocks to serialize the access. Perhaps the fix I\n> applied was wrong, and the correct fix is to change S_LOCK from\n\nHere is my limited experience with volatile. There was a BSD/OS\nmultiport card that mapped card memory to a RAM address, but the\npointers pointing to that address weren't marked as volatile. An\nupgrade to a better compiler caused the driver to fail, and I finally\nfigured out why. Marking them as volatile fixed it.\n\nSeems this is the same case. We are not pointing to memory on a card\nbut to shared memory which can change on its own, hence it is volatile.\n\nTom, I assume what you are saying is that the access to the spinlocks,\nalready marked as volatile, should have prevented any code from\nmigrating over those locks. I guess my big question is does any\nvolatile access prevent optimization of other variables across that\nvolatiles access? I didn't think that was guaranteed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Dec 2001 20:37:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Intermediate report for AIX 5L port" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, I assume what you are saying is that the access to the spinlocks,\n> already marked as volatile, should have prevented any code from\n> migrating over those locks. I guess my big question is does any\n> volatile access prevent optimization of other variables across that\n> volatiles access? I didn't think that was guaranteed.\n\nAfter eyeballing the C spec some more, I think you might be right.\nIf that's the correct reading then it is indeed necessary for lwlock.c\nto mark the whole lock structure as volatile, not only the spinlock\nfields.\n\nHowever, if that's true then (a) 7.2 has three other modules that are\npotentially vulnerable to similar problems; (b) prior releases had\nmany more places that were potentially vulnerable --- ie, all the\nmodules that used to use spinlocks directly and now use LWLocks.\nIf this sort of behavior is allowed, ISTM we should have been seeing\nmajor instability on lots of SMP machines.\n\nComments? Do we need to put a bunch of \"volatile\" keywords into\nevery place that uses raw spinlocks? If so, why wasn't the\nprevious code equally broken??\n\nI don't think the places that use LWLocks need volatile markers on\ntheir data structures, since the LWLock lock and unlock calls will\nbe out-of-line subroutine calls. 
But for spinlocks that can be\ninlined, it seems there is a risk.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Dec 2001 16:08:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Intermediate report for AIX 5L port " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom, I assume what you are saying is that the access to the spinlocks,\n> > already marked as volatile, should have prevented any code from\n> > migrating over those locks. I guess my big question is does any\n> > volatile access prevent optimization of other variables across that\n> > volatiles access? I didn't think that was guaranteed.\n> \n> After eyeballing the C spec some more, I think you might be right.\n> If that's the correct reading then it is indeed necessary for lwlock.c\n> to mark the whole lock structure as volatile, not only the spinlock\n> fields.\n\nOK.\n\n> However, if that's true then (a) 7.2 has three other modules that are\n> potentially vulnerable to similar problems; (b) prior releases had\n\nThat was going to be my next question.\n\n> many more places that were potentially vulnerable --- ie, all the\n> modules that used to use spinlocks directly and now use LWLocks.\n> If this sort of behavior is allowed, ISTM we should have been seeing\n> major instability on lots of SMP machines.\n\nAgain, a good question. No idea.\n\nHere is a more general question:\n\nIf you do:\n\n\tget lock;\n\ta=4\n\trelease lock;\n\nCan the compiler reorder that to:\n\n\ta=4\n\tget lock;\n\trelease lock;\n\nIt can see the lock values don't have any effect on 'a'. What actually\ndoes keep this stuff from moving around?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Dec 2001 05:35:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Intermediate report for AIX 5L port" }, { "msg_contents": "...\n> It can see the lock values don't have any effect on 'a'. What actually\n> does keep this stuff from moving around?\n\nLack of ambition?\n\nI'm pretty sure that the only reasons *to* reorder instructions are:\n\n1) there could be a performance gain, as in \n a) loop unrolling\n b) pipeline fill considerations\n c) unnecessary assignment (e.g. result is ignored, or only used on one\npath)\n\n2) the optimization level allows it (-O0 does not reorder at all)\n\nI vaguely recall that the gcc docs discuss the kinds of optimizations\nallowed at each level. Presumably IBM's AIX compiler was a bit more\naggressive in evaluating costs or pipeline fills than is gcc on other\nprocessors.\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 15:29:22 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Intermediate report for AIX 5L port" }, { "msg_contents": "Tom, Have you fixed the case on AIX 5L? I still see hunging backends\nwith pgbench -c 64. Maybe AIX 5L (more precisely xlc) needs additional\nfixed? If so, I'm wondering why see no improvements even with gcc.\n--\nTatsuo Ishii\n\n(dbx) where\nsemop(??, ??, ??) 
at 0xd02be73c\nIpcSemaphoreLock() at 0x100091d0\nLWLockAcquire() at 0x10019df4\nReleaseBuffer() at 0x100205a4\nCatalogIndexFetchTuple() at 0x1005a31c\nAttributeRelidNumIndexScan() at 0x1005a4e4\nbuild_tupdesc_ind() at 0x10030c5c\nRelationBuildTupleDesc() at 0x10031180\nRelationBuildDesc() at 0x100309c0\nRelationNameGetRelation() at 0x100337b0\nrelation_openr() at 0x10014f84\nheap_openr() at 0x10014d3c\nCatalogCacheInitializeCache() at 0x1000f194\nSearchCatCache() at 0x1000fe9c\nSearchSysCache() at 0x1000daac\neqsel() at 0x100c5388\nOidFunctionCall4() at 0x10045ccc\nrestriction_selectivity() at 0x100c7594\nclauselist_selectivity() at 0x100c72a4\nrestrictlist_selectivity() at 0x100c7424\nset_baserel_size_estimates() at 0x100c8924\nset_plain_rel_pathlist() at 0x100e1268\nset_base_rel_pathlists() at 0x100e13a4\nmake_one_rel() at 0x100e1518\nsubplanner() at 0x100e0b6c\nquery_planner() at 0x100e0d98\ngrouping_planner() at 0x100df0f0\nsubquery_planner() at 0x100dff00\nplanner() at 0x100dffe0\npg_plan_query() at 0x1001c6b0\npg_exec_query_string() at 0x1001c530\nPostgresMain() at 0x1001c0a8\nDoBackend() at 0x10003380\nBackendStartup() at 0x1000287c\nServerLoop() at 0x10002be8\nPostmasterMain() at 0x10004934\nmain() at 0x100004ec\n(dbx) \n\n> I said:\n> > Now that I think a little more, I am worried about the idea that the\n> > same thing could potentially happen in other modules that access shared\n> > memory and use spinlocks to serialize the access. Perhaps the fix I\n> > applied was wrong, and the correct fix is to change S_LOCK from\n> > #define S_UNLOCK(lock)\t\t(*(lock) = 0)\n> > to\n> > #define S_UNLOCK(lock)\t\t(*((volatile slock_t *) (lock)) = 0)\n> \n> I have applied this patch also, since on reflection it seems the clearly\n> Right Thing. 
However, I find that AIX 5's compiler must have the LWLock\n> pointers marked volatile as well, else it still generates bad code.\n> \n> Original assembly code fragment (approximately lines 244-271 of\n> lwlock.c):\n> \n> \tl\tr3,8(r25)\n> \tstb\tr24,44(r25)\n> \tcmpi\t0,r0,0\n> \tstb\tr4,45(r25)\n> \tst\tr23,48(r25)\n> \tcal\tr5,0(r0)\n> \tstx\tr23,r28,r27 <----- spinlock release\n> \tbc\tBO_IF_NOT,CR0_EQ,__L834\n> \tst\tr25,12(r26) <----- store into lock->head\n> \tst\tr25,16(r26) <----- store into lock->tail\n> \tl\tr4,12(r25)\n> \tbl\t.IpcSemaphoreLock{PR}\n> \n> With \"volatile\" added in S_UNLOCK:\n> \n> \tstb\tr24,44(r25)\n> \tstb\tr3,45(r25)\n> \tcmpi\t0,r0,0\n> \tcal\tr5,0(r0)\n> \tst\tr23,48(r25)\n> \tbc\tBO_IF_NOT,CR0_EQ,__L81c\n> \tst\tr25,12(r26) <----- store into lock->head\n> \tstx\tr23,r28,r27 <----- spinlock release\n> \tl\tr3,8(r25)\n> \tst\tr25,16(r26) <----- store into lock->tail\n> \tl\tr4,12(r25)\n> \tbl\t.IpcSemaphoreLock{PR}\n> \n> With \"volatile\" lock pointer in LWLockAcquire:\n> \n> \tstb\tr25,44(r23)\n> \tstb\tr3,45(r23)\n> \tcmpi\t0,r0,0\n> \tcal\tr5,0(r0)\n> \tst\tr24,48(r23)\n> \tbc\tBO_IF_NOT,CR0_EQ,__L850\n> \tst\tr23,12(r26) <----- store into lock->head\n> \tst\tr23,16(r26) <----- store into lock->tail\n> \tstx\tr24,r28,r27 <----- spinlock release\n> \tl\tr3,8(r23)\n> \tl\tr4,12(r23)\n> \tbl\t.IpcSemaphoreLock{PR}\n> \n> I believe the second of these cases is inarguably a compiler bug.\n> It is moving a store (into lock->tail) across a store through a\n> volatile-qualified pointer. As I read the spec, that's not kosher.\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Fri, 14 Dec 2001 14:06:13 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Intermediate report for AIX 5L port " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Tom, Have you fixed the case on AIX 5L? I still see hunging backends\n> with pgbench -c 64. 
Maybe AIX 5L (more precisely xlc) needs additional\n> fixed? If so, I'm wondering why see no improvements even with gcc.\n\nHmm, I thought I'd fixed it ... are you using CVS tip? The largest\ntest case I tried was 5 client * 10000 transactions, but that ran to\ncompletion just fine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Dec 2001 10:21:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Intermediate report for AIX 5L port " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Tom, Have you fixed the case on AIX 5L? I still see hunging backends\n> > with pgbench -c 64. Maybe AIX 5L (more precisely xlc) needs additional\n> > fixed? If so, I'm wondering why see no improvements even with gcc.\n> \n> Hmm, I thought I'd fixed it ... are you using CVS tip? The largest\n> test case I tried was 5 client * 10000 transactions, but that ran to\n> completion just fine.\n\nI believe I grabbed the latest current source. But I will try again...\n--\nTatsuo Ishii\n", "msg_date": "Sat, 15 Dec 2001 11:15:49 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Intermediate report for AIX 5L port " } ]
[ { "msg_contents": "Hi guys,\n\nI'm fixing the function editor in phpPgAdmin, however there's a problem. We\nuse this query to return a list of all the possible function return types:\n\nSELECT typname\nFROM pg_type pt\nWHERE typname NOT LIKE '\\\\\\_%' AND typname NOT LIKE 'pg\\\\\\_%'\nEXCEPT\nSELECT relname\nFROM pg_class\nWHERE relkind = 'S'\nORDER BY typname;\n\nPart of this result is this:\n\nbit\nbool\nbox\n...\ntext\ntid\ntime\ntimestamp\ntimetz\ntinterval\ntransactions_log\n\nNow the problem is that if you have a function with a defined return type of\n'boolean' or 'timestamp with time zone' or 'timestamp without time zone',\nhow on earth do you match it to something in this list? Alternatively, how\ndo I query in all the _alias_ types?\n\nChris\n\n", "msg_date": "Tue, 11 Dec 2001 11:12:01 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "phpPgAdmin query help" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Now the problem is that if you have a function with a defined return type of\n> 'boolean' or 'timestamp with time zone' or 'timestamp without time zone',\n> how on earth do you match it to something in this list?\n\nformat_type(pg_type.oid, NULL) will deliver the prettified equivalent\nname of each type.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Dec 2001 22:28:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: phpPgAdmin query help " } ]
[ { "msg_contents": "Hello together,\n\nsorry for asking again, but my server was broken yesterday and the result was that I haven�t got\nthe responses to my post from yesterday. If someone has answered, please do it again. Thank you.\n\n\nHere the post again:\ni�ve worked out several topics so that postgreSQL will run on NetWare in the future. \nNow I want to start to add all the stuff that is neccessary for the NetWare port peace by\npeace.\n\nI need to add some modifications to the sources and I need to add another port directory.\n\nMy idea is the following:\nPG7.2 is in Beta now and I think it makes no sense to modify these sources now. \nIf PG7.2 is finished and the next release is starting I want to add NetWare support in this\nfuture release. Additional I want to participate in development of the databse itself.\n\nIf someone can tell me what steps i have to do next to include the changes this would be helpful.\n\nregards\n", "msg_date": "Tue, 11 Dec 2001 10:32:00 +0200", "msg_from": "Ulrich Neumann<u_neumann@gne.de>", "msg_from_op": true, "msg_subject": "New Port targetting NetWare" }, { "msg_contents": "...\n> I need to add some modifications to the sources and I need to add another port directory.\n\nAnother port directory is (almost) always no problem. \"Some\nmodifications to the sources\" needs some clarification and examples; I'm\nnot sure what APIs NetWare supports and if they are nothing close to the\nPosix set of APIs we are currently using there may need to be some\ndiscussion on whether it is appropriate. But, we need the examples first\n;)\n\nSo, give a short description of the kinds of things which are different\nin NetWare. Are there just a few areas of differences, or is every API\ndifferent? 
If the latter (and especially if they are different from\nevery other API in the world) then we may not be happy with large\nadditions to the sources, but would be happy supporting it as a patch\nset.\n\nBut those possibilities are just speculation until we have more details\non what is required for the port. And the group discusses it, so please\ndon't infer a decision from my hints or suggestions above.\n\nAnd welcome to PostgreSQL development!\n\n - Thomas\n", "msg_date": "Tue, 11 Dec 2001 14:49:49 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: New Port targetting NetWare" } ]
[ { "msg_contents": "> Does anyone have an opinion on this? I'd like to wrap something up\n> tonight/tomorrow morning, unless someone has something they are really\n> sitting on?\n\nI know there aren't too much Hungarian users but I'd like to fix hu.po in\nthe backend which can cause crash now. This should be done in 8 hours from\nnow.\n\nZoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Tue, 11 Dec 2001 10:30:57 +0100 (CET)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "Re: Beta4 or RC1 ... ?" }, { "msg_contents": "Kovacs Zoltan writes:\n\n> > Does anyone have an opinion on this? I'd like to wrap something up\n> > tonight/tomorrow morning, unless someone has something they are really\n> > sitting on?\n>\n> I know there aren't too much Hungarian users but I'd like to fix hu.po in\n> the backend which can cause crash now. This should be done in 8 hours from\n> now.\n\nYou probably want to delete all the German translations that are left in\nthe file.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 11 Dec 2001 23:20:12 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Beta4 or RC1 ... ?" }, { "msg_contents": "> > I know there aren't too much Hungarian users but I'd like to fix hu.po in\n> > the backend which can cause crash now. This should be done in 8 hours from\n> > now.\n> \n> You probably want to delete all the German translations that are left in\n> the file.\n\nYes, sure. 
AFAIK I deleted them all but I'm not sure if some remained in\nthere.\n\nZoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Wed, 12 Dec 2001 00:10:42 +0100 (CET)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "Re: Beta4 or RC1 ... ?" } ]
[ { "msg_contents": "\nMaybe I am missing something obvious, but I am unable to load\nlarger tables (~300k rows) with COPY command that pg_dump by\ndefault produces. Yes, dump as INSERTs works but is slow.\n\n\"Cant\" as in \"it does not work with the default setup I have\nrunning on devel machine\" - 128M mem, 128M swap, basically\ndefault postgresql.conf:\n\n1) Too few WAL files.\n - well, increase the wal_files (eg to 32),\n\n2) Machine runs out of swap, PostgreSQL seems to keep whole TX\n in memory.\n - So I must put 1G of swap? But what if I have 1G of rows?\n\nOr shortly: during pg_restore the resource requirements are\norder of magnitude higher than during pg_dump, which is\nnon-obvious and may be a bad surprise when in real trouble.\n\nThis is annoying, especially as dump as COPY's should be\npreferred as it is faster and smaller. Ofcourse the\ndump-as-INSERTs has also positive side - eg. ALTER TABLE DROP\nCOLUMN with sed...\n\nPatch below implements '-m NUM' switch to pg_dump, which splits\neach COPY command to chunks, each maximum NUM rows.\n\nComments? 
What am I missing?\n\n-- \nmarko\n\n\n\nIndex: doc/src/sgml/ref/pg_dump.sgml\n===================================================================\nRCS file: /opt/cvs/pgsql/pgsql/doc/src/sgml/ref/pg_dump.sgml,v\nretrieving revision 1.41\ndiff -u -c -r1.41 pg_dump.sgml\n*** doc/src/sgml/ref/pg_dump.sgml\t8 Dec 2001 03:24:37 -0000\t1.41\n--- doc/src/sgml/ref/pg_dump.sgml\t11 Dec 2001 03:58:30 -0000\n***************\n*** 35,40 ****\n--- 35,41 ----\n <arg>-f <replaceable>file</replaceable></arg> \n <arg>-F <replaceable>format</replaceable></arg>\n <arg>-i</arg>\n+ <arg>-m <replaceable>num_rows</replaceable></arg>\n <group> <arg>-n</arg> <arg>-N</arg> </group>\n <arg>-o</arg>\n <arg>-O</arg>\n***************\n*** 301,306 ****\n--- 302,321 ----\n \tif you need to override the version check (and if\n \t<command>pg_dump</command> then fails, don't\n \tsay you weren't warned).\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n+ <term>-m <replaceable class=\"parameter\">num_rows</replaceable></term>\n+ <term>--maxrows=<replaceable class=\"parameter\">num_rows</replaceable></term>\n+ <listitem>\n+ <para>\n+ \tSet maximum number of rows to put into one COPY statement.\n+ \tThis starts new COPY command after every\n+ \t<replaceable class=\"parameter\">num_rows</replaceable>.\n+ \tThis is useful on large tables to avoid restoring whole table in \n+ \tone transaction which may consume lot of resources.\n </para>\n </listitem>\n </varlistentry>\nIndex: src/bin/pg_dump/pg_dump.c\n===================================================================\nRCS file: /opt/cvs/pgsql/pgsql/src/bin/pg_dump/pg_dump.c,v\nretrieving revision 1.236\ndiff -u -c -r1.236 pg_dump.c\n*** src/bin/pg_dump/pg_dump.c\t28 Oct 2001 06:25:58 -0000\t1.236\n--- src/bin/pg_dump/pg_dump.c\t11 Dec 2001 04:48:42 -0000\n***************\n*** 116,121 ****\n--- 116,123 ----\n bool\t\tdataOnly;\n bool\t\taclsSkip;\n \n+ int\t\t\tg_max_copy_rows = 0;\n+ \n char\t\tg_opaque_type[10];\t/* name for the 
opaque type */\n \n /* placeholders for the delimiters for comments */\n***************\n*** 151,156 ****\n--- 153,159 ----\n \t\t\t\t \" -h, --host=HOSTNAME database server host name\\n\"\n \t\t\t\t \" -i, --ignore-version proceed even when server version mismatches\\n\"\n \t\t\t\t \" pg_dump version\\n\"\n+ \t\t\t\t \" �m, --maxrows=NUM max rows in one COPY command\\n\"\n \t\" -n, --no-quotes suppress most quotes around identifiers\\n\"\n \t \" -N, --quotes enable most quotes around identifiers\\n\"\n \t\t\t\t \" -o, --oids include oids in dump\\n\"\n***************\n*** 187,192 ****\n--- 190,196 ----\n \t\t\t\t \" pg_dump version\\n\"\n \t\" -n suppress most quotes around identifiers\\n\"\n \t \" -N enable most quotes around identifiers\\n\"\n+ \t\t\t\t \" �m NUM max rows in one COPY command\\n\"\n \t\t\t\t \" -o include oids in dump\\n\"\n \t\t\t\t \" -O do not output \\\\connect commands in plain\\n\"\n \t\t\t\t \" text format\\n\"\n***************\n*** 244,249 ****\n--- 248,255 ----\n \tint\t\t\tret;\n \tbool\t\tcopydone;\n \tchar\t\tcopybuf[COPYBUFSIZ];\n+ \tint\t\t\tcur_row;\n+ \tint\t\t\tlinestart;\n \n \tif (g_verbose)\n \t\twrite_msg(NULL, \"dumping out the contents of table %s\\n\", classname);\n***************\n*** 297,302 ****\n--- 303,310 ----\n \t\telse\n \t\t{\n \t\t\tcopydone = false;\n+ \t\t\tlinestart = 1;\n+ \t\t\tcur_row = 0;\n \n \t\t\twhile (!copydone)\n \t\t\t{\n***************\n*** 310,316 ****\n--- 318,338 ----\n \t\t\t\t}\n \t\t\t\telse\n \t\t\t\t{\n+ \t\t\t\t\t/*\n+ \t\t\t\t\t * Avoid too large transactions by breaking them up.\n+ \t\t\t\t\t */\n+ \t\t\t\t\tif (g_max_copy_rows > 0 && linestart\n+ \t\t\t\t\t\t\t&& cur_row >= g_max_copy_rows)\n+ \t\t\t\t\t{\n+ \t\t\t\t\t\tcur_row = 0;\n+ \t\t\t\t\t\tarchputs(\"\\\\.\\n\", fout);\n+ \t\t\t\t\t\tarchprintf(fout, \"COPY %s %sFROM stdin;\\n\",\n+ \t\t\t\t\t\t\t\tfmtId(classname, force_quotes),\n+ \t\t\t\t\t\t\t\t(oids && hasoids) ? 
\"WITH OIDS \" : \"\");\n+ \t\t\t\t\t}\n+ \n \t\t\t\t\tarchputs(copybuf, fout);\n+ \t\t\t\t\t\n \t\t\t\t\tswitch (ret)\n \t\t\t\t\t{\n \t\t\t\t\t\tcase EOF:\n***************\n*** 318,325 ****\n--- 340,350 ----\n \t\t\t\t\t\t\t/* FALLTHROUGH */\n \t\t\t\t\t\tcase 0:\n \t\t\t\t\t\t\tarchputc('\\n', fout);\n+ \t\t\t\t\t\t\tcur_row++;\n+ \t\t\t\t\t\t\tlinestart = 1;\n \t\t\t\t\t\t\tbreak;\n \t\t\t\t\t\tcase 1:\n+ \t\t\t\t\t\t\tlinestart = 0;\n \t\t\t\t\t\t\tbreak;\n \t\t\t\t\t}\n \t\t\t\t}\n***************\n*** 696,701 ****\n--- 721,727 ----\n \t\t{\"compress\", required_argument, NULL, 'Z'},\n \t\t{\"help\", no_argument, NULL, '?'},\n \t\t{\"version\", no_argument, NULL, 'V'},\n+ \t\t{\"maxrows\", required_argument, NULL, 'm'},\n \n \t\t/*\n \t\t * the following options don't have an equivalent short option\n***************\n*** 748,756 ****\n \t}\n \n #ifdef HAVE_GETOPT_LONG\n! \twhile ((c = getopt_long(argc, argv, \"abcCdDf:F:h:inNoOp:RsS:t:uU:vWxX:zZ:V?\", long_options, &optindex)) != -1)\n #else\n! \twhile ((c = getopt(argc, argv, \"abcCdDf:F:h:inNoOp:RsS:t:uU:vWxX:zZ:V?-\")) != -1)\n #endif\n \n \t{\n--- 774,782 ----\n \t}\n \n #ifdef HAVE_GETOPT_LONG\n! \twhile ((c = getopt_long(argc, argv, \"abcCdDf:F:h:im:nNoOp:RsS:t:uU:vWxX:zZ:V?\", long_options, &optindex)) != -1)\n #else\n! 
\twhile ((c = getopt(argc, argv, \"abcCdDf:F:h:im:nNoOp:RsS:t:uU:vWxX:zZ:V?-\")) != -1)\n #endif\n \n \t{\n***************\n*** 798,803 ****\n--- 824,833 ----\n \n \t\t\tcase 'i':\t\t\t/* ignore database version mismatch */\n \t\t\t\tignore_version = true;\n+ \t\t\t\tbreak;\n+ \n+ \t\t\tcase 'm':\n+ \t\t\t\tg_max_copy_rows = atoi(optarg);\n \t\t\t\tbreak;\n \n \t\t\tcase 'n':\t\t\t/* Do not force double-quotes on\n", "msg_date": "Tue, 11 Dec 2001 17:10:05 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Restoring large tables with COPY" }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> Maybe I am missing something obvious, but I am unable to load\n> larger tables (~300k rows) with COPY command that pg_dump by\n> default produces.\n\nI'd like to find out what the problem is, rather than work around it\nwith such an ugly hack.\n\n> 1) Too few WAL files.\n> - well, increase the wal_files (eg to 32),\n\nWhat PG version are you running? 7.1.3 or later should not have a\nproblem with WAL file growth.\n\n> 2) Machine runs out of swap, PostgreSQL seems to keep whole TX\n> in memory.\n\nThat should not happen either. 
Could we see the full schema of the\ntable you are having trouble with?\n\n> Or shortly: during pg_restore the resource requirements are\n> order of magnitude higher than during pg_dump,\n\nWe found some client-side memory leaks in pg_restore recently; is that\nwhat you're talking about?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2001 10:55:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Restoring large tables with COPY " }, { "msg_contents": "On Tue, Dec 11, 2001 at 10:55:30AM -0500, Tom Lane wrote:\n> Marko Kreen <marko@l-t.ee> writes:\n> > Maybe I am missing something obvious, but I am unable to load\n> > larger tables (~300k rows) with COPY command that pg_dump by\n> > default produces.\n> \n> I'd like to find out what the problem is, rather than work around it\n> with such an ugly hack.\n> \n> > 1) Too few WAL files.\n> > - well, increase the wal_files (eg to 32),\n> \n> What PG version are you running? 7.1.3 or later should not have a\n> problem with WAL file growth.\n\n7.1.3\n\n> > 2) Machine runs out of swap, PostgreSQL seems to keep whole TX\n> > in memory.\n> \n> That should not happen either. Could we see the full schema of the\n> table you are having trouble with?\n\nWell, there are several such tables, I will reproduce it,\nthen send the schema. I guess its the first one, but maybe\nnot. postgres gets killed by Linux OOM handler, so I cant\ntell by messages, which one it was. (hmm, i should probably\nrun it as psql -q -a > log).\n\n> > Or shortly: during pg_restore the resource requirements are\n> > order of magnitude higher than during pg_dump,\n> \n> We found some client-side memory leaks in pg_restore recently; is that\n> what you're talking about?\n\nNo, its the postgres process thats memory-hungry, it happens\nwith \"psql < db.dump\" too.\n\nIf I run a dump thats produced with \"pg_dump -m 5000\" then\nit loops between 20M and 10M is much better. 
(the 10M\ndepends on shared_buffers I guess).\n\n-- \nmarko\n\n", "msg_date": "Tue, 11 Dec 2001 18:19:36 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: Restoring large tables with COPY" }, { "msg_contents": "On Tue, Dec 11, 2001 at 10:55:30AM -0500, Tom Lane wrote:\n> Marko Kreen <marko@l-t.ee> writes:\n> > Maybe I am missing something obvious, but I am unable to load\n> > larger tables (~300k rows) with COPY command that pg_dump by\n> > default produces.\n> \n> I'd like to find out what the problem is, rather than work around it\n> with such an ugly hack.\n\nHmm, the problem was more 'interesting' than I thought.\nBasically:\n\n1) pg_dump of 7.1.3 dumps constraints and primary keys\n with table defs in this case, so they are run during COPY.\n2) I have some tricky CHECK contraints.\n\nLook at the attached Python script, it reproduces the problem.\nSorry, cannot test on 7.2 at the moment.\n\n-- \nmarko", "msg_date": "Tue, 11 Dec 2001 19:25:51 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: Restoring large tables with COPY" }, { "msg_contents": "----- Original Message -----\nFrom: Marko Kreen <marko@l-t.ee>\nSent: Tuesday, December 11, 2001 10:10 AM\n\nIf this thing ever gets through, shouldn't this\n\n> /* placeholders for the delimiters for comments */\n> ***************\n> *** 151,156 ****\n> --- 153,159 ----\n> \" -h, --host=HOSTNAME database server host name\\n\"\n> \" -i, --ignore-version proceed even when server version mismatches\\n\"\n> \" pg_dump version\\n\"\n> + \" �m, --maxrows=NUM max rows in one COPY command\\n\"\n\nsay '-m'\n\n> + \" �m NUM max rows in one COPY command\\n\"\n\nand this one too?\n\n\n", "msg_date": "Tue, 11 Dec 2001 12:29:07 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Restoring large tables with COPY" }, { "msg_contents": "On Tue, Dec 11, 2001 at 12:29:07PM -0500, Serguei Mokhov 
wrote:\n> If this thing ever gets through, shouldn't this\n> \n> > /* placeholders for the delimiters for comments */\n> > ***************\n> > *** 151,156 ****\n> > --- 153,159 ----\n> > \" -h, --host=HOSTNAME database server host name\\n\"\n> > \" -i, --ignore-version proceed even when server version mismatches\\n\"\n> > \" pg_dump version\\n\"\n> > + \" �m, --maxrows=NUM max rows in one COPY command\\n\"\n> \n> say '-m'\n> \n> > + \" �m NUM max rows in one COPY command\\n\"\n> \n> and this one too?\n\nOne is for systems that have 'getopt_long', second for\nshort-getopt-only ones. The '-h, --host=HOSTNAME' means\nthat '-h HOSTNAME' and '--host=HOSTNAME' are same.\n\n-- \nmarko\n\n", "msg_date": "Tue, 11 Dec 2001 19:38:27 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: Restoring large tables with COPY" }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> Look at the attached Python script, it reproduces the problem.\n\nHmm. You'd probably have much better luck if you rewrote the check_code\nfunction in plpgsql: that should eliminate the memory-leak problem, and\nalso speed things up because plpgsql knows about caching query plans\nacross function calls. IIRC, sql functions don't do that.\n\nThe memory leakage is definitely a bug, but not one that is going to get\nfixed for 7.2. 
It'll take some nontrivial work on the SQL function\nexecutor...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2001 13:06:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Restoring large tables with COPY " }, { "msg_contents": "----- Original Message -----\nFrom: Marko Kreen <marko@l-t.ee>\nSent: Tuesday, December 11, 2001 12:38 PM\n\n> On Tue, Dec 11, 2001 at 12:29:07PM -0500, Serguei Mokhov wrote:\n> > If this thing ever gets through, shouldn't this\n> >\n> > > /* placeholders for the delimiters for comments */\n> > > ***************\n> > > *** 151,156 ****\n> > > --- 153,159 ----\n> > > \" -h, --host=HOSTNAME database server host name\\n\"\n> > > \" -i, --ignore-version proceed even when server version mismatches\\n\"\n> > > \" pg_dump version\\n\"\n> > > + \" �m, --maxrows=NUM max rows in one COPY command\\n\"\n> >\n> > say '-m'\n> >\n> > > + \" �m NUM max rows in one COPY command\\n\"\n> >\n> > and this one too?\n>\n> One is for systems that have 'getopt_long', second for\n> short-getopt-only ones. The '-h, --host=HOSTNAME' means\n> that '-h HOSTNAME' and '--host=HOSTNAME' are same.\n\nI know, I know. I just was trying to point out a typo :)\nYou forgot to add '-' in the messages before 'm'.\n\n\n\n", "msg_date": "Tue, 11 Dec 2001 13:37:03 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Restoring large tables with COPY" }, { "msg_contents": "> > > > + \" �m, --maxrows=NUM max rows in one COPY command\\n\"\n> > >\n> > > say '-m'\n\n> You forgot to add '-' in the messages before 'm'.\n\nAh. 
On my screen it looks a lot like a '-', but od shows 0xAD...\nWell, that's VIM's digraph feature in action ;)\n\n-- \nmarko\n\n", "msg_date": "Tue, 11 Dec 2001 21:37:58 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: Restoring large tables with COPY" }, { "msg_contents": "On Tue, Dec 11, 2001 at 01:06:13PM -0500, Tom Lane wrote:\n> Marko Kreen <marko@l-t.ee> writes:\n> > Look at the attached Python script, it reproduces the problem.\n> \n> Hmm. You'd probably have much better luck if you rewrote the check_code\n> function in plpgsql: that should eliminate the memory-leak problem, and\n> also speed things up because plpgsql knows about caching query plans\n> across function calls. IIRC, sql functions don't do that.\n\nAnd I thought that the 'sql' is the more lightweight approach...\n\nThanks, now it seems to work.\n\n-- \nmarko\n\n", "msg_date": "Tue, 11 Dec 2001 21:39:48 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: Restoring large tables with COPY" } ]
[ { "msg_contents": "If the recent problem reports from Kenoyer and Eustace are the same\nthing you saw, then the problem must exist in 7.1 as well as current\nsources.\n\nThe best idea I've had so far is that there's something going wrong in\nold-style VACUUM, probably in the chain-moving code which is horridly\ncomplex and has had several bugs found before. (Eustace's\nheavily-updated single-row table would present a very long chain, so\nthat seems to fit.) But this does not explain your problem seen with\n7.2 sources, unless you were using VACUUM FULL. Were you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2001 11:17:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Duplicate-rows bug reports" } ]
[ { "msg_contents": "\nUnless I hear any out-cries against it, I'm going to wrap up beta4 first\nthing Tuesday morning, for an announce on Tuesday evening ... is anyone\nsitting on anything that they want me to wait a day or two for this?\n\n", "msg_date": "Tue, 11 Dec 2001 21:26:24 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Beta4 ..." }, { "msg_contents": "On Tue, 11 Dec 2001, Marc G. Fournier wrote:\n\n>\n> Unless I hear any out-cries against it, I'm going to wrap up beta4 first\n> thing Tuesday morning, for an announce on Tuesday evening ... is anyone\n> sitting on anything that they want me to wait a day or two for this?\n\nIt's already Tuesday nite, are you giving a one week warning or did\nyour calender get smudged? :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Dec 2001 22:14:13 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Beta4 ..." }, { "msg_contents": "\nI need a holiday and one helluva long nap :(\n\nOn Tue, 11 Dec 2001, Vince Vielhaber wrote:\n\n> On Tue, 11 Dec 2001, Marc G. Fournier wrote:\n>\n> >\n> > Unless I hear any out-cries against it, I'm going to wrap up beta4 first\n> > thing Tuesday morning, for an announce on Tuesday evening ... is anyone\n> > sitting on anything that they want me to wait a day or two for this?\n>\n> It's already Tuesday nite, are you giving a one week warning or did\n> your calender get smudged? 
:)\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\n", "msg_date": "Wed, 12 Dec 2001 08:20:42 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Beta4 ..." }, { "msg_contents": "On Wed, 12 Dec 2001, Marc G. Fournier wrote:\n\n>\n> I need a holiday and one helluva long nap :(\n\nOk, tell ya what. You can have both at the end of next\nweek. In the mean time, did you already roll this one?\n\n>\n> On Tue, 11 Dec 2001, Vince Vielhaber wrote:\n>\n> > On Tue, 11 Dec 2001, Marc G. Fournier wrote:\n> >\n> > >\n> > > Unless I hear any out-cries against it, I'm going to wrap up beta4 first\n> > > thing Tuesday morning, for an announce on Tuesday evening ... is anyone\n> > > sitting on anything that they want me to wait a day or two for this?\n> >\n> > It's already Tuesday nite, are you giving a one week warning or did\n> > your calender get smudged? 
:)\n> >\n> > Vince.\n> > --\n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> >\n> >\n> >\n> >\n>\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 12 Dec 2001 08:39:15 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Beta4 ..." }, { "msg_contents": "On Wed, 12 Dec 2001, Vince Vielhaber wrote:\n\n> On Wed, 12 Dec 2001, Marc G. Fournier wrote:\n>\n> >\n> > I need a holiday and one helluva long nap :(\n>\n> Ok, tell ya what. You can have both at the end of next\n> week. In the mean time, did you already roll this one?\n\nJust did it\n\n\n", "msg_date": "Wed, 12 Dec 2001 09:28:34 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Beta4 ..." }, { "msg_contents": "Beta 4 no longer installs outside the final destination (used for\npackaging). 
The python parts tries to install into /usr/lib/python1.5,\nand no longer into DESTDIR/usr/lib/python1.5\n\nBeta 3 built fine, no other changes were made.\n\n\nmake[4]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.2b4/src/interfaces/perl5'\nmake[3]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.2b4/src/interfaces/perl5'\nmake[3]: Entering directory `/usr/src/redhat/BUILD/postgresql-7.2b4/src/interfaces/python'\nmake -C ../../../src/interfaces/libpq all\nmake[4]: Entering directory `/usr/src/redhat/BUILD/postgresql-7.2b4/src/interfaces/libpq'\nmake[4]: Nothing to be done for `all'.\nmake[4]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.2b4/src/interfaces/libpq'\nmkdir /var/tmp/postgresql-7.2b4-root/usr/lib/python1.5\nmkdir /var/tmp/postgresql-7.2b4-root/usr/lib/python1.5/site-packages\n/bin/sh ../../../config/install-sh -c -m 755 lib_pgmodule.so.0.0 /var/tmp/postgresql-7.2b4-root/usr/lib/python1.5/site-packages/_pgmodule.so\n/bin/sh ../../../config/install-sh -c -m 644 pg.py /usr/lib/python1.5/site-packages\ncp: cannot create regular file `/usr/lib/python1.5/site-packages/#inst.31937#': Ikke tilgang\n/bin/sh ../../../config/install-sh -c -m 644 pgdb.py /usr/lib/python1.5/site-packages\ncp: cannot create regular file `/usr/lib/python1.5/site-packages/#inst.31944#': Ikke tilgang\nmake[3]: *** [install] Error 1\nmake[3]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.2b4/src/interfaces/python'\nmake[2]: *** [install] Error 2\nmake[2]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.2b4/src/interfaces'\nmake[1]: *** [install] Error 2\nmake[1]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.2b4/src'\nmake: *** [install] Error 2\n\n\n(\"Ikke tilgang\" means \"Access denied\")\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "13 Dec 2001 12:21:16 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Beta 4 - build regression" }, { "msg_contents": "Trond 
Eivind Glomsrød writes:\n\n> Beta 4 no longer installs outside the final destination (used for\n> packaging). The python parts tries to install into /usr/lib/python1.5,\n> and no longer into DESTDIR/usr/lib/python1.5\n>\n> Beta 3 built fine, no other changes were made.\n\nI have reverted the last change to the python-related makefile.\nApparently, D'Arcy checked in a feature-changing patch during beta despite\nexplicit objections.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 13 Dec 2001 19:41:15 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Beta 4 - build regression" } ]
[ { "msg_contents": "Is there any problem at all with just twiddling the 'attnotnull' field in\nthe pg_attribute table to make a column NULL where before it was NOT NULL?\n\nChris\n\n", "msg_date": "Wed, 12 Dec 2001 14:46:02 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "not null columns" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: mlw [mailto:markw@mohawksoft.com] \n> Sent: 11 December 2001 12:14\n> Cc: PostgreSQL-development\n> Subject: Re: Explicit configuration file\n> \n> \n> \n> Peter Eisentraut wrote:\n> > \n> > mlw writes:\n> > \n> > > > That could be mildly useful, although symlinks make \n> this already \n> > > > possible, and a bit clearer, IMHO.\n> > >\n> > > On systems which support symlinks, yes.\n> > \n> > All systems that are able to run PostgreSQL support symlinks.\n> > \n> > Really.\n> \n> Windows does not supprt symlinks.\n\nNo, but Cygwin does and that's the environment that PostgreSQL runs in.\n\nOther than that, I agree entirely with mlw (sorry Peter :-) ). \n\nIf /etc/postgresql.conf included settings on the hba.conf file, ident.conf\nfile & data directory (and anything else I've forgotten) to use then a new\nuser could get up and running far quicker, and control every aspect of the\nserver from /etc/postgresql.conf without mucking about with symlinks (which\ncan be very confusing if working on a (convoluted) system setup by someone\nelse and *not* otherwise documented) OR (and I think this is the important\nbit for a new user) having to modify their environment.\n\nMore complex/multiple instance systems could easily use a -C, -cf -f or\nwhatever option to specify an alternate config file.\n\nJust my 2 pennies worth...\n\nRegards, Dave.\n", "msg_date": "Wed, 12 Dec 2001 08:43:10 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" } ]
[ { "msg_contents": "Lamar wrote:\n> > I would like postgresql.conf to not \n> > get overwritten if I have to re-initdb.\n\nThis is something I would also like. \n\nBruce wrote:\n> I wonder if we should go one step further. Should we be specifying the\n> config file on the command line _rather_ than the data directory. We\n> could then specify the data directory location in the config file. That\n> seems like the direction we should be headed in, though I am not sure it\n> is worth the added headache of the switch.\n\nYes, I vote for a -C switch for postmaster (postmaster -C /etc/postgresql.conf)\nand inclusion of PGDATA in postgresql.conf .\nMaybe even a new envvar PGCONFIG that a la long replaces PGDATA.\n\nImho PGDATA does not really have a future anyway with tablespaces coming,\nand multiple directories where pgdata lives (root on hdisk1, pg_xlog on \nhdisk2 ...).\n\nAndreas\n", "msg_date": "Wed, 12 Dec 2001 10:44:00 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> Imho PGDATA does not really have a future anyway with tablespaces coming,\n> and multiple directories where pgdata lives (root on hdisk1, pg_xlog on\n> hdisk2 ...).\n\nProbably true. The question is where the table space setup is going to be\nconfigured. (Some of it will be done within the system catalogs, but the\nlog directories, etc. need to be configured externally.) 
If the answer is\npostgresql.conf then I don't think that's a good idea because it would\nmake it impossible to use the same postgresql.conf for multiple servers.\nI guess that's impossible already because the port is configured there,\nbut it *should* be possible, because else why would you want to have it in\na global location.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 12 Dec 2001 23:26:33 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" } ]
[ { "msg_contents": "\nWon't announce it until later tonight, to give mirrors a chance to pull\nit, but I just packaged it up and created a README.ChangeLog file in the\nftp section ...\n\nLet me know if there are any glaring errors ...\n\n", "msg_date": "Wed, 12 Dec 2001 08:50:41 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.2b4 packaged ..." } ]
[ { "msg_contents": "I'm having a very scary problem.\n\nFirst, here's my system: smp dual PIII800 512MB memory running redhat 6.2\nkernel 2.2.18\nPostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n\nIn a nutshell, my primary key index got a NOTICE to recreate when the\ndatabase was vacuumed. I dropped the index and tried to recreate it. I get\na key violation when i try to do this. I find there are some 200 rows\nwith the exact same developer_id and oid.\n\nThis is a very serious problem. 1) the unique index should have prevented\nthis from happening. 2) i looked at my code and there is absolutely no way\nmy code inserted multiple rows with the same id.\n\nThis leads me to believe that there is a big problem with postgres; possibly\nin vacuum. This has also undermined my confidence in postgres data\nintegrity that such a basic concept can be violated.\n\nI want to help you guys find this problem because I have a lot invested in\npostgres and overall have been very happy with it. I've included\ninformation that i think might be useful. If there is more that i can\nsupply, let me know and I will provide it if I can.\n\nMy system runs a vacuum every day at 4AMCST. I've checked all of my\napplication logs for the day before the NOTICE appeared and the day that it\nappeared. I see no SQL errors logged (and all SQL errors are logged by my\napplications) for either day. I've checked the postmaster logs for both\ndays and don't see any ERROR's logged. 
There are some NOTICES the day\nbefore that i don't know what they mean, but don't look good.\n\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19833 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19839 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19835 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19834 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19837 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\n\nhowever, on further inspection, the error above appears fairly frequently.\nIn fact, the first occurrence was over 6 weeks before the corruption.\n\nHere is the DEBUG notices for vacuum the day before the corruption:\n\nDEBUG: --Relation developer--\nDEBUG: Pages 514: Changed 29, reaped 39, Empty 0, New 0; Tup 47971: Vac 52,\nKeep/VTL 0/0, Crash 0, UnUsed 89, MinLen 65, MaxLen 133; Re-using:\nFree/Avail. Space 6812/1768; EndEmpty/Avail. Pages 0/10. CPU 0.04s/0.00u\nsec.\nDEBUG: Index developer_primary_key: Pages 120; Tuples 47971: Deleted 52.\nCPU 0.00s/0.09u sec.\nDEBUG: Index developer_recent_mod_key: Pages 119; Tuples 47971: Deleted 52.\nCPU 0.00s/0.05u sec.\nDEBUG: Index developer_approved: Pages 121; Tuples 47971: Deleted 52. CPU\n0.03s/0.06u sec.\nDEBUG: Index developer_search_idx: Pages 204; Tuples 47971: Deleted 52. CPU\n0.00s/0.04u sec.\nDEBUG: Rel developer: Pages: 514 --> 514; Tuple(s) moved: 15. 
CPU\n0.01s/0.02u sec.\nDEBUG: Index developer_primary_key: Pages 120; Tuples 47971: Deleted 15.\nCPU 0.00s/0.05u sec.\nDEBUG: Index developer_recent_mod_key: Pages 119; Tuples 47971: Deleted 15.\nCPU 0.00s/0.04u sec.\nDEBUG: Index developer_approved: Pages 121; Tuples 47971: Deleted 15. CPU\n0.00s/0.05u sec.\nDEBUG: Index developer_search_idx: Pages 204; Tuples 47971: Deleted 15. CPU\n0.00s/0.05u sec.\n\nHere is the vaccum DEBUG messages the day of the corruption:\n\nDEBUG: --Relation developer--\nDEBUG: Pages 515: Changed 25, reaped 39, Empty 0, New 0; Tup 48038: Vac 53,\nKeep/VTL 0/0, Crash 0, UnUsed 89, MinLen 65, MaxLen 133; Re-using:\nFree/Avail. Space 9144/3348; EndEmpty/Avail. Pages 0/9. CPU 0.02s/0.01u sec.\nDEBUG: Index developer_primary_key: Pages 120; Tuples 48023: Deleted 53.\nCPU 0.00s/0.04u sec.\nNOTICE: Index developer_primary_key: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_recent_mod_key: Pages 119; Tuples 48023: Deleted 53.\nCPU 0.00s/0.04u sec.\nNOTICE: Index developer_recent_mod_key: NUMBER OF INDEX' TUPLES (48023) IS\nNOT THE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_approved: Pages 121; Tuples 48023: Deleted 53. CPU\n0.00s/0.04u sec.\nNOTICE: Index developer_approved: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_search_idx: Pages 204; Tuples 48023: Deleted 53. CPU\n0.00s/0.04u sec.\nNOTICE: Index developer_search_idx: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Rel developer: Pages: 515 --> 515; Tuple(s) moved: 34. 
CPU\n0.00s/0.03u sec.\nDEBUG: Index developer_primary_key: Pages 120; Tuples 48023: Deleted 34.\nCPU 0.00s/0.04u sec.\nNOTICE: Index developer_primary_key: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_recent_mod_key: Pages 119; Tuples 48023: Deleted 34.\nCPU 0.00s/0.04u sec.\nNOTICE: Index developer_recent_mod_key: NUMBER OF INDEX' TUPLES (48023) IS\nNOT THE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_approved: Pages 121; Tuples 48023: Deleted 34. CPU\n0.00s/0.03u sec.\nNOTICE: Index developer_approved: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_search_idx: Pages 204; Tuples 48023: Deleted 34. CPU\n0.00s/0.06u sec.\nNOTICE: Index developer_search_idx: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\n\nI also appear to be getting this quite often:\n\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\n\n\nHere is output from me trying to re-create an index.\n\nbasement=# drop index developer_primary_key;\nDROP\nbasement=# create unique index developer_primary_key on\ndeveloper(developer_id);ERROR: Cannot create unique index. 
Table contains\nnon-unique values\nbasement=# select developer_id,count(*) from developer group by developer_id\nhaving count(*) > 1;\n developer_id | count\n--------------+-------\n 11107 | 2\n 18493 | 2\n 50983 | 2\n 50984 | 2\n 50985 | 2\n 50986 | 2\n 50987 | 2\n 50988 | 2\n 50989 | 2\n 50990 | 2\n 50991 | 2\n 50992 | 2\n 50993 | 2\n 50994 | 2\n 50995 | 2\n 50996 | 2\n 50997 | 2\n 51020 | 2\n 51021 | 2\n 51022 | 2\n 51023 | 2\n 51024 | 2\n 51025 | 2\n 51026 | 2\n 51029 | 2\n 51030 | 2\n 51031 | 2\n 51032 | 2\n 51033 | 2\n 51034 | 2\n 51035 | 2\n 51036 | 2\n 51037 | 2\n 51038 | 2\n 51039 | 2\n 51040 | 2\n 51041 | 2\n 51042 | 2\n 51043 | 3\n 51044 | 3\n 51045 | 3\n 51046 | 3\n 51047 | 3\n 51048 | 3\n 51049 | 3\n 51050 | 3\n 51051 | 3\n 51052 | 3\n 51053 | 3\n 51054 | 3\n 51055 | 3\n 51056 | 3\n 51057 | 3\n 51058 | 3\n 51059 | 3\n 51060 | 3\n 51061 | 3\n 51062 | 2\n 51063 | 2\n 51064 | 2\n 51065 | 2\n 51066 | 2\n 51067 | 2\n 51068 | 2\n 51069 | 2\n 51070 | 2\n 51071 | 2\n 51072 | 2\n 51073 | 2\n 51074 | 2\n 51075 | 2\n 51076 | 2\n 51152 | 2\n 51614 | 2\n 51615 | 2\n 51616 | 2\n 51617 | 2\n 51618 | 2\n 51619 | 2\n 51620 | 2\n 51621 | 2\n 51622 | 2\n 51623 | 2\n 51678 | 2\n 51680 | 2\n 51681 | 2\n 51682 | 2\n 51683 | 2\n 51768 | 2\n 51862 | 2\n 51863 | 2\n 51864 | 2\n 51950 | 2\n 52094 | 2\n 52095 | 2\n 52096 | 2\n 52097 | 2\n 52098 | 2\n 52099 | 2\n 52100 | 2\n 52101 | 2\n 52103 | 2\n 52104 | 2\n 52105 | 2\n 52106 | 2\n 52107 | 2\n 52108 | 2\n 52109 | 2\n 52110 | 2\n 52111 | 2\n 52112 | 2\n 52113 | 2\n 52114 | 2\n 52115 | 2\n 52116 | 2\n 52117 | 2\n 52118 | 2\n 52119 | 2\n 52120 | 2\n 52121 | 2\n 52122 | 2\n 52123 | 2\n 52124 | 2\n 52125 | 2\n 52126 | 2\n 52127 | 2\n 52128 | 2\n 52129 | 2\n 52130 | 2\n 52131 | 2\n 52132 | 2\n 52133 | 2\n 52134 | 2\n 52135 | 2\n 52136 | 2\n 52137 | 2\n 52167 | 2\n 52168 | 2\n 52169 | 2\n 52170 | 2\n 52171 | 2\n 52172 | 2\n 52173 | 2\n 52174 | 2\n 52175 | 2\n 52180 | 2\n 52181 | 2\n 52182 | 2\n 52222 | 2\n 52223 | 2\n 52224 | 2\n 
52225 | 2\n 52226 | 2\n 52227 | 2\n 52228 | 2\n 52229 | 2\n 52230 | 2\n 52232 | 2\n 52233 | 2\n 52234 | 2\n 52235 | 2\n 52236 | 2\n 52237 | 2\n 52238 | 2\n 52239 | 2\n 52240 | 2\n 52241 | 2\n 52242 | 2\n 52243 | 2\n 52244 | 2\n 52245 | 2\n 52246 | 2\n 52247 | 2\n 52248 | 2\n 52249 | 2\n 52250 | 2\n 52251 | 2\n 52465 | 2\n 52466 | 2\n 52467 | 2\n 52468 | 2\n 52469 | 2\n 52470 | 2\n 52471 | 2\n 52472 | 2\n 52473 | 2\n 52474 | 2\n 52475 | 2\n 52476 | 2\n 52477 | 2\n 52478 | 2\n 52479 | 2\n 52480 | 2\n 52481 | 2\n 52482 | 2\n 52483 | 2\n 52484 | 2\n 52485 | 2\n 52486 | 2\n 52487 | 2\n(200 rows)\n\nbasement=# select oid,developer_id from developer where developer_id =52487;\n oid | developer_id\n-----------+--------------\n 401440180 | 52487\n 401440180 | 52487\n\n\n\n", "msg_date": "Wed, 12 Dec 2001 08:10:07 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "ACK table corrupted, unique index violated." }, { "msg_contents": "\"Brian Hirt\" <bhirt@mobygames.com> writes:\n> In a nutshell, my primary key index got a NOTICE to recreate when the\n> database was vacuumed. I dropped the index and tried to recreate it. I get\n> a key violation when i try to do this. I find there are some 200 rows\n> with the exact same developer_id and oid.\n\nYou're the third person to have reported something like this, so there's\nsomething strange going on. Can you give access to your system to\nsomeone who can poke into it (probably me or Vadim)?\n\n> There are some NOTICES the day\n> before that i don't know what they mean, but don't look good.\n\n> NOTICE: Cannot rename init file\n> /moby/pgsql/base/156130/pg_internal.init.19833 to\n> /moby/pgsql/base/156130/pg_internal.init: No such file or directory\n\nThese seem extremely bizarre.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Dec 2001 13:01:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ACK table corrupted, unique index violated. 
" }, { "msg_contents": "Tom,\n\nI'm a little uncomfortable about giving ssh access to our box. We have a\nlot of sensitive information in the database, and we would be violating our\nprivacy policy by giving someone access. If there is some way I could give\nyou any information, or help you out that would be better. I ended up\nshutting down postgres and copying the pgdata directory somewhere else and\nre-creating the database -- so i do have a copy of the corrupted database.\n\nI've been doing a little investigating and i might have a possible lead.\nThe two tables that were corrupted recently had new indexes put on them that\nare based on a plpgsql function. Basically in the form \"create index blah\non table(myfunction(blah_id))\" These are the only two tables in my system\nthat have an index using a plpgsql function. Both tables became corrupt on\nthe same day, and the corruption happened the night that i added the\nindexes. I have no imperical evidence to support that this is the cause,\nbut it seems possible.\n\nOne other note, even after recreating the database, I'm getting NOTICE\n\"InvalidateShardeInvalid\" and \"RegisterSharedInvalid: SI buffer overflow\".\nI never used to get them and now I'm getting tons of them. Should this\nconcern me? I don't understand the implications.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Brian Hirt\" <bhirt@mobygames.com>\nCc: \"Postgres Hackers\" <pgsql-hackers@postgresql.org>;\n<pgsql-general@postgresql.org>; \"Brian A Hirt\" <bhirt@berkhirt.com>\nSent: Wednesday, December 12, 2001 11:01 AM\nSubject: Re: [GENERAL] ACK table corrupted, unique index violated.\n\n\n> \"Brian Hirt\" <bhirt@mobygames.com> writes:\n> > In a nutshell, my primary key index got a NOTICE to recreate when the\n> > database was vacuumed. I dropped the index and tried to recreate it. I\nget\n> > a key violation when i try to do this. 
I find there are some 200 rows\n> > with the exact same developer_id and oid.\n>\n> You're the third person to have reported something like this, so there's\n> something strange going on. Can you give access to your system to\n> someone who can poke into it (probably me or Vadim)?\n>\n> > There are some NOTICES the day\n> > before that i don't know what they mean, but don't look good.\n>\n> > NOTICE: Cannot rename init file\n> > /moby/pgsql/base/156130/pg_internal.init.19833 to\n> > /moby/pgsql/base/156130/pg_internal.init: No such file or directory\n>\n> These seem extremely bizarre.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Wed, 12 Dec 2001 13:38:24 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "Re: ACK table corrupted, unique index violated. " }, { "msg_contents": "Brian,\n\n> I've been doing a little investigating and i might have a possible lead.\n> The two tables that were corrupted recently had new indexes put on them that\n> are based on a plpgsql function. Basically in the form \"create index blah\n> on table(myfunction(blah_id))\" These are the only two tables in my system\n> that have an index using a plpgsql function. Both tables became corrupt on\n> the same day, and the corruption happened the night that i added the\n> indexes. I have no imperical evidence to support that this is the cause,\n> but it seems possible.\n\nI've actually seen a very similiar problem, but in a controlled\n'situation'. That is, I was intentionally modifying the\nsource. 
Unfortunately, I turfed code which produced these kinds of errors\non vacuum but it had to do with recreating heaps and not flushing old heap\ndata from the cache.\n\nPerhaps you could include the plpgsql code so that other people could\nrecreate?\n\n> \n> One other note, even after recreating the database, I'm getting NOTICE\n> \"InvalidateSharedInvalid\" and \"RegisterSharedInvalid: SI buffer overflow\".\n> I never used to get them and now I'm getting tons of them. Should this\n> concern me? I don't understand the implications.\n\nNo problem here. These messages really should be suppressed -- they seem\nlike much more of a problem than they are.\n\nGavin\n\n\n", "msg_date": "Thu, 13 Dec 2001 11:30:10 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] ACK table corrupted, unique index violated." } ]
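The cleanup pattern this thread converges on — find key values that appear more than once, then delete all but one copy — can be sketched against any SQL engine. Below is a minimal illustration using sqlite3; the table name echoes the thread but the rows and the `rowid_` column are invented for the example (the real cleanup in the thread was done in psql using oids):

```python
import sqlite3

# Illustrative only: table contents are made up; the query shape mirrors
# the "group by ... having count(*) > 1" duplicate hunt from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("create table developer_aka (rowid_ int primary key, developer_aka_id int, name text)")
rows = [(1, 9789, "Chris Smith"), (2, 9789, "Chris Smith"), (3, 10025, "Mike Glosecki")]
conn.executemany("insert into developer_aka values (?, ?, ?)", rows)

# Find ids that appear more than once.
dupes = conn.execute(
    "select developer_aka_id, count(*) from developer_aka "
    "group by developer_aka_id having count(*) > 1"
).fetchall()
print(dupes)  # [(9789, 2)]

# Delete all but the lowest-rowid copy of each duplicated id.
conn.execute(
    "delete from developer_aka where rowid_ not in "
    "(select min(rowid_) from developer_aka group by developer_aka_id)"
)
remaining = conn.execute("select count(*) from developer_aka").fetchall()
print(remaining)  # [(2,)]
```

After the delete, a unique index on developer_aka_id can be created without a key violation, which is exactly the sequence attempted later in the archive.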
[ { "msg_contents": "I hope this message wasn't posted multiple times, but i was having some\ntrouble sending it the first time......\n\nI'm having a very scary problem.\n\nFirst, here's my system: smp dual PIII800 512MB memory running redhat 6.2\nkernel 2.2.18\nPostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n\nIn a nutshell, my primary key index got a NOTICE to recreate when the\ndatabase was vacuumed. I dropped the index and tried to recreate it. I get\na key violation when i try to do this. I find there are some 200 rows\nwith the exact same developer_id and oid.\n\nThis is a very serious problem. 1) the unique index should have prevented\nthis from happening. 2) i looked at my code and there is absolutely no way\nmy code inserted multiple rows with the same id.\n\nThis leads me to believe that there is a big problem with postgres; possibly\nin vacuum. This has also undermined my confidence in postgres data\nintegrity that such a basic concept can be violated.\n\nI want to help you guys find this problem because I have a lot invested in\npostgres and overall have been very happy with it. I've included\ninformation that i think might be useful. If there is more that i can\nsupply, let me know and I will provide it if I can.\n\nMy system runs a vacuum every day at 4AM CST. I've checked all of my\napplication logs for the day before the NOTICE appeared and the day that it\nappeared. I see no SQL errors logged (and all SQL errors are logged by my\napplications) for either day. I've checked the postmaster logs for both\ndays and don't see any ERROR's logged. 
There are some NOTICES the day\nbefore that i don't know what they mean, but don't look good.\n\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19833 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19839 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19835 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19834 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19837 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or direc\n\nhowever, on further inspection, the error above appears fairly frequently.\nIn fact, the first occurance was over 6 weeks before the corruption.\n\nHere is the DEBUG notices for vacuum the day before the corruption:\n\nDEBUG: --Relation developer--\nDEBUG: Pages 514: Changed 29, reaped 39, Empty 0, New 0; Tup 47971: Vac 52,\nKeep/VTL 0/0, Crash 0, UnUsed 89, MinLen 65, MaxLen 133; Re-using:\nFree/Avail. Space 6812/1768; EndEmpty/Avail. Pages 0/10. CPU 0.04s/0.00u\nsec.\nDEBUG: Index developer_primary_key: Pages 120; Tuples 47971: Deleted 52.\nCPU 0.00s/0.09u sec.\nDEBUG: Index developer_recent_mod_key: Pages 119; Tuples 47971: Deleted 52.\nCPU 0.00s/0.05u sec.\nDEBUG: Index developer_approved: Pages 121; Tuples 47971: Deleted 52. CPU\n0.03s/0.06u sec.\nDEBUG: Index developer_search_idx: Pages 204; Tuples 47971: Deleted 52. CPU\n0.00s/0.04u sec.\nDEBUG: Rel developer: Pages: 514 --> 514; Tuple(s) moved: 15. 
CPU\n0.01s/0.02u sec.\nDEBUG: Index developer_primary_key: Pages 120; Tuples 47971: Deleted 15.\nCPU 0.00s/0.05u sec.\nDEBUG: Index developer_recent_mod_key: Pages 119; Tuples 47971: Deleted 15.\nCPU 0.00s/0.04u sec.\nDEBUG: Index developer_approved: Pages 121; Tuples 47971: Deleted 15. CPU\n0.00s/0.05u sec.\nDEBUG: Index developer_search_idx: Pages 204; Tuples 47971: Deleted 15. CPU\n0.00s/0.05u sec.\n\nHere is the vaccum DEBUG messages the day of the corruption:\n\nDEBUG: --Relation developer--\nDEBUG: Pages 515: Changed 25, reaped 39, Empty 0, New 0; Tup 48038: Vac 53,\nKeep/VTL 0/0, Crash 0, UnUsed 89, MinLen 65, MaxLen 133; Re-using:\nFree/Avail. Space 9144/3348; EndEmpty/Avail. Pages 0/9. CPU 0.02s/0.01u sec.\nDEBUG: Index developer_primary_key: Pages 120; Tuples 48023: Deleted 53.\nCPU 0.00s/0.04u sec.\nNOTICE: Index developer_primary_key: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_recent_mod_key: Pages 119; Tuples 48023: Deleted 53.\nCPU 0.00s/0.04u sec.\nNOTICE: Index developer_recent_mod_key: NUMBER OF INDEX' TUPLES (48023) IS\nNOT THE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_approved: Pages 121; Tuples 48023: Deleted 53. CPU\n0.00s/0.04u sec.\nNOTICE: Index developer_approved: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_search_idx: Pages 204; Tuples 48023: Deleted 53. CPU\n0.00s/0.04u sec.\nNOTICE: Index developer_search_idx: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Rel developer: Pages: 515 --> 515; Tuple(s) moved: 34. 
CPU\n0.00s/0.03u sec.\nDEBUG: Index developer_primary_key: Pages 120; Tuples 48023: Deleted 34.\nCPU 0.00s/0.04u sec.\nNOTICE: Index developer_primary_key: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_recent_mod_key: Pages 119; Tuples 48023: Deleted 34.\nCPU 0.00s/0.04u sec.\nNOTICE: Index developer_recent_mod_key: NUMBER OF INDEX' TUPLES (48023) IS\nNOT THE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_approved: Pages 121; Tuples 48023: Deleted 34. CPU\n0.00s/0.03u sec.\nNOTICE: Index developer_approved: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\nDEBUG: Index developer_search_idx: Pages 204; Tuples 48023: Deleted 34. CPU\n0.00s/0.06u sec.\nNOTICE: Index developer_search_idx: NUMBER OF INDEX' TUPLES (48023) IS NOT\nTHE SAME AS HEAP' (48038).\n Recreate the index.\n\nI also appear to be getting this quite often:\n\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\n\n\nHere is output from me trying to re-create an index.\n\nbasement=# drop index developer_primary_key;\nDROP\nbasement=# create unique index developer_primary_key on\ndeveloper(developer_id);ERROR: Cannot create unique index. 
Table contains\nnon-unique values\nbasement=# select developer_id,count(*) from developer group by developer_id\nhaving count(*) > 1;\n developer_id | count\n--------------+-------\n 11107 | 2\n 18493 | 2\n 50983 | 2\n 50984 | 2\n 50985 | 2\n 50986 | 2\n 50987 | 2\n 50988 | 2\n 50989 | 2\n 50990 | 2\n 50991 | 2\n 50992 | 2\n 50993 | 2\n 50994 | 2\n 50995 | 2\n 50996 | 2\n 50997 | 2\n 51020 | 2\n 51021 | 2\n 51022 | 2\n 51023 | 2\n 51024 | 2\n 51025 | 2\n 51026 | 2\n 51029 | 2\n 51030 | 2\n 51031 | 2\n 51032 | 2\n 51033 | 2\n 51034 | 2\n 51035 | 2\n 51036 | 2\n 51037 | 2\n 51038 | 2\n 51039 | 2\n 51040 | 2\n 51041 | 2\n 51042 | 2\n 51043 | 3\n 51044 | 3\n 51045 | 3\n 51046 | 3\n 51047 | 3\n 51048 | 3\n 51049 | 3\n 51050 | 3\n 51051 | 3\n 51052 | 3\n 51053 | 3\n 51054 | 3\n 51055 | 3\n 51056 | 3\n 51057 | 3\n 51058 | 3\n 51059 | 3\n 51060 | 3\n 51061 | 3\n 51062 | 2\n 51063 | 2\n 51064 | 2\n 51065 | 2\n 51066 | 2\n 51067 | 2\n 51068 | 2\n 51069 | 2\n 51070 | 2\n 51071 | 2\n 51072 | 2\n 51073 | 2\n 51074 | 2\n 51075 | 2\n 51076 | 2\n 51152 | 2\n 51614 | 2\n 51615 | 2\n 51616 | 2\n 51617 | 2\n 51618 | 2\n 51619 | 2\n 51620 | 2\n 51621 | 2\n 51622 | 2\n 51623 | 2\n 51678 | 2\n 51680 | 2\n 51681 | 2\n 51682 | 2\n 51683 | 2\n 51768 | 2\n 51862 | 2\n 51863 | 2\n 51864 | 2\n 51950 | 2\n 52094 | 2\n 52095 | 2\n 52096 | 2\n 52097 | 2\n 52098 | 2\n 52099 | 2\n 52100 | 2\n 52101 | 2\n 52103 | 2\n 52104 | 2\n 52105 | 2\n 52106 | 2\n 52107 | 2\n 52108 | 2\n 52109 | 2\n 52110 | 2\n 52111 | 2\n 52112 | 2\n 52113 | 2\n 52114 | 2\n 52115 | 2\n 52116 | 2\n 52117 | 2\n 52118 | 2\n 52119 | 2\n 52120 | 2\n 52121 | 2\n 52122 | 2\n 52123 | 2\n 52124 | 2\n 52125 | 2\n 52126 | 2\n 52127 | 2\n 52128 | 2\n 52129 | 2\n 52130 | 2\n 52131 | 2\n 52132 | 2\n 52133 | 2\n 52134 | 2\n 52135 | 2\n 52136 | 2\n 52137 | 2\n 52167 | 2\n 52168 | 2\n 52169 | 2\n 52170 | 2\n 52171 | 2\n 52172 | 2\n 52173 | 2\n 52174 | 2\n 52175 | 2\n 52180 | 2\n 52181 | 2\n 52182 | 2\n 52222 | 2\n 52223 | 2\n 52224 | 2\n 
52225 | 2\n 52226 | 2\n 52227 | 2\n 52228 | 2\n 52229 | 2\n 52230 | 2\n 52232 | 2\n 52233 | 2\n 52234 | 2\n 52235 | 2\n 52236 | 2\n 52237 | 2\n 52238 | 2\n 52239 | 2\n 52240 | 2\n 52241 | 2\n 52242 | 2\n 52243 | 2\n 52244 | 2\n 52245 | 2\n 52246 | 2\n 52247 | 2\n 52248 | 2\n 52249 | 2\n 52250 | 2\n 52251 | 2\n 52465 | 2\n 52466 | 2\n 52467 | 2\n 52468 | 2\n 52469 | 2\n 52470 | 2\n 52471 | 2\n 52472 | 2\n 52473 | 2\n 52474 | 2\n 52475 | 2\n 52476 | 2\n 52477 | 2\n 52478 | 2\n 52479 | 2\n 52480 | 2\n 52481 | 2\n 52482 | 2\n 52483 | 2\n 52484 | 2\n 52485 | 2\n 52486 | 2\n 52487 | 2\n(200 rows)\n\nbasement=# select oid,developer_id from developer where developer_id =52487;\n oid | developer_id\n-----------+--------------\n 401440180 | 52487\n 401440180 | 52487\n\n\n\n\n\n", "msg_date": "Wed, 12 Dec 2001 10:13:42 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "ACK table corrupted, unique index violated." } ]
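The invariant behind the NOTICEs in the report above is that every heap tuple should have exactly one entry in each index; VACUUM complains when the counts diverge. A toy sketch of that check follows — the counts 48023/48038 are taken from the log above, but the data structures and function are invented purely to illustrate the comparison:

```python
# Toy model of the heap/index consistency check behind the
# "NUMBER OF INDEX' TUPLES ... IS NOT THE SAME AS HEAP'" notice.
# The tuple-id structures here are invented for illustration.
heap = {tid: f"row-{tid}" for tid in range(48038)}   # heap tuples keyed by tuple id
index = {tid: tid for tid in range(48023)}           # index entries -> heap tuple ids

def check(index_name, index_entries, heap_tuples):
    n_idx, n_heap = len(index_entries), len(heap_tuples)
    if n_idx != n_heap:
        return (f"NOTICE: Index {index_name}: NUMBER OF INDEX' TUPLES ({n_idx}) "
                f"IS NOT THE SAME AS HEAP' ({n_heap}). Recreate the index.")
    return None

msg = check("developer_primary_key", index, heap)
print(msg)
```

A mismatch in either direction is a red flag: fewer index entries than heap tuples (as here) means some rows are unreachable through the index, which is consistent with the duplicate rows surfacing only once the index was rebuilt.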
[ { "msg_contents": "What about the common file having an entry for each 'instance' set up on \nthe system?\n\nIt could be formatted something like...\n\nName DataDir ConfFile AutoStart\n\nthen pg_ctl could be called with an Instance name and start or stop the \ninstance.\nThe init.d script could scan the file looking for instances to start \nautomatically.\n\n$0.02\n\nmlw wrote:\n\n>Bruce Momjian wrote:\n>\n>>I wonder if we should go one step further. Should we be specifying the\n>>config file on the command line _rather_ than the data directory. We\n>>could then specify the data directory location in the config file. That\n>>seems like the direction we should be headed in, though I am not sure it\n>>is worth the added headache of the switch.\n>>\n>\n>That is what the patch I submitted does.\n>\n>In the postgresql.conf file, you can specify where the data directory\n>is, as well as where the pg_hba.conf file exists.\n>\n>The purpose I had in mind was to allow sharing of pg_hba.conf files and\n>keep configuration separate from data.\n>\n>One huge problem I have with symlinks is an admin has to \"notice\" that\n>two files in two separate directories, possibly on two different\n>volumes, are the same file, so it is very likely the ramifications of\n>editing one file are not obvious.\n>\n>If, in the database configuration file, pghbaconfig points to\n>\"/etc/pg_hba.conf\" it is likely that the global significance of the\n>file is obvious.\n>\n>(Note: I don't necessarily think \"pghbaconfig\" nor \"pgdatadir\" are the\n>best names for the parameters, but I couldn't think of anything else at\n>the time.)\n>\n>Symlinks are a perilous UNIX construct, yes, they make some things, that\n>would otherwise be a horrible kludge, elegant, but they are also no\n>substitute for a properly configurable application.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister 
YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\n\n", "msg_date": "Wed, 12 Dec 2001 12:22:31 -0500", "msg_from": "\"Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Re: Explicit configuration file" }, { "msg_contents": "\nIt certainly can be a slippery slope.\n\nDwayne Miller wrote:\n> \n> What about the common file having an entry for each 'instance' set up on\n> the system.\n> \n> It could be formatted something like...\n> \n> Name DataDir ConfFile AutoStart\n> \n> then pg_ctl could be called with an Instance name and start or stop the\n> instance.\n> the initd script could scan the file looking for instances to start\n> automatically.\n> \n> $0.02\n", "msg_date": "Wed, 12 Dec 2001 12:50:54 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Explicit configuration file" } ]
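The instances file proposed in this thread (one line per instance: Name DataDir ConfFile AutoStart) is straightforward for a startup script to scan. A sketch of the parsing follows; the instance names and paths are hypothetical, and real pg_ctl invocation is left out:

```python
# Sketch of parsing the proposed per-instance config file.  The field
# layout (Name DataDir ConfFile AutoStart) comes from the proposal above;
# the instance names and paths below are made up.
SAMPLE = """\
# name    datadir              conffile                     autostart
main      /var/lib/pgsql/main  /etc/postgresql/main.conf    yes
reports   /var/lib/pgsql/rpt   /etc/postgresql/rpt.conf     no
"""

def parse_instances(text):
    instances = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, datadir, conffile, autostart = line.split()
        instances[name] = {"datadir": datadir, "conffile": conffile,
                           "autostart": autostart.lower() == "yes"}
    return instances

inst = parse_instances(SAMPLE)
auto = [name for name, v in inst.items() if v["autostart"]]
print(auto)  # ['main']
```

An init script would then loop over the auto-start list, invoking something like `pg_ctl -D <datadir> start` per instance; `pg_ctl stop` with an instance name would do the reverse lookup through the same file.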
[ { "msg_contents": "Okay, here's a follow up to my previous messages \"ACK table corrupted,\nunique index violated.\"\n\nI've been trying to clean up the corruptions that i mentioned earlier. I\nfelt most comfortable shutting down all my application servers, restarting\npostgres, doing a dump of my database and rebuilding it with a pginit and\ncomplete reload. So far so good. I went to fix one of the corrupted tables\nand i have another strange experience. I'm still looking into other\npossibilities such as a hardware failure; but i thought this might be\ninteresting or helpful in the context of my previous post: Basically the\ntable with duplicate oid/id now has unique oid from the reload, so I'm going\nto delete the duplicate rows and recreate the unique index on the identity\ncolumn.\n\nbasement=# select count(*),developer_aka_id from developer_aka group by\ndeveloper_aka_id having count(*) <> 1;\n count | developer_aka_id\n-------+------------------\n 2 | 9789\n 2 | 10025\n 2 | 40869\n(3 rows)\n\nbasement=# select oid,* from developer_aka where developer_aka_id in\n(9789,10025,40869);\n oid | developer_id | developer_aka_id | first_name | last_name\n-------+--------------+------------------+-------------------+-----------\n 48390 | 1916 | 9789 | Chris | Smith\n 48402 | 35682 | 40869 | Donald \"Squirral\" | Fisk\n 48425 | 4209 | 10025 | Mike | Glosecki\n 48426 | 1916 | 9789 | Chris | Smith\n 48427 | 35682 | 40869 | Donald \"Squirral\" | Fisk\n 48428 | 4209 | 10025 | Mike | Glosecki\n(6 rows)\n\nbasement=# delete from developer_aka where oid in (48390,48402,48425);\nDELETE 3\nbasement=# select count(*),developer_aka_id from developer_aka group by\ndeveloper_aka_id having count(*) <> 1;\n count | developer_aka_id\n-------+------------------\n(0 rows)\n\nbasement=# create unique index developer_aka_pkey on\ndeveloper_aka(developer_aka_id);\nCREATE\nbasement=# VACUUM ANALYZE developer_aka;\nERROR: Cannot insert a duplicate key into unique index 
developer_aka_pkey\n\n\n", "msg_date": "Wed, 12 Dec 2001 11:30:47 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "problems with table corruption continued" }, { "msg_contents": "Tom & All:\n\nI've been looking into this problem some more and I've been able to\nconsistently reproduce the error. I've tested it on two different\nmachines; both running 7.1.3 on an i686-pc-linux-gnu configuration. One\nmachine is RH7.2 and the other is RH6.2.\n\nI still haven't been able to reproduce the problem with corrupted\nindexes/tables (NUMBER OF TUPLES IS NOT THE SAME AS HEAP -- duplicate rows\nwith same oid & pkey); but I'm hopeful that these two problems are related.\nAlso, i wanted to inform you that the same two tables became corrupted on\n12-15-2001 after my 12-12-2001 reload.\n\nI've attached three files, a typescript, and two sql files. I found that if\nthe commands were combined into a single file and run in a single pgsql\nsession, the error does not occur -- so it's important to follow the\ncommands exactly like they are in the typescript. If for some reason the\nerrors aren't reproducible on your machines, let me know and we will try to\nfind out what's unique about my setup.\n\nThanks.\n\n----- Original Message -----\nFrom: \"Brian Hirt\" <bhirt@mobygames.com>\nTo: \"Postgres Hackers\" <pgsql-hackers@postgresql.org>\nCc: \"Brian A Hirt\" <bhirt@berkhirt.com>\nSent: Wednesday, December 12, 2001 11:30 AM\nSubject: [HACKERS] problems with table corruption continued\n\n\n> Okay, here's a follow up to my previous messages \"ACK table corrupted,\n> unique index violated.\"\n>\n> I've been trying to clean up the corruptions that i mentioned earlier. I\n> felt most comfortable shutting down all my application servers, restarting\n> postgres, doing a dump of my database and rebuilding it with a pginit and\n> complete reload. So far so good. I went to fix one of the corrupted\ntables\n> and i have another strange experience. 
I'm still looking into other\n> possibilities such as a hardware failure; but i thought this might be\n> interesting or helpful in the context of my previous post: Basically the\n> table with duplicate oid/id now has unique oid from the relead, so I'm\ngoing\n> to delete the duplicate rows and recreate the unique index on the identity\n> column.\n>\n> basement=# select count(*),developer_aka_id from developer_aka group by\n> developer_aka_id having count(*) <> 1;\n> count | developer_aka_id\n> -------+------------------\n> 2 | 9789\n> 2 | 10025\n> 2 | 40869\n> (3 rows)\n>\n> basement=# select oid,* from developer_aka where developer_aka_id in\n> (9789,10025,40869);\n> oid | developer_id | developer_aka_id | first_name | last_name\n> -------+--------------+------------------+-------------------+-----------\n> 48390 | 1916 | 9789 | Chris | Smith\n> 48402 | 35682 | 40869 | Donald \"Squirral\" | Fisk\n> 48425 | 4209 | 10025 | Mike | Glosecki\n> 48426 | 1916 | 9789 | Chris | Smith\n> 48427 | 35682 | 40869 | Donald \"Squirral\" | Fisk\n> 48428 | 4209 | 10025 | Mike | Glosecki\n> (6 rows)\n>\n> basement=# delete from developer_aka where oid in (48390,48402,48425);\n> DELETE 3\n> basement=# select count(*),developer_aka_id from developer_aka group by\n> developer_aka_id having count(*) <> 1;\n> count | developer_aka_id\n> -------+------------------\n> (0 rows)\n>\n> basement=# create unique index developer_aka_pkey on\n> developer_aka(developer_aka_id);\n> CREATE\n> basement=# VACUUM ANALYZE developer_aka;\n> ERROR: Cannot insert a duplicate key into unique index developer_aka_pkey\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>", "msg_date": "Tue, 18 Dec 2001 08:53:59 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "Re: problems with table corruption continued" }, { "msg_contents": "Great, I'm also trying to create a reproducable test case for 
the original\nproblem i reported with duplicate rows/oids/pkeys. Maybe both problems are\na result of the same bug; i don't know.\n\nBTW, if you remove the index that is based on the plpgsql function\n\"developer_aka_search_idx\" the problem is not reproducible. But you've\nprobably figured that out by now.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Brian Hirt\" <bhirt@mobygames.com>\nCc: \"Postgres Hackers\" <pgsql-hackers@postgresql.org>; \"Brian A Hirt\"\n<bhirt@berkhirt.com>\nSent: Tuesday, December 18, 2001 10:52 AM\nSubject: Re: [HACKERS] problems with table corruption continued\n\n\n> \"Brian Hirt\" <bhirt@mobygames.com> writes:\n> > I've attached three files, a typescript, and two sql files. I found\nthat if\n> > the commands were combined into a single file and run in a single pgsql\n> > session, the error does not occur -- so it's important to follow the\n> > commands exactly like they are in the typescript. If for some reason\nthe\n> > errors aren't reproducible on your machines, let me know and we will try\nto\n> > find out what's unique about my setup.\n>\n> Indeed, it reproduces just fine here --- not only that, but I get an\n> Assert failure when I try it in current sources.\n>\n> Many, many thanks! I shall start digging forthwith...\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Tue, 18 Dec 2001 09:49:20 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "Tom,\n\nI'm glad you found the cause. I already deleted the index a few days back\nafter I strongly suspected that they were related to the problem. 
The\nreason i created the index was to help locate a record based on the name of\na person with an index lookup instead of a sequential scan on a table with\nseveral hundred thousand rows.\n\nI was trying to avoid adding additional computed fields to the tables and\nmaintaining them with triggers, indexing and searching on them. The index\nfunction seemed like an elegant solution to the problem; although at the\ntime I was completely unaware of 'iscachable';\n\nDo you think this might also explain the following errors i was seeing?\n\nNOTICE: Cannot rename init file\n/moby/pgsql/base/156130/pg_internal.init.19833 to\n/moby/pgsql/base/156130/pg_internal.init: No such file or directory\n\n--brian\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Brian Hirt\" <bhirt@mobygames.com>\nCc: \"Vadim Mikheev\" <vmikheev@sectorbase.com>; \"Postgres Hackers\"\n<pgsql-hackers@postgresql.org>; \"Brian A Hirt\" <bhirt@berkhirt.com>\nSent: Tuesday, December 18, 2001 11:48 AM\nSubject: Re: [HACKERS] problems with table corruption continued\n\n\n> \"Brian Hirt\" <bhirt@mobygames.com> writes:\n> > [ example case that creates a duplicate tuple ]\n>\n> Got it. The problem arises in this section of vacuum.c, near the bottom\n> of repair_frag:\n>\n> if (!(tuple.t_data->t_infomask & HEAP_XMIN_COMMITTED))\n> {\n> if ((TransactionId) tuple.t_data->t_cmin != myXID)\n> elog(ERROR, \"Invalid XID in t_cmin (3)\");\n> if (tuple.t_data->t_infomask & HEAP_MOVED_OFF)\n> {\n> itemid->lp_flags &= ~LP_USED;\n> num_tuples++;\n> }\n> else\n> elog(ERROR, \"HEAP_MOVED_OFF was expected (2)\");\n> }\n>\n> This is trying to get rid of the original copy of a tuple that's been\n> moved to another page. 
The problem is that your index function causes a\n> table scan, which means that by the time control gets here, someone else\n> has looked at this tuple and marked it good --- so the initial test of\n> HEAP_XMIN_COMMITTED fails, and the tuple is never removed!\n>\n> I would say that it's incorrect for vacuum.c to assume that\n> HEAP_XMIN_COMMITTED can't become set on HEAP_MOVED_OFF/HEAP_MOVED_IN\n> tuples during the course of vacuum's processing; after all, the xmin\n> definitely does refer to a committed xact, and we can't realistically\n> assume that we know what processing will be induced by user-defined\n> index functions. Vadim, what do you think? How should we fix this?\n>\n> In the meantime, Brian, I think you ought to get rid of your index\n> CREATE INDEX \"developer_aka_search_idx\" on \"developer_aka\" using btree\n( developer_aka_search_name (\"developer_aka_id\") \"varchar_ops\" );\n> I do not think this index is well-defined: an index function ought\n> to have the property that its output depends solely on its input and\n> cannot change over time. This function cannot make that claim.\n> (In 7.2, you can't even create the index unless you mark the function\n> iscachable, which is really a lie.) I'm not even real sure what you\n> expect the index to do for you...\n>\n> I do not know whether this effect explains all the reports of duplicate\n> tuples we've seen in the last few weeks. 
We need to ask whether the\n> other complainants were using index functions that tried to do table\n> scans.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Tue, 18 Dec 2001 10:35:22 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "Yes, both tables had similar functions, and the corruption was limited to\nonly those two tables.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Brian Hirt\" <bhirt@mobygames.com>\nCc: \"Postgres Hackers\" <pgsql-hackers@postgresql.org>; \"Brian A Hirt\"\n<bhirt@berkhirt.com>\nSent: Tuesday, December 18, 2001 11:53 AM\nSubject: Re: [HACKERS] problems with table corruption continued\n\n\n> \"Brian Hirt\" <bhirt@mobygames.com> writes:\n> > Great, I'm also trying to create a reproducible test case for the\noriginal\n> > problem i reported with duplicate rows/oids/pkeys. Maybe both problems\nare\n> > a result of the same bug; i don't know.\n>\n> Were the duplicate rows all in tables that had functional indexes based\n> on functions similar to developer_aka_search_name? The problem we're\n> seeing here seems to be due to VACUUM not being able to cope with the\n> side effects of the SELECT inside the index function.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Tue, 18 Dec 2001 10:36:28 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "\"Brian Hirt\" <bhirt@mobygames.com> writes:\n> I've attached three files, a typescript, and two sql files. I found that if\n> the commands were combined into a single file and run in a single pgsql\n> session, the error does not occur -- so it's important to follow the\n> commands exactly like they are in the typescript. 
If for some reason the\n> errors aren't reproducable on your machines, let me know and we will try to\n> find out what's unique about my setup.\n\nIndeed, it reproduces just fine here --- not only that, but I get an\nAssert failure when I try it in current sources.\n\nMany, many thanks! I shall start digging forthwith...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Dec 2001 12:52:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "Tom,\n\nI probably could have written the function like you suggest in this email,\nand i probably will look into doing so. This was the first time I tried\ncreating an index function, and one of the few times i've used plpgsql. In\ngeneral i have very little experience with doing this kind of stuff in\npostgres (i try to stick to standard SQL as much as possible) and it looks\nlike i've stumbled onto this problem because of a bad design decision on my\npart and a lack of understanding of how index functions work.\n\nThanks for the suggestion.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Brian Hirt\" <bhirt@mobygames.com>\nCc: \"Postgres Hackers\" <pgsql-hackers@postgresql.org>; \"Brian A Hirt\"\n<bhirt@berkhirt.com>\nSent: Tuesday, December 18, 2001 12:20 PM\nSubject: Re: [HACKERS] problems with table corruption continued\n\n\n> \"Brian Hirt\" <bhirt@mobygames.com> writes:\n> > I was trying to avoid adding additional computed fields to the tables\nand\n> > maintaining them with triggers, indexing and searching on them. The\nindex\n> > function seemed like an elegant solution to the problem\n>\n> Understood, but can you write the index function in a way that avoids\n> having it do a SELECT to get at data that it hasn't been passed? 
I'm\n> wondering if you can't define the function as just\n> f(first_name, last_name) = upper(first_name || ' ' || last_name)\n> and create the index on f(first_name, last_name). You haven't shown us\n> the queries you expect the index to be helpful for, so maybe this is not\n> workable...\n>\n> regards, tom lane\n>\n\n", "msg_date": "Tue, 18 Dec 2001 10:58:24 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "Well, when my application starts, about 100 backend database connections\nstart up at the same time so this fits in with the circumstance you\ndescribe. However, I'm just using a standard ext2 filesystem on 2.2 linux\nkernel.\n\nIt's good to know that this error should be harmless.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Brian Hirt\" <bhirt@mobygames.com>\nCc: \"Postgres Hackers\" <pgsql-hackers@postgresql.org>; \"Brian A Hirt\"\n<bhirt@berkhirt.com>\nSent: Tuesday, December 18, 2001 12:32 PM\nSubject: Re: [HACKERS] problems with table corruption continued\n\n\n> \"Brian Hirt\" <bhirt@mobygames.com> writes:\n> > Do you think this might also explain the following errors i was seeing?\n>\n> > NOTICE: Cannot rename init file\n> > /moby/pgsql/base/156130/pg_internal.init.19833 to\n> > /moby/pgsql/base/156130/pg_internal.init: No such file or directory\n>\n> No, that error is not coming from anywhere near VACUUM; it's from\n> relcache startup (see write_irels in src/backend/utils/cache/relcache.c).\n> The rename source file has just been created in the very same routine,\n> so it's difficult to see how one would get a \"No such file\" failure from\n> rename().\n>\n> It is possible that several backends create temp init files at the\n> same time and all try to rename their own temp files into place as\n> the pg_internal.init file. 
However, that should work: the rename\n> man page says\n>\n> The rename() system call causes the source file to be renamed to\n> target. If target exists, it is first removed. Both source and\n> target must be of the same type (that is, either directories or\n> nondirectories), and must reside on the same file system.\n>\n> If target can be created or if it existed before the call, rename()\n> guarantees that an instance of target will exist, even if the system\n> crashes in the midst of the operation.\n>\n> so we should end up with the extra copies deleted and just one init file\n> remaining after the slowest backend renames its copy into place.\n>\n> Do you by chance have your database directory mounted via NFS, or some\n> other strange filesystem where the normal semantics of concurrent rename\n> might not work quite right?\n>\n> FWIW, I believe this condition is pretty harmless, but it would be nice\n> to understand exactly why you're seeing the message.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Tue, 18 Dec 2001 11:07:26 -0700", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "\"Brian Hirt\" <bhirt@mobygames.com> writes:\n> [ example case that creates a duplicate tuple ]\n\nGot it. The problem arises in this section of vacuum.c, near the bottom\nof repair_frag:\n\n if (!(tuple.t_data->t_infomask & HEAP_XMIN_COMMITTED))\n {\n if ((TransactionId) tuple.t_data->t_cmin != myXID)\n elog(ERROR, \"Invalid XID in t_cmin (3)\");\n if (tuple.t_data->t_infomask & HEAP_MOVED_OFF)\n {\n itemid->lp_flags &= ~LP_USED;\n num_tuples++;\n }\n else\n elog(ERROR, \"HEAP_MOVED_OFF was expected (2)\");\n }\n\nThis is trying to get rid of the original copy of a tuple that's been\nmoved to another page. 
The problem is that your index function causes a\ntable scan, which means that by the time control gets here, someone else\nhas looked at this tuple and marked it good --- so the initial test of\nHEAP_XMIN_COMMITTED fails, and the tuple is never removed!\n\nI would say that it's incorrect for vacuum.c to assume that\nHEAP_XMIN_COMMITTED can't become set on HEAP_MOVED_OFF/HEAP_MOVED_IN\ntuples during the course of vacuum's processing; after all, the xmin\ndefinitely does refer to a committed xact, and we can't realistically\nassume that we know what processing will be induced by user-defined\nindex functions. Vadim, what do you think? How should we fix this?\n\nIn the meantime, Brian, I think you ought to get rid of your index\nCREATE INDEX \"developer_aka_search_idx\" on \"developer_aka\" using btree ( developer_aka_search_name (\"developer_aka_id\") \"varchar_ops\" );\nI do not think this index is well-defined: an index function ought\nto have the property that its output depends solely on its input and\ncannot change over time. This function cannot make that claim.\n(In 7.2, you can't even create the index unless you mark the function\niscachable, which is really a lie.) I'm not even real sure what you\nexpect the index to do for you...\n\nI do not know whether this effect explains all the reports of duplicate\ntuples we've seen in the last few weeks. We need to ask whether the\nother complainants were using index functions that tried to do table\nscans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Dec 2001 13:48:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "\"Brian Hirt\" <bhirt@mobygames.com> writes:\n> Great, I'm also trying to create a reproducable test case for the original\n> problem i reported with duplicate rows/oids/pkeys. 
Maybe both problems are\n> a result of the same bug; i don't know.\n\nWere the duplicate rows all in tables that had functional indexes based\non functions similar to developer_aka_search_name? The problem we're\nseeing here seems to be due to VACUUM not being able to cope with the\nside effects of the SELECT inside the index function.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Dec 2001 13:53:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "\"Brian Hirt\" <bhirt@mobygames.com> writes:\n> I was trying to avoid adding additional computed fields to the tables and\n> maintaining them with triggers, indexing and searching on them. The index\n> function seemed like an elegant solution to the problem\n\nUnderstood, but can you write the index function in a way that avoids\nhaving it do a SELECT to get at data that it hasn't been passed? I'm\nwondering if you can't define the function as just\n\tf(first_name, last_name) = upper(first_name || ' ' || last_name)\nand create the index on f(first_name, last_name). 
You haven't shown us\nthe queries you expect the index to be helpful for, so maybe this is not\nworkable...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Dec 2001 14:20:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: problems with table corruption continued " }, { "msg_contents": "\"Brian Hirt\" <bhirt@mobygames.com> writes:\n> Do you think this might also explain the following errors i was seeing?\n\n> NOTICE: Cannot rename init file\n> /moby/pgsql/base/156130/pg_internal.init.19833 to\n> /moby/pgsql/base/156130/pg_internal.init: No such file or directory\n\nNo, that error is not coming from anywhere near VACUUM; it's from\nrelcache startup (see write_irels in src/backend/utils/cache/relcache.c).\nThe rename source file has just been created in the very same routine,\nso it's difficult to see how one would get a \"No such file\" failure from\nrename().\n\nIt is possible that several backends create temp init files at the\nsame time and all try to rename their own temp files into place as\nthe pg_internal.init file. However, that should work: the rename\nman page says\n\n The rename() system call causes the source file to be renamed to\n target. If target exists, it is first removed. 
Both source and\n target must be of the same type (that is, either directories or\n nondirectories), and must reside on the same file system.\n\n If target can be created or if it existed before the call, rename()\n guarantees that an instance of target will exist, even if the system\n crashes in the midst of the operation.\n\nso we should end up with the extra copies deleted and just one init file\nremaining after the slowest backend renames its copy into place.\n\nDo you by chance have your database directory mounted via NFS, or some\nother strange filesystem where the normal semantics of concurrent rename\nmight not work quite right?\n\nFWIW, I believe this condition is pretty harmless, but it would be nice\nto understand exactly why you're seeing the message.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Dec 2001 14:32:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: problems with table corruption continued " } ]
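Tom's fix in the thread above hinges on one property: an index function's output must depend only on its arguments — e.g. f(first_name, last_name) = upper(first_name || ' ' || last_name) — never on a hidden SELECT against other tables. A minimal Python sketch of why that property keeps a functional index valid (illustrative only; in PostgreSQL this would be an iscachable SQL function plus CREATE INDEX, not Python):

```python
def search_key(first_name, last_name):
    # Depends only on its inputs -- the property Tom asks of an index
    # function ("iscachable" in 7.2 terms): same arguments, same result,
    # and no table scan hidden inside.
    return ("%s %s" % (first_name, last_name)).upper()

def build_functional_index(rows):
    # rows: {row_id: (first_name, last_name)}
    # A functional index maps search_key(...) -> matching row ids; it
    # stays valid precisely because search_key consults no other state.
    index = {}
    for row_id, (first, last) in rows.items():
        index.setdefault(search_key(first, last), []).append(row_id)
    return index
```

Because search_key is deterministic, rebuilding the index at any time yields the same entries; an index keyed by a function that scans other tables can drift out of sync with the data, which is the shape of the VACUUM interaction described above.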
[ { "msg_contents": "Build and test run sucessfully on Linux/390 and\nLinux/PlayStation2 with low-order-digit diffs in\ngeometry.\n\nPermaine\n\n", "msg_date": "Wed, 12 Dec 2001 15:23:40 -0500", "msg_from": "\"Permaine Cheung\" <pcheung@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing " }, { "msg_contents": "> Build and test run sucessfully on Linux/390 and\n> Linux/PlayStation2 with low-order-digit diffs in\n> geometry.\n\nOooh, too cool. Thanks for the report! Could you post results (the\nregression.out file etc) on the developer's web site? Especially for the\nnew platform: we probably need more details on how easy or hard it was\nto get the new platform working...\n\n - Thomas\n", "msg_date": "Wed, 12 Dec 2001 22:38:30 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "Sure, I'll post the results. Actually, on S/390, there's no need\nto change anything, but on the PS/2, the test-and-set code\ndoesn't work on that CPU, Tom Lane suggested a workaround -\nundefine HAS_TEST_AND_SET and remove slock_t type\ndefinition in the port include file. With those modifications,\nboth the build and tests work fine.\n\nPermaine\n\n----- Original Message ----- \nFrom: Thomas Lockhart <lockhart@fourpalms.org>\nTo: Permaine Cheung <pcheung@redhat.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Wednesday, December 12, 2001 5:38 PM\nSubject: Re: Third call for platform testing\n\n\n> > Build and test run sucessfully on Linux/390 and\n> > Linux/PlayStation2 with low-order-digit diffs in\n> > geometry.\n> \n> Oooh, too cool. Thanks for the report! Could you post results (the\n> regression.out file etc) on the developer's web site? 
Especially for the\n> new platform: we probably need more details on how easy or hard it was\n> to get the new platform working...\n> \n> - Thomas\n\n", "msg_date": "Wed, 12 Dec 2001 18:07:06 -0500", "msg_from": "\"Permaine Cheung\" <pcheung@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> Build and test run sucessfully on Linux/390 and\n> Linux/PlayStation2 with low-order-digit diffs in\n> geometry.\n\nCool - Postgres on a PlayStation 2!!!\n\nChris\n\n", "msg_date": "Thu, 13 Dec 2001 09:53:01 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing " }, { "msg_contents": "> Sure, I'll post the results. Actually, on S/390, there's no need\n> to change anything, but on the PS/2, the test-and-set code\n> doesn't work on that CPU, Tom Lane suggested a workaround -\n> undefine HAS_TEST_AND_SET and remove slock_t type\n> definition in the port include file. With those modifications,\n> both the build and tests work fine.\n\nOK, S/390 is easy.\n\nFor PS/2, how does the CPU identify itself? If you have to remove the\nslock_t definition from the port include file then presumably it does\nidentify itself as one of the already supported CPUs (alpha, arm, ia64,\ni386, mips, ppc, sparc, s390), right? Or does some other default setting\nget used that you then #undef in that include file?\n\nA few more details would probably let someone reproduce your result,\nwhich would be enough to consider this as a supported platform. That is,\nas long as you aren't running the only PlayStation 2 in the world with\nLinux?? 
Heck, on second thought we'd consider it supported even so ;)\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 07:11:35 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> \n> OK, S/390 is easy.\n> \n> For PS/2, how does the CPU identify itself? If you have to remove the\n> slock_t definition from the port include file then presumably it does\n> identify itself as one of the already supported CPUs (alpha, arm, ia64,\n> i386, mips, ppc, sparc, s390), right? Or does some other default setting\n> get used that you then #undef in that include file?\n\nI specified --template=linux. :)\n\n> A few more details would probably let someone reproduce your result,\n> which would be enough to consider this as a supported platform. That is,\n> as long as you aren't running the only PlayStation 2 in the world with\n> Linux?? Heck, on second thought we'd consider it supported even so ;)\n> \n\nI'm trying to submit the report, however, I've been getting a\nPostgreSQL query failure saying regresstests does not exist in \n/usr/local/www/developer/regress/regress.php.\n\nPermaine\n\n", "msg_date": "Thu, 13 Dec 2001 08:10:25 -0500", "msg_from": "\"Permaine Cheung\" <pcheung@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "On Thu, 13 Dec 2001, Permaine Cheung wrote:\n\n> >\n> > OK, S/390 is easy.\n> >\n> > For PS/2, how does the CPU identify itself? If you have to remove the\n> > slock_t definition from the port include file then presumably it does\n> > identify itself as one of the already supported CPUs (alpha, arm, ia64,\n> > i386, mips, ppc, sparc, s390), right? Or does some other default setting\n> > get used that you then #undef in that include file?\n>\n> I specified --template=linux. 
:)\n>\n> > A few more details would probably let someone reproduce your result,\n> > which would be enough to consider this as a supported platform. That is,\n> > as long as you aren't running the only PlayStation 2 in the world with\n> > Linux?? Heck, on second thought we'd consider it supported even so ;)\n> >\n>\n> I'm trying to submit the report, however, I've been getting a\n> PostgreSQL query failure saying regresstests does not exist in\n> /usr/local/www/developer/regress/regress.php.\n\nMarc moved the database yesterday to another machine. The database\nI use for mirroring, regression tests, the website, and a bunch of\nother things has yet to be restored. I can't do it myself because\nthe other machine is no longer listening. So I'm just about 100%\nout of business.\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 13 Dec 2001 08:57:17 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> > For PS/2, how does the CPU identify itself? If you have to remove the\n> > slock_t definition from the port include file then presumably it does\n> > identify itself as one of the already supported CPUs (alpha, arm, ia64,\n> > i386, mips, ppc, sparc, s390), right? Or does some other default setting\n> > get used that you then #undef in that include file?\n> I specified --template=linux. :)\n\nAnd it is an x86 CPU?\n\n> > A few more details would probably let someone reproduce your result,\n> > which would be enough to consider this as a supported platform. 
That is,\n> > as long as you aren't running the only PlayStation 2 in the world with\n> > Linux?? Heck, on second thought we'd consider it supported even so ;)\n> I'm trying to submit the report, however, I've been getting a\n> PostgreSQL query failure saying regresstests does not exist in\n> /usr/local/www/developer/regress/regress.php.\n\nYeah, something apparently broke on the server. Vince?\n\nIn the meantime, you can post those results to this mailing list (or to\nthe -ports mailing list).\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 14:54:15 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "...\n> Marc moved the database yesterday to another machine. The database\n> I use for mirroring, regression tests, the website, and a bunch of\n> other things has yet to be restored. I can't do it myself because\n> the other machine is no longer listening. So I'm just about 100%\n> out of business.\n\nWhoops. Marc, can we help with something here?\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 14:55:13 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "> And it is an x86 CPU?\n\nIt's a mips CPU.\n\n\nPermaine\n\n", "msg_date": "Thu, 13 Dec 2001 12:50:14 -0500", "msg_from": "\"Permaine Cheung\" <pcheung@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "On Thu, 13 Dec 2001, Permaine Cheung wrote:\n\n> > And it is an x86 CPU?\n>\n> It's a mips CPU.\n\nIs there some way of connecting a hard disk to the PS/2? 
My kid has\none but I never looked at it that close.\n\nBTW, regression tests are back up.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 13 Dec 2001 12:51:42 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "On Thu, Dec 13, 2001 at 12:51:42PM -0500, Vince Vielhaber wrote:\n> On Thu, 13 Dec 2001, Permaine Cheung wrote:\n> \n> > > And it is an x86 CPU?\n> >\n> > It's a mips CPU.\n> \n> Is there some way of connecting a hard disk to the PS/2? My kid has\n> one but I never looked at it that close.\n\nhttp://www.linuxworld.com/ic_717468_6995_1-3133.html\n\nLinux for PS2 came out of Sony/UK, apparently. It consists of:\n\n * DVD-ROM for all the software\n * USB keyboard\n * USB wheel mouse\n * 10/100 Ethernet adapter\n * 40GB hard drive w/ PCMCIA connector\n * PlayStation2->VGA adapter\n\nCan't seem to _find_ it for the US ones though. Seems early Japanese\nversion of the PS@ had a PCMCIA slot.\n\nRoss\n\n", "msg_date": "Thu, 13 Dec 2001 12:53:48 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing" } ]
[ { "msg_contents": "Having heard nothing on the list yet about the reported unsuccessful\nparallel regression tests on Cygwin with 7.2b3, I thought I'd have a play\nmyself having found a spare few minutes.\n\nSystem: Windows XP Professional, PIII 850MHz, 512Mb RAM, 32Gb disk\nuname -a: CYGWIN_NT-5.1 PC20 1.3.3(0.46/3/2) 2001-09-12 23:54 i686 unknown\n\nSequential regression tests pass repeatedly.\n\nParallel regression tests appear to fail almost randomly. The best I got so\nfar was 3 failures (out of 79 tests), the worst was about 15. In particular\nthe horology & misc tests always seems to fail, whilst the others vary. With\nthe exception of the misc test, all failures appear to be due to failed\nconnections eg:\n\n--- 1,3 ----\n! psql: could not connect to server: Connection refused\n! \tIs the server running on host localhost and accepting\n! \tTCP/IP connections on port 65432?\n\nThe misc test fails with:\n\n*** ./expected/misc.out\tWed Dec 12 20:34:59 2001\n--- ./results/misc.out\tWed Dec 12 21:52:29 2001\n***************\n*** 567,573 ****\n a_star\n abstime_tbl\n aggtest\n- arrtest\n b\n b_star\n box_tbl\n--- 567,572 ----\n***************\n*** 633,641 ****\n point_tbl\n polygon_tbl\n ramp\n- random_tbl\n real_city\n- reltime_tbl\n road\n serialtest\n serialtest_f2_seq\n--- 632,638 ----\n***************\n*** 652,662 ****\n timestamp_tbl\n timestamptz_tbl\n timetz_tbl\n- tinterval_tbl\n toyemp\n varchar_tbl\n xacttest\n! (93 rows)\n \n --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer')))\nAS equip_name;\n SELECT hobbies_by_name('basketball');\n--- 649,658 ----\n timestamp_tbl\n timestamptz_tbl\n timetz_tbl\n toyemp\n varchar_tbl\n xacttest\n! 
(89 rows)\n \n --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer')))\nAS equip_name;\n SELECT hobbies_by_name('basketball');\n\n\nThough again, this varies with each run - looking at misc.sql I assume that\nthis is because of the earlier failures?\n\nI have no idea what's causing these connection failures, but if anyone else\nhas any ideas and would like me to try out anything please let me know -\nassuming of course it's not too late for 7.2 yet... \n\nRegards, Dave.\n\n-- \nDave Page (dpage@postgresql.org)\nhttp://pgadmin.postgresql.org/ \n", "msg_date": "Wed, 12 Dec 2001 22:18:57 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Platform Testing - Cygwin" }, { "msg_contents": "> Having heard nothing on the list yet about the reported unsuccessful\n> parallel regression tests on Cygwin with 7.2b3, I thought I'd have a play\n> myself having found a spare few minutes.\n\nTom Lane has speculated that some optimizations around our locking code\n(which had been redone for 7.2) might be the culprit for problems in\nCygwin as it apparently was for AIX. He has since fixed the problems at\nleast under AIX.\n\nCould you repeat the test with 7.2b4 (out today??)?.\n\n - Thomas\n\n> System: Windows XP Professional, PIII 850MHz, 512Mb RAM, 32Gb disk\n> uname -a: CYGWIN_NT-5.1 PC20 1.3.3(0.46/3/2) 2001-09-12 23:54 i686 unknown\n> Sequential regression tests pass repeatedly.\n> Parallel regression tests appear to fail almost randomly...\n", "msg_date": "Thu, 13 Dec 2001 05:57:37 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" }, { "msg_contents": "Dave,\n\nOn Wed, Dec 12, 2001 at 10:18:57PM -0000, Dave Page wrote:\n> Parallel regression tests appear to fail almost randomly. The best I got so\n> far was 3 failures (out of 79 tests), the worst was about 15. 
In particular\n> the horology & misc tests always seems to fail, whilst the others vary. With\n> the exception of the misc test, all failures appear to be due to failed\n> connections eg:\n> \n> --- 1,3 ----\n> ! psql: could not connect to server: Connection refused\n> ! \tIs the server running on host localhost and accepting\n> ! \tTCP/IP connections on port 65432?\n\nThe above is a known MS Winsock limitation and is documented in FAQ_MSWIN:\n\n 2. make check can generate spurious regression test failures due to\n overflowing the listen() backlog queue which causes connection\n refused errors.\n\n> System: Windows XP Professional, PIII 850MHz, 512Mb RAM, 32Gb disk\n ^^^^^^^^^^^^\n\nYour system has a backlog limit of 5. Although a little dated, see the\nfollowing for details:\n\n http://support.microsoft.com/support/kb/articles/Q127/1/44.asp\n\nJason\n", "msg_date": "Thu, 13 Dec 2001 07:32:59 -0500", "msg_from": "Jason Tishler <jason@tishler.net>", "msg_from_op": false, "msg_subject": "Re: Platform Testing - Cygwin" }, { "msg_contents": "\n\nJason Tishler wrote:\n\n>Dave,\n>\n>On Wed, Dec 12, 2001 at 10:18:57PM -0000, Dave Page wrote:\n>\n>>Parallel regression tests appear to fail almost randomly. The best I got so\n>>far was 3 failures (out of 79 tests), the worst was about 15. In particular\n>>the horology & misc tests always seems to fail, whilst the others vary. With\n>>the exception of the misc test, all failures appear to be due to failed\n>>connections eg:\n>>\n>>--- 1,3 ----\n>>! psql: could not connect to server: Connection refused\n>>! \tIs the server running on host localhost and accepting\n>>! \tTCP/IP connections on port 65432?\n>>\n>\n>The above is a known MS Winsock limitation and is documented in FAQ_MSWIN:\n>\n> 2. 
make check can generate spurious regression test failures due to\n> overflowing the listen() backlog queue which causes connection\n> refused errors.\n>\nCould this not be \"fixed\" in client libs, by having a retry count/timeout.\n\nI guess that having libpq (or any other client) retry the initial \nconnection would solve\nmost of these short queue problems.\n\n>>System: Windows XP Professional, PIII 850MHz, 512Mb RAM, 32Gb disk\n>>\n> ^^^^^^^^^^^^\n>\n>Your system has a backlog limit of 5. Although a little dated, see the\n>following for details:\n>\n> http://support.microsoft.com/support/kb/articles/Q127/1/44.asp\n>\n>Jason\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n", "msg_date": "Fri, 14 Dec 2001 23:24:27 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I guess that having libpq (or any other client) retry the initial \n> connection would solve most of these short queue problems.\n\nAnd get us accused of DOS attempts. Repeated connection attempts\nafter one has been rejected will be seen as unfriendly behavior by\na lot of people.\n\nMicrosoft clearly does not want people running servers on the non-server\nversions of Windows, and I don't see why we should go out of our way\nto circumvent that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Dec 2001 17:38:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > I guess that having libpq (or any other client) retry the initial\n> > connection would solve most of these short queue problems.\n> \n> And get us accused of DOS attempts. 
Repeated connection attempts\n> after one has been rejected will be seen as unfriendly behavior by\n> a lot of people.\n\nAFAIK sendmail keeps trying for days :)\n\n> Microsoft clearly does not want people running servers on the non-server\n> versions of Windows, and I don't see why we should go out of our way\n> to circumvent that.\n\nOk. Just a thought.\n\n-------------\nHannu\n", "msg_date": "Sat, 15 Dec 2001 13:16:44 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" } ]
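Hannu's client-side retry idea above (which Tom argues against) would amount to something like the following sketch — a hypothetical illustration, not libpq behavior — with a small capped attempt count so a connection refused by a full Winsock listen() backlog (limited to 5 on non-server Windows) gets a few more chances without hammering the server:

```python
import errno
import socket
import time

def connect_with_retry(host, port, attempts=3, delay=0.05):
    # Retry only on ECONNREFUSED, the error a full listen() backlog
    # produces; any other failure is raised immediately.  The attempt
    # cap keeps this from degenerating into the DOS-like behavior Tom
    # objects to.
    last_error = None
    for _ in range(attempts):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect((host, port))
            return sock
        except OSError as exc:
            sock.close()
            if exc.errno != errno.ECONNREFUSED:
                raise
            last_error = exc
        time.sleep(delay)
    raise last_error
```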
[ { "msg_contents": "While trying to update the regression test database for PostgreSQL 7.1 beta2\nfor SGI Irix 6.5 (it passed), I got the following message:\n\nWarning: PostgreSQL query failed: ERROR: Relation 'regresstests' does not exist in\n/usr/local/www/developer/regress/regress.php on line 258\nDatabase write failed.\n\nPlease let me know when it's working again. --Bob\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n", "msg_date": "Wed, 12 Dec 2001 17:29:04 -0500 (EST)", "msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>", "msg_from_op": true, "msg_subject": "Regression test database is not working" }, { "msg_contents": "On Wed, 12 Dec 2001, Robert E. Bruccoleri wrote:\n\n> While trying to update the regression test database for PostgreSQL 7.1 beta2\n> for SGI Irix 6.5 (it passed), I got the following message:\n>\n> Warning: PostgreSQL query failed: ERROR: Relation 'regresstests' does not exist in\n> /usr/local/www/developer/regress/regress.php on line 258\n> Database write failed.\n>\n> Please let me know when it's working again. 
--Bob\n\nThere's a whole lot more than that missing.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 13 Dec 2001 06:02:12 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Regression test database is not working" } ]
[ { "msg_contents": "I want to thank Barry and Dave and others for taking the lead in\nimproving our jdbc code. Having you people evaluate and apply patches\nis a big help. In the past, our jdbc interface was quite lacking, but\nno more. This release will have major jdbc improvements, and it will\nonly get better.\n\nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Dec 2001 20:30:32 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "New JDBC leadership" }, { "msg_contents": "\nI'd like to thank everyone who has been submitting patches! Keep them\ncoming. In general the submissions are of great quality, at least the\nones I have looked at. This makes the jdbc driver, and postgresql much\nmore usable for everyone.\n\nDave\n\n\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Bruce Momjian\nSent: Wednesday, December 12, 2001 8:31 PM\nTo: PostgreSQL jdbc list; PostgreSQL-development\nSubject: [JDBC] New JDBC leadership\n\n\nI want to thank Barry and Dave and others for taking the lead in\nimproving our jdbc code. Having you people evaluate and apply patches\nis a big help. In the past, our jdbc interface was quite lacking, but\nno more. This release will have major jdbc improvements, and it will\nonly get better.\n\nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania\n19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n", "msg_date": "Wed, 12 Dec 2001 21:21:45 -0500", "msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: New JDBC leadership" } ]
[ { "msg_contents": "Hi,\n\nTom - you mentioned recently that your changes to the parser mean that my\nadd unique code is deprecated. You also mentioned that your changes will\nmake ADD PRIMARY KEY also work.\n\nIf that is true, then the fact that both of these work should be put in the\nHISTORY file (ADD UNIQUE already is) and documented on the ALTER TABLE docs.\nThis should probably be done before a release.\n\nAlso, I don't know if this should be done so late in the cycle, but the ADD\nUNIQUE regression test I added should probably be copied as an ADD PRIMARY\nKEY regression test...\n\nChris\n\n", "msg_date": "Thu, 13 Dec 2001 10:06:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "ADD PRIMARY KEY and ADD UNIQUE" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Hi,\n> \n> Tom - you mentioned recently that your changes to the parser mean that my\n> add unique code is deprecated. You also mentioned that your changes will\n> make ADD PRIMARY KEY also work.\n> \n> If that is true, then the fact that both of these work should be put in the\n> HISTORY file (ADD UNIQUE already is) and documented on the ALTER TABLE docs.\n> This should probably be done before a release.\n\nAdded to HISTORY and release.sgml. I don't see any mention in\nalter_table.sgml.\n\n> Also, I don't know if this should be done so late in the cycle, but the ADD\n> UNIQUE regression test I added should probably be copied as an ADD PRIMARY\n> KEY regression test...\n\nCan you send over a patch? I am not sure what you are suggesting here.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 Jan 2002 00:39:34 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ADD PRIMARY KEY and ADD UNIQUE" } ]
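Both ALTER TABLE ... ADD UNIQUE and ADD PRIMARY KEY ultimately have to verify that the existing rows already satisfy the constraint — the server does this by building a unique index, and PRIMARY KEY additionally implies NOT NULL. A rough Python sketch of that validation rule (not PostgreSQL's actual code path):

```python
def can_add_constraint(rows, key_columns, primary_key=False):
    # rows: list of dicts; key_columns: tuple of column names.
    # ADD UNIQUE fails if any key value appears twice; ADD PRIMARY KEY
    # additionally fails on NULLs, since PRIMARY KEY implies NOT NULL.
    seen = set()
    for row in rows:
        key = tuple(row[col] for col in key_columns)
        if any(value is None for value in key):
            if primary_key:
                return False
            continue  # SQL NULLs compare distinct, so they never collide under UNIQUE
        if key in seen:
            return False
        seen.add(key)
    return True
```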
[ { "msg_contents": "Hi!\nIn this attachment is my changes in Postgres-7.1.3 for QNX6 Neutrino. Copy this \ninto work directory and run:\npatch -u -i pgsql713-qnx6.diff -p1\n\nBest regards\nAndy Latin\n----\n http://mail.Rambler.ru/\n http://ad.rambler.ru/ban.clk?pg=1691&bn=9346", "msg_date": "Thu, 13 Dec 2001 08:01:26 +0300 (MSK)", "msg_from": "Andy Latin <303401@rambler.ru>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours. This will have to be\nmerged with the other QNX patch.\n\n---------------------------------------------------------------------------\n\n\nAndy Latin wrote:\n> Hi!\n> In this attachment is my changes in Postgres-7.1.3 for QNX6 Neutrino. Copy this \n> into work directory and run:\n> patch -u -i pgsql713-qnx6.diff -p1\n> \n> Best regards\n> Andy Latin\n> ----\n> ?????????? ????? http://mail.Rambler.ru/\n> ???????-??????? http://ad.rambler.ru/ban.clk?pg=1691&bn=9346\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 19:41:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\nAndy, this is going to need work before being applied, specifically to\nbreak out the QNX-specific stuff and the posix thread stuff. Could you\nperhaps split this into two patches and base it against 7.2?\n\n---------------------------------------------------------------------------\n\nAndy Latin wrote:\n> Hi!\n> In this attachment is my changes in Postgres-7.1.3 for QNX6 Neutrino. Copy this \n> into work directory and run:\n> patch -u -i pgsql713-qnx6.diff -p1\n> \n> Best regards\n> Andy Latin\n> ----\n> ?????????? ????? http://mail.Rambler.ru/\n> ???????-??????? http://ad.rambler.ru/ban.clk?pg=1691&bn=9346\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Mar 2002 22:38:22 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: " } ]
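Bruce's request above — split pgsql713-qnx6.diff into a QNX-specific patch and a POSIX-thread patch — is essentially a partition of the diff by file. A hypothetical sketch of that partitioning, assuming each per-file section of the patch begins with a "diff " or "Index:" header line (the real patch's headers may differ):

```python
def split_patch(diff_text, matches):
    # Partition a multi-file patch into (matching, rest): each per-file
    # section is routed by handing its header line to matches(header).
    # Assumes sections begin with "diff " or "Index:" header lines.
    matching, rest = [], []
    current, bucket = [], rest
    for line in diff_text.splitlines(True):
        if line.startswith("diff ") or line.startswith("Index:"):
            bucket.extend(current)  # flush the previous section
            current = []
            bucket = matching if matches(line) else rest
        current.append(line)
    bucket.extend(current)
    return "".join(matching), "".join(rest)
```

Each half can then be saved and applied independently with the same `patch -u ... -p1` invocation given in the thread.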
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Thomas Lockhart [mailto:lockhart@fourpalms.org] \n> Sent: 13 December 2001 05:58\n> To: Dave Page\n> Cc: 'pgsql-hackers@postgresql.org'; 'pgsql-cygwin@postgresql.org'\n> Subject: Re: [HACKERS] Platform Testing - Cygwin\n> \n> \n> > Having heard nothing on the list yet about the reported \n> unsuccessful \n> > parallel regression tests on Cygwin with 7.2b3, I thought \n> I'd have a \n> > play myself having found a spare few minutes.\n> \n> Tom Lane has speculated that some optimizations around our \n> locking code (which had been redone for 7.2) might be the \n> culprit for problems in Cygwin as it apparently was for AIX. \n> He has since fixed the problems at least under AIX.\n> \n> Could you repeat the test with 7.2b4 (out today??)?.\n> \n\nStill the same problem :-(. BTW: I have also updated my Cygwin installation\nto \n\nCYGWIN_NT-5.1 PC20 1.3.6(0.47/3/2) 2001-12-08 17:02 i686 unknown\n\nRegards, Dave\n\n", "msg_date": "Thu, 13 Dec 2001 09:54:57 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" } ]
[ { "msg_contents": "Hi all you hackers :-)\n\nI'm 27 years old. I work on databases purposes in a major french Open Source\ncompany. It is called IDEALX. We made some good stuff in Open Source (like the\n1st free P.K.I. , an XML Editor, etc.. in GPL see idealx.org). Personally, I\nprefer free software, but I prefer work in an Open Source company rather than a\nclassical proprietary computer services company... :-/\n\nI just wanted to greet all people here for such a good job!!!\n\nI preconize PostgreSQL as much as I can to customers. I made several migrations\nfrom proprietary RDBMS, such as Oracle, DB2 and Ingres, to PostgreSQL (thanks\nfor Ora2Pg contributors). \n\nI can announce today that a major distributor in France called « AUCHAN » is\nmigrating all technical NT4 servers (PDC) to linux. Oracle 8.0 is trashed. My\njob is to migrate all Oracle stuff to PostgreSQL. since this group is\ninternational, you'll surely hear about this success story soon.\n\nI am also Oracle certified DBA, for political/business reasons I don't want to\nexpose in this list (not the good place for that). I practise Oracle for 6 years\nnow. I install Oracle under linux only, but only when I've made all my possible\nto make the customer change his mind... At last, I prefer install Oracle under\nlinux rather than Oracle under NT. Next steps are to show the customer that\nPostgreSQL can do the job ;-) \n\nEvery day I'm impressed by PostgreSQL's perfs! I made some benchmarks between \nOracle and PostgreSQL too. I've not studied that much MySQL, PostgreSQL studying\ntakes all my time, and past experiences of big queries taking minutes to run\nfroze me with MySQL....\n\nI'm a C coder too, and I pass all my free time reading PostgreSQL's code and\nreading all I can find on this project. I hope I'll reach some sufficient skills\nto contribute to this project, in the future. Give me some months ;-)\n\nWaiting this, lemme know where I can help. May be traducing documents? 
I'm\nfrench born, may be traducing docs in french? I have to contribute too to\ntechdocs links with giving all my scripts to the cookbook. Mugs and T-Shirts are\nquite good looking but I'd like to contribute another way too.\n\nI can give too strong feedback on migrations.\n\nOnce again, thanks to all for such good job.\n\nBest regards,\n\n-- \nJean-Paul ARGUDO \t\tIDEALX S.A.S\nConsultant bases de données\t\t\t15-17, av. de Ségur\nhttp://idealx.com http://idealx.org\t\tF-75007 PARIS\n", "msg_date": "Thu, 13 Dec 2001 14:14:10 +0100", "msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>", "msg_from_op": true, "msg_subject": "Hi all and greatings" }, { "msg_contents": "----- Original Message ----- \nFrom: Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>\nSent: Thursday, December 13, 2001 8:14 AM\n\n> Waiting this, lemme know where I can help. May be traducing documents? I'm\n> french born, may be traducing docs in french? I have to contribute too to\n> techdocs links with giving all my scripts to the cookbook. Mugs and T-Shirts are\n> quite good looking but I'd like to contribute another way too.\n\nHave you looked at the National Language Support effort\nstarted by Peter Eisentraut? You can help in this area too with\nFrench translations of messages from different PostgreSQL components.\n\nSee: http://webmail.postgresql.org/~petere/nls.php\n\n-s\n\n", "msg_date": "Sun, 16 Dec 2001 02:10:21 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Hi all and greatings" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Jason Tishler [mailto:jason@tishler.net] \n> Sent: 13 December 2001 12:33\n> To: Dave Page\n> Cc: 'pgsql-hackers@postgresql.org'; 'pgsql-cygwin@postgresql.org'\n> Subject: Re: [CYGWIN] Platform Testing - Cygwin\n> \n> \n> Dave,\n> \n> On Wed, Dec 12, 2001 at 10:18:57PM -0000, Dave Page wrote:\n> > Parallel regression tests appear to fail almost randomly. \n> The best I \n> > got so far was 3 failures (out of 79 tests), the worst was \n> about 15. \n> > In particular the horology & misc tests always seems to \n> fail, whilst \n> > the others vary. With the exception of the misc test, all failures \n> > appear to be due to failed connections eg:\n> > \n> > --- 1,3 ----\n> > ! psql: could not connect to server: Connection refused\n> > ! \tIs the server running on host localhost and accepting\n> > ! \tTCP/IP connections on port 65432?\n> \n> The above is a known MS Winsock limitation and is documented \n> in FAQ_MSWIN:\n> \n> 2. make check can generate spurious regression test \n> failures due to\n> overflowing the listen() backlog queue which causes connection\n> refused errors.\n> \n> > System: Windows XP Professional, PIII 850MHz, 512Mb RAM, 32Gb disk\n> ^^^^^^^^^^^^\n> \n> Your system has a backlog limit of 5. Although a little \n> dated, see the following for details:\n> \n> http://support.microsoft.com/support/kb/articles/Q127/1/44.asp\n>\n> Jason\n\nAww nuts. I should have thought of that. 
I'll try again on a Win2K server.\n\nSlap on the wrist for not checking the docs - in my defence I'm recovering\nfrom a rather nasty cold, and I am following on from someone else's reported\nproblem!\n\nThanks Jason,\n\nDave.\n", "msg_date": "Thu, 13 Dec 2001 13:30:05 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Platform Testing - Cygwin" }, { "msg_contents": "Dave,\n\nOn Thu, Dec 13, 2001 at 01:30:05PM -0000, Dave Page wrote:\n> Slap on the wrist for not checking the docs - in my defence I'm recovering\n> from a rather nasty cold, and I am following on from someone else's reported\n> problem!\n\nNo slap is necessary. I'm sorry that my terse response did not indicate\nmy appreciation for taking the time to help out.\n\nThanks,\nJason\n", "msg_date": "Thu, 13 Dec 2001 09:05:28 -0500", "msg_from": "Jason Tishler <jason@tishler.net>", "msg_from_op": false, "msg_subject": "Re: Platform Testing - Cygwin" }, { "msg_contents": "...\n> > The above is a known MS Winsock limitation and is documented\n> > in FAQ_MSWIN:\n...\n> Aww nuts. I should have thought of that. I'll try again on a Win2K server.\n\nJason and Dave, would y'all consider this a tested and supported\nplatform then? 
I'd like to correctly represent this in the ports list in\nthe docs for this release, but don't recall having seen a report such as\n\"Win+Cygwin work as well as they ever have\"...\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 15:08:57 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" }, { "msg_contents": "Thomas,\n\nI have not done any 7.2 testing myself, but Dave reports the following:\n\nOn Wed, Dec 12, 2001 at 10:18:57PM -0000, Dave Page wrote:\n> System: Windows XP Professional, PIII 850MHz, 512Mb RAM, 32Gb disk\n> uname -a: CYGWIN_NT-5.1 PC20 1.3.3(0.46/3/2) 2001-09-12 23:54 i686 unknown\n> \n> Sequential regression tests pass repeatedly.\n\nand:\n\nOn Thu, Dec 13, 2001 at 02:21:08PM -0000, Dave Page wrote:\n> 7.2b4 passes *all* tests both parallel and sequential on Windows 2000\n> Server. \n> \n> On XP Pro, and by the sounds of it, any other non-server releases of\n> Windows, parallel tests will fail randomly due to Winsock backlog limit of 5\n> on these systems (as pointed out by Jason Tishler and documented in\n> FAQ_MSWIN).\n\nOn Thu, Dec 13, 2001 at 03:08:57PM +0000, Thomas Lockhart wrote:\n> Jason and Dave, would y'all consider this a tested and supported\n> platform then? I'd like to correctly represent this in the ports list in\n> the docs for this release, but don't recall having seen a report such as\n> \"Win+Cygwin work as well as they ever have\"...\n\nSince I trust Dave, I feel that the above is an accurate characterization\nof the situation.\n\nJason\n", "msg_date": "Thu, 13 Dec 2001 10:52:14 -0500", "msg_from": "Jason Tishler <jason@tishler.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" }, { "msg_contents": "Got it. 
Thanks.\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 16:05:45 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" } ]
[ { "msg_contents": "Is there a tool that would let me upgrade a database with a new\nschema (without losing data)?\n\nI have a database, x weeks later there is a new schema.. I want the\npossibility to skip that upgrade (if I don't need the added features,\nI don't want to upgrade)... When I decide I want to upgrade (I need\nthe added features or whatever) I should be able to do so without\nlosing data...\n\nThe schema is 'complete' in each upgrade...\n\n\nDo I make any sense?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@bayour.com\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Gothenburg/Sweden\n\nSEAL Team 6 Semtex nitrate Khaddafi Mossad Rule Psix tritium cracking\nsmuggle Serbian quiche Nazi Panama Marxist Ortega\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "13 Dec 2001 14:47:59 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Database upgrades" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Thomas Lockhart [mailto:lockhart@fourpalms.org] \n> Sent: 13 December 2001 05:58\n> To: Dave Page\n> Cc: 'pgsql-hackers@postgresql.org'; 'pgsql-cygwin@postgresql.org'\n> Subject: Re: [CYGWIN] [HACKERS] Platform Testing - Cygwin\n> \n> \n> > Having heard nothing on the list yet about the reported \n> unsuccessful \n> > parallel regression tests on Cygwin with 7.2b3, I thought \n> I'd have a \n> > play myself having found a spare few minutes.\n> \n> Tom Lane has speculated that some optimizations around our \n> locking code (which had been redone for 7.2) might be the \n> culprit for problems in Cygwin as it apparently was for AIX. \n> He has since fixed the problems at least under AIX.\n> \n> Could you repeat the test with 7.2b4 (out today??)?.\n> \n> - Thomas\n\nRight, 7.2b4 passes *all* tests both parallel and sequential on Windows 2000\nServer. \n\nOn XP Pro, and by the sounds of it, any other non-server releases of\nWindows, parallel tests will fail randomly due to Winsock backlog limit of 5\non these systems (as pointed out by Jason Tishler and documented in\nFAQ_MSWIN).\n\nRegards, Dave.\n", "msg_date": "Thu, 13 Dec 2001 14:21:08 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" }, { "msg_contents": "...\n> Right, 7.2b4 passes *all* tests both parallel and sequential on Windows 2000\n> Server.\n> On XP Pro, and by the sounds of it, any other non-server releases of\n> Windows, parallel tests will fail randomly due to Winsock backlog limit of 5\n> on these systems (as pointed out by Jason Tishler and documented in\n> FAQ_MSWIN).\n\nSo ignore the question I sent a minute ago. Thanks for the report!!\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 15:16:15 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" } ]
[ { "msg_contents": "> > Right, 7.2b4 passes *all* tests both parallel and \n> sequential on Windows 2000\n> > Server.\n> > On XP Pro, and by the sounds of it, any other non-server releases of\n> > Windows, parallel tests will fail randomly due to Winsock \n> backlog limit of 5\n> > on these systems (as pointed out by Jason Tishler and documented in\n> > FAQ_MSWIN).\n> \n> So ignore the question I sent a minute ago. Thanks for the report!!\n\nProblem with this report is, that it most certainly is on a single CPU \nsystem. Problems currently only reproduce on SMP, if I read the mails \ncorrectly.\n\nAndreas\n", "msg_date": "Thu, 13 Dec 2001 16:47:10 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" } ]
[ { "msg_contents": "Things have firmed up nicely for the 7.2 release. We currently have\nseveral NetBSD platforms which have not reported success for 7.2, maybe\nonly because they have a relatively small user base (and aging\nprocessors) nowadays. If anyone still has a m68k, VAX or arm running\nNetBSD, let us know!\n\nOh, and a question about Linux/PlayStation2: if it is an x86 processor,\neven without test and set support, should we list it separately? Or is\nit enough like other PCs that it is too fine a distinction to make?\n\n - Thomas\n\nThe problem platforms with comments and questions are:\n\nNetBSD/arm32 Patrick Welche\n Any luck finding time to do the test?\nNetBSD/m68k Bill Studenmund\n Any luck finding another test platform?\nNetBSD/VAX Tom I. Helbekkmo\n Any VAXen out there nowadays?\nSunOS Tatsuo Ishii\n Anyone writing a memcpy for it?\n Tatsuo will be retiring his machine soon.\n\n\nAnd those reported as successful:\n\nAIX Andreas Zeugswetter, Tatsuo, Tom\nBeOS Cyril Velter\nBSD/OS Bruce\nFreeBSD Chris Kings-Lynne\nHPUX Tom, Joseph Conway\nIRIX Luis Amigo\nLinux/Alpha Tom\nLinux/arm Mark Knox\nLinux/MIPS Hisao Shibuya\nLinux/PlayStation2 Permaine Cheung\nLinux/PPC Tom\nLinux/sparc Doug McNaught\nLinux/s390 Permaine Cheung\nLinux/x86 Thomas (and many others ;)\nMacOS-X Gavin Sherry\nNetBSD/Alpha Thomas Thai\nNetBSD/PPC Bill Studenmund\nNetBSD/sparc Matthew Green\nNetBSD/x86 Bill Studenmund\nOpenBSD/sparc Brandon Palmer\nOpenBSD/x86 Brandon Palmer\nOpenUnix Larry Rosenman\nQNX 4 Bernd Tegge\nQNX 6 Igor Kovalenko\nSolaris/sparc Andrew Sullivan\nSolaris/x86 Martin Renters\nTru64 Alessio Bragadini, Bernd Tegge\nWindows/Cygwin Dave Page, Jason Tishler\nWindows/native (clients only) Dave Page\n", "msg_date": "Thu, 13 Dec 2001 16:02:57 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Platform testing (last call?)" }, { "msg_contents": "I have a hp 9000/433s that I can fire up and try if need be.\n\n(1.5.1 
of NetBSD).\n\nLER\n\n* Thomas Lockhart <lockhart@fourpalms.org> [011213 10:15]:\n> Things have firmed up nicely for the 7.2 release. We currently have\n> several NetBSD platforms which have not reported success for 7.2, maybe\n> only because they have a relatively small user base (and aging\n> processors) nowadays. If anyone still has a m68k, VAX or arm running\n> NetBSD, let us know!\n> \n> Oh, and a question about Linux/PlayStation2: if it is an x86 processor,\n> even without test and set support, should we list it separately? Or is\n> it enough like other PCs that it is too fine a distinction to make?\n> \n> - Thomas\n> \n> The problem platforms with comments and questions are:\n> \n> NetBSD/arm32 Patrick Welche\n> Any luck finding time to do the test?\n> NetBSD/m68k Bill Studenmund\n> Any luck finding another test platform?\n> NetBSD/VAX Tom I. Helbekkmo\n> Any VAXen out there nowadays?\n> SunOS Tatsuo Ishii\n> Anyone writing a memcpy for it?\n> Tatsuo will be retiring his machine soon.\n> \n> \n> And those reported as successful:\n> \n> AIX Andreas Zeugswetter, Tatsuo, Tom\n> BeOS Cyril Velter\n> BSD/OS Bruce\n> FreeBSD Chris Kings-Lynne\n> HPUX Tom, Joseph Conway\n> IRIX Luis Amigo\n> Linux/Alpha Tom\n> Linux/arm Mark Knox\n> Linux/MIPS Hisao Shibuya\n> Linux/PlayStation2 Permaine Cheung\n> Linux/PPC Tom\n> Linux/sparc Doug McNaught\n> Linux/s390 Permaine Cheung\n> Linux/x86 Thomas (and many others ;)\n> MacOS-X Gavin Sherry\n> NetBSD/Alpha Thomas Thai\n> NetBSD/PPC Bill Studenmund\n> NetBSD/sparc Matthew Green\n> NetBSD/x86 Bill Studenmund\n> OpenBSD/sparc Brandon Palmer\n> OpenBSD/x86 Brandon Palmer\n> OpenUnix Larry Rosenman\n> QNX 4 Bernd Tegge\n> QNX 6 Igor Kovalenko\n> Solaris/sparc Andrew Sullivan\n> Solaris/x86 Martin Renters\n> Tru64 Alessio Bragadini, Bernd Tegge\n> Windows/Cygwin Dave Page, Jason Tishler\n> Windows/native (clients only) Dave Page\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you 
checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 13 Dec 2001 10:18:33 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "> I have a hp 9000/433s that I can fire up and try if need be.\n> (1.5.1 of NetBSD).\n\nThat is an m68k? If it is on the \"problem list\" (or not mentioned at\nall), then any testing would be great!\n\n - Thomas\n\n> > The problem platforms with comments and questions are:\n> >\n> > NetBSD/arm32 Patrick Welche\n> > Any luck finding time to do the test?\n> > NetBSD/m68k Bill Studenmund\n> > Any luck finding another test platform?\n> > NetBSD/VAX Tom I. Helbekkmo\n> > Any VAXen out there nowadays?\n> > SunOS Tatsuo Ishii\n> > Anyone writing a memcpy for it?\n> > Tatsuo will be retiring his machine soon.\n", "msg_date": "Thu, 13 Dec 2001 16:22:10 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "* Thomas Lockhart <lockhart@fourpalms.org> [011213 10:22]:\n> > I have a hp 9000/433s that I can fire up and try if need be.\n> > (1.5.1 of NetBSD).\n> \n> That is an m68k? If it is on the \"problem list\" (or not mentioned at\n> all), then any testing would be great!\nYes, it's a 33 MhZ 68040.\n\n\n> \n> - Thomas\n> \n> > > The problem platforms with comments and questions are:\n> > >\n> > > NetBSD/arm32 Patrick Welche\n> > > Any luck finding time to do the test?\n> > > NetBSD/m68k Bill Studenmund\n> > > Any luck finding another test platform?\n> > > NetBSD/VAX Tom I. 
Helbekkmo\n> > > Any VAXen out there nowadays?\n> > > SunOS Tatsuo Ishii\n> > > Anyone writing a memcpy for it?\n> > > Tatsuo will be retiring his machine soon.\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 13 Dec 2001 10:22:53 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n\n> Things have firmed up nicely for the 7.2 release. We currently have\n> several NetBSD platforms which have not reported success for 7.2, maybe\n> only because they have a relatively small user base (and aging\n> processors) nowadays. If anyone still has a m68k, VAX or arm running\n> NetBSD, let us know!\n\nHas anyone tried to run it on OS X 10.1? I know there were problems\nwith the compiler changes that Apple put in but I don't recall if\nthere's an easy fix. \n\nIf someone can say \"it should work\" I'll try to get it compiled and\nworking there--I want to take my PowerBook on holiday and do some\nhacking... \n\n> Oh, and a question about Linux/PlayStation2: if it is an x86 processor,\n> even without test and set support, should we list it separately? Or is\n> it enough like other PCs that it is too fine a distinction to make?\n\nI am quite sure PS2 is not x86. It's some kind of RISC with a lot of\nspecial 3d and sound hardware, but I forget which.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "13 Dec 2001 12:17:44 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "...\n> Has anyone tried to run it on OS X 10.1? 
...\n> If someone can say \"it should work\" I'll try to get it compiled and\n> working there--I want to take my PowerBook on holiday and do some\n> hacking...\n\nPretty sure that someone already reported success on 10.1 (10.1.1?). So,\n\"it should work\" ;)\n\n - Thomas\n", "msg_date": "Thu, 13 Dec 2001 17:36:09 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "Thomas Lockhart wrote:\n\n[clip]\n\n> Oh, and a question about Linux/PlayStation2: if it is an x86 processor,\n> even without test and set support, should we list it separately? Or is\n> it enough like other PCs that it is too fine a distinction to make?\n\nThe Playstation2 does not have an X86 processor. It uses a MIPS R5900.\nSony calls this CPU the \"Emotion Engine.\"\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n", "msg_date": "Thu, 13 Dec 2001 13:01:54 -0500", "msg_from": "Neil Padgett <npadgett@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n\n> The Playstation2 does not have an X86 processor. It uses a MIPS R5900.\n> Sony calls this CPU the \"Emotion Engine.\"\n\nNitpick: actually from what I've read that term refers to the\nsuper-nifty graphics chipset, not to the CPU itself, which is fairly\nvanilla.\n\nInteresting that the PG compile doesn't try to use MIPS spinlocks on\nthe PS2...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "13 Dec 2001 13:20:45 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "Doug McNaught wrote:\n\n> Interesting that the PG compile doesn't try to use MIPS spinlocks on\n> the PS2...\n\nIt tries. 
The instructions aren't compatible.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n", "msg_date": "Thu, 13 Dec 2001 15:31:35 -0500", "msg_from": "Neil Padgett <npadgett@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "Doug McNaught wrote:\n> \n> Neil Padgett <npadgett@redhat.com> writes:\n> \n> > The Playstation2 does not have an X86 processor. It uses a MIPS R5900.\n> > Sony calls this CPU the \"Emotion Engine.\"\n> \n> Nitpick: actually from what I've read that term refers to the\n> super-nifty graphics chipset, not to the CPU itself, which is fairly\n> vanilla.\n\nActually, Neil is correct. I had the great experience of\nworking on the tool chain for PlayStation2 a few years back.\nAnd, yes, it was a fun project. The Emotion Engine is the\nMIPS core, the vector units and the arbiter. \nFor more information on the Emotion Engine, check out the \nieee.org website (they have a few good Emotion Engine tech\ndocuments/papers).\n \n> Interesting that the PG compile doesn't try to use MIPS spinlocks on\n> the PS2...\n\nThe PlayStation2 MIPS Core does not have such instructions. \n\nCheers,\nPatrick\n--\nPatrick Macdonald \nRed Hat Database Development\n", "msg_date": "Thu, 13 Dec 2001 15:44:06 -0500", "msg_from": "Patrick Macdonald <patrickm@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "On 13 Dec 2001, Doug McNaught wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> \n> > Things have firmed up nicely for the 7.2 release. We currently have\n> > several NetBSD platforms which have not reported success for 7.2, maybe\n> > only because they have a relatively small user base (and aging\n> > processors) nowadays. If anyone still has a m68k, VAX or arm running\n> > NetBSD, let us know!\n> \n> Has anyone tried to run it on OS X 10.1? 
I know there were problems\n> with the compiler changes that Apple put in but I don't recall if\n> there's an easy fix. \n\n10.1.1 compiles/tests fine. See my email on 28th Nov.\n\n\nGavin\n\n", "msg_date": "Fri, 14 Dec 2001 09:35:29 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> Has anyone tried to run it on OS X 10.1?\n\nYes, fixed and tested by yours truly ... at least on 10.1.\nI have not installed the 10.1.1 update yet. If anyone has,\nit'd be worth doing a quick check.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Dec 2001 17:49:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?) " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Doug McNaught <doug@wireboard.com> writes:\n> > Has anyone tried to run it on OS X 10.1?\n> \n> Yes, fixed and tested by yours truly ... at least on 10.1.\n> I have not installed the 10.1.1 update yet. If anyone has,\n> it'd be worth doing a quick check.\n\nIf I have time tonight I'll give it a try. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "13 Dec 2001 17:57:36 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "> Things have firmed up nicely for the 7.2 release. We currently have\n> several NetBSD platforms which have not reported success for 7.2, maybe\n> only because they have a relatively small user base (and aging\n> processors) nowadays. 
If anyone still has a m68k, VAX or arm running\n> NetBSD, let us know!\n\n<snip>\n\n> And those reported as successful:\n>\n> AIX Andreas Zeugswetter, Tatsuo, Tom\n> BeOS Cyril Velter\n> BSD/OS Bruce\n> FreeBSD Chris Kings-Lynne\n\nShould be FreeBSD/x86 ...\n\nI'm still trying to get my friend's Alpha set up (cables, etc - I will\nhopefully be able to test it before end of betas/RC's)\n\nChris\n\n", "msg_date": "Fri, 14 Dec 2001 10:07:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Doug McNaught <doug@wireboard.com> writes:\n> > Has anyone tried to run it on OS X 10.1?\n> \n> Yes, fixed and tested by yours truly ... at least on 10.1.\n> I have not installed the 10.1.1 update yet. If anyone has,\n> it'd be worth doing a quick check.\n\nInstalled 10.1.1, PG compiles and tests out just fine. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "14 Dec 2001 00:17:19 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Zeugswetter Andreas SB SD [mailto:ZeugswetterA@spardat.at] \n> Sent: 13 December 2001 15:47\n> To: lockhart@fourpalms.org; Dave Page\n> Cc: pgsql-hackers@postgresql.org; pgsql-cygwin@postgresql.org\n> Subject: RE: [CYGWIN] [HACKERS] Platform Testing - Cygwin\n> \n> \n> > > Right, 7.2b4 passes *all* tests both parallel and\n> > sequential on Windows 2000\n> > > Server.\n> > > On XP Pro, and by the sounds of it, any other non-server \n> releases of \n> > > Windows, parallel tests will fail randomly due to Winsock\n> > backlog limit of 5\n> > > on these systems (as pointed out by Jason Tishler and \n> documented in \n> > > FAQ_MSWIN).\n> > \n> > So ignore the question I sent a minute ago. Thanks for the report!!\n> \n> Problem with this report is, that it most certainly is on a \n> single CPU \n> system. Problems currently only reproduce on SMP, if I read the mails \n> correctly.\n> \n> Andreas\n> \n\nAlthough the original test was in Windows XP on a single processor box, the\nfinal tests that all passed were on Windows 2000 Server running on a Dual\nPIII 933MHz box with 1Gb of RAM. The motherboard is an MSI Pro 694D.\n\nRegards, Dave.\n", "msg_date": "Thu, 13 Dec 2001 16:06:04 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" }, { "msg_contents": "\nDave Page wrote:\n\n>Although the original test was in Windows XP on a single processor box, the\n>final tests that all passed were on Windows 2000 Server running on a Dual\n>PIII 933MHz box with 1Gb of RAM. 
The motherboard is an MSI Pro 694D.\n>\nHas anyone done any tests comparing PostgreSQL on Win32 and *NIX \nplatforms on\nsame/similar hardware ?\n\nI suspect that the initial connect could be slower on Win32 due to \nreported slowness of\nfork() there, but are there other issues ?\n\n-------------------\nHannu\n\n\n", "msg_date": "Fri, 14 Dec 2001 23:31:00 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Platform Testing - Cygwin" } ]
[ { "msg_contents": "works for me. doug if you'd like an account on a 10.1 machine i can arrange\nthat. i've got both the client and the server flavors available. there was\nsome bizarre two-level-namespace problem, but it seems to work okay now.\n(update, yeah, just checked again to make sure. 10.1, 10.1.1, and\n10.0-10.0.4 are all okay)..\n\nalex\n\n-----Original Message-----\n\nHas anyone tried to run it on OS X 10.1? I know there were problems\nwith the compiler changes that Apple put in but I don't recall if\nthere's an easy fix. \n\n", "msg_date": "Thu, 13 Dec 2001 13:26:46 -0500", "msg_from": "Alex Avriette <a_avriette@acs.org>", "msg_from_op": true, "msg_subject": "Re: Platform testing (last call?)" }, { "msg_contents": "Alex Avriette <a_avriette@acs.org> writes:\n\n> works for me. doug if you'd like an account on a 10.1 machine i can arrange\n> that. i've got both the client and the server flavors available. there was\n> some bizarre two-level-namespace problem, but it seems to work okay now.\n> (update, yeah, just checked again to make sure. 10.1, 10.1.1, and\n> 10.0-10.0.4 are all okay)..\n\nThanks for the offer. I have a PB that I just updated to 10.1 so I'm\ngoing to try to get it running there, probably tonight.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "13 Dec 2001 13:27:08 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Platform testing (last call?)" } ]
[ { "msg_contents": "oh, if youre going to do testing tonight, one thing i didnt do.. apple has\ntheir new 'developer tools' out, which *will* include a different\ngcc/make/other stuff. so you might try installing that too.\n\nalex\n\n-----Original Message-----\nFrom: Doug McNaught [mailto:doug@wireboard.com]\nSent: Thursday, December 13, 2001 1:27 PM\nTo: Alex Avriette\nCc: lockhart@fourpalms.org; Hackers List; Bill Studenmund;\nprlw1@cam.ac.uk; tih@kpnQwest.no; Tatsuo Ishii; Permaine Cheung\nSubject: Re: [HACKERS] Platform testing (last call?)\n\n\nAlex Avriette <a_avriette@acs.org> writes:\n\n> works for me. doug if you'd like an account on a 10.1 machine i can\narrange\n> that. i've got both the client and the server flavors available. there was\n> some bizarre two-level-namespace problem, but it seems to work okay now.\n> (update, yeah, just checked again to make sure. 10.1, 10.1.1, and\n> 10.0-10.0.4 are all okay)..\n\nThanks for the offer. I have a PB that I just updated to 10.1 so I'm\ngoing to try to get it running there, probably tonight.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "Thu, 13 Dec 2001 13:46:38 -0500", "msg_from": "Alex Avriette <a_avriette@acs.org>", "msg_from_op": true, "msg_subject": "Re: Platform testing (last call?)" } ]
[ { "msg_contents": "> I want to compile libpgtcl on a AIX 4.3.2 RS6000 but I have \n> problem at the linking phase.\n> I ran the ./configure like that:\n> $ sh ./configure --with-CC=gcc --with-includes=/usr/local/include\n> --with-libraries=/usr/local/lib --with-tclconfig=/usr/local/lib\n> --with-tkconfig=/usr/local/lib\n> \n> After everything whent fine until time for libpgtcl.so I had those errors.\n> Can you help me\n> Thank you\n> Jacques Talbot\n> \n> \n> gcc -O2 -pipe -Wall -Wmissing-prototypes \n> -Wmissing-declarations -Wl,-H512\n> -Wl,-bM:SRE -Wl,-bI:../../../src/backend/postgres.imp \n> -Wl,-bE:libpgtcl.exp -o\n> libpgtcl.so libpgtcl.a -L../../../src/interfaces/libpq -lpq -lc\n.....\n> ld: 0711-317 ERROR: Undefined symbol: .Tcl_DStringInit\n> ld: 0711-317 ERROR: Undefined symbol: .Tcl_DStringStartSublist\n> collect2: ld returned 8 exit status\n> make[3]: *** [libpgtcl.so] Error 1\n\nconfigure unfortunately does not include the tcl library \nfor linking libpgtcl.so on AIX :-(\n\nIn the meantime Jacques, your tcl installation should have a shared \ntcl library (libtcl8.2.so) or better yet an *.exp, which you can include\nin the above link command. 
\n\nIn the shell manually execute the link, like:\n\ncd src/pl/tcl\ngcc -O2 -pipe -Wall -Wmissing-prototypes \\\n -Wmissing-declarations -Wl,-H512 \\\n -Wl,-bM:SRE -Wl,-bI:../../../src/backend/postgres.imp \\\n -Wl,-bE:libpgtcl.exp -o \\\n libpgtcl.so libpgtcl.a -L../../../src/interfaces/libpq -lpq -lc \\ \n -L/usr/local/lib -Wl,-bI:/usr/local/lib/libtcl8.2.exp\n\nThe above addition could be generated with TCL_LIB_SPEC.\nThe -L/usr/local/lib is also essential to avoid runtime hassles,\nwhy is this actually not automatic, since --with-libraries was used ?\n\nWould a patch for src/pl/tcl/Makefile still be accepted for 7.2 ?\n\nAndreas\n", "msg_date": "Fri, 14 Dec 2001 10:00:49 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem compiling postgres sql --with-tcl" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Would a patch for src/pl/tcl/Makefile still be accepted for 7.2 ?\n\nIf it's simple and low-risk, probably so. Please submit it anyway,\nso we'll have it on file even if the decision is to hold it till\n7.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Dec 2001 10:38:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem compiling postgres sql --with-tcl " } ]
[ { "msg_contents": "while explaining this query.\n\nWe're getting this strange result:\nLimit (cost=840340.22..840340.22 rows=6 width=53) (actual\ntime=-3035.03..-3034.97 rows=3 loops=1)\n -> Sort (cost=840340.22..840340.22 rows=7 width=53) (actual\ntime=-3035.07..-3035.05 rows=3 loops=1)\n -> Aggregate (cost=840339.80..840340.13 rows=7 width=53)\n(actual time=-3037.55..-3036.18 rows=3 loops=1)\n -> Group (cost=840339.80..840339.96 rows=67 width=53)\n(actual time=-3038.21..-3036.80 rows=37 loops=1)\n -> Sort (cost=840339.80..840339.80 rows=67\nwidth=53) (actual time=-3038.26..-3038.00 rows=37 loops=1)\n -> Nested Loop (cost=6.88..840337.78 rows=67\nwidth=53) (actual time=532.35..-3040.17 rows=37 loops=1)\n -> Hash Join (cost=6.88..839378.23\nrows=201 width=49) (actual time=531.00..-3094.78 rows=86 loops=1)\n -> Seq Scan on lineitem l1\n(cost=0.00..839343.72 rows=5025 width=8) (actual time=45.33..-3151.70\nrows=3079 loops=1)\n SubPlan\n -> Index Scan using\nlineitem_pkey on lineitem l2 (cost=0.00..18.33 rows=5 width=172)\n(actual time=0.20..0.20 rows=1 loops=30334)\n -> Index Scan using\nlineitem_pkey on lineitem l3 (cost=0.00..18.34 rows=2 width=172)\n(actual time=0.24..0.24 rows=1 loops=29240)\n -> Hash (cost=6.87..6.87 rows=4\nwidth=41) (actual time=5.78..5.78 rows=0 loops=1)\n -> Hash Join\n(cost=1.31..6.87 rows=4 width=41) (actual time=1.46..5.71 rows=3\nloops=1)\n -> Seq Scan on\nsupplier (cost=0.00..5.00 rows=100 width=37) (actual time=0.06..3.24\nrows=100 loops=1)\n -> Hash\n(cost=1.31..1.31 rows=1 width=4) (actual time=0.61..0.61 rows=0 loops=1)\n\n -> Seq Scan on\nnation (cost=0.00..1.31 rows=1 width=4) (actual time=0.47..0.56 rows=1\nloops=1)\n -> Index Scan using orders_pkey on\norders (cost=0.00..4.76 rows=1 width=4) (actual time=0.57..0.58 rows=0\nloops=86)\nTotal runtime: -3032.37 msec\nCan someone explain this?", "msg_date": "Fri, 14 Dec 2001 10:48:36 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "can 
someone explain that?" }, { "msg_contents": "On Fri, Dec 14, 2001 at 10:48:36AM +0100, Luis Amigo wrote:\n> while explaining this query.\n> \n> We're getting this strange result:\n> Limit (cost=840340.22..840340.22 rows=6 width=53) (actual\n> time=-3035.03..-3034.97 rows=3 loops=1)\n\nHow long did this query take? Was it over an hour?\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> Terrorists can only take my life. Only my government can take my freedom.\n", "msg_date": "Fri, 14 Dec 2001 21:43:45 +1100", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: can someone explain that?" }, { "msg_contents": "Martijn van Oosterhout wrote:\n\n> On Fri, Dec 14, 2001 at 10:48:36AM +0100, Luis Amigo wrote:\n> > while explaining this query.\n> >\n> > We're getting this strange result:\n> > Limit (cost=840340.22..840340.22 rows=6 width=53) (actual\n> > time=-3035.03..-3034.97 rows=3 loops=1)\n>\n> How long did this query take? Was it over an hour?\n> --\n> Martijn van Oosterhout <kleptog@svana.org>\n> http://svana.org/kleptog/\n> > Terrorists can only take my life. Only my government can take my freedom.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n18sec\nRegards\n\n", "msg_date": "Fri, 14 Dec 2001 13:32:11 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: can someone explain that?" } ]
[ { "msg_contents": "So was anything decided?\n\nWill postmaster have a \"-C\" option to specify a configuration file?\nWithin this file will one be allowed to specify data directory and/or hba\nconfig file?\n", "msg_date": "Fri, 14 Dec 2001 05:50:21 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Explicit config file, status?" } ]
[ { "msg_contents": "\n\n A simple question: when is last day when is possible send some text\n for 7.2? (or is now too late?:-)\n\n Karel\n\n PS. I have and still work on some Czech translations.\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 14 Dec 2001 12:01:34 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "deadline date for .po translators" }, { "msg_contents": "Karel Zak writes:\n\n> A simple question: when is last day when is possible send some text\n> for 7.2? (or is now too late?:-)\n\nBefore RC1 preferably. Spelling fixes possibly thereafter.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 14 Dec 2001 16:18:20 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: deadline date for .po translators" } ]
[ { "msg_contents": "Hi Dave,\n\nIt is observed that while creating table, once we define the attributes and\ntheir properties, we can not change the attribute type. e.g. If we have\ndefined an attribute of type int. Now if one want to change it to numeric,\nit is not allowed thorugh the pgAdmin GUI. However we have the option of\nfurther addition of the attribute to the table. Also it is not possible to\nremove the present attribute(s) in the table.\n\nIs this a valid to have this kind of restriction if one want to do similar\nchange?? I think either we have to go to the psql prompt and use some\ncommand, or refine the whole table after dropping the old one. \n\nIs there any solution to this through GUI, as it is easy to do it.\n\n-anil.\n\n\n", "msg_date": "Fri, 14 Dec 2001 18:34:06 +0530", "msg_from": "Anil Jangam <anilj@indts.com>", "msg_from_op": true, "msg_subject": "Requirement for pgAdmin browser." }, { "msg_contents": "Le Vendredi 14 Décembre 2001 14:04, vous avez écrit :\n> It is observed that while creating table, once we define the attributes and\n> their properties, we can not change the attribute type. e.g. If we have\n> defined an attribute of type int. Now if one want to change it to numeric,\n> it is not allowed thorugh the pgAdmin GUI. However we have the option of\n> further addition of the attribute to the table. Also it is not possible to\n> remove the present attribute(s) in the table.\n\nPostgreSQL does not allow table modification. The community have been waiting \nfor an ALTER TABLE DROP COLUMN and type PROMOTION / DEMOTION, for a long \ntime. Same as for Views and triggers : it is impossible to alter them... The \npolitics of PostgreSQL is to offer advanced features before basic ones.\n\nIs anyone working seriously on those issues?\n\n>> Is this a valid to have this kind of restriction if one want to do similar\n> change?? 
I think either we have to go to the psql prompt and use some\n> command, or refine the whole table after dropping the old one.\n> Is there any solution to this through GUI, as it is easy to do it.\n \nThe only workaround by now is to use CREATE TABLE AS and dropping the old \ntable.\n\nIn pgAdmin I, we used to offer a development mode where you could modify all \nparameters. Then the changes were applied to the database in a pseudo \ncompilation (drop/create). Then, we decided to wait for real ALTER features \nin PostgreSQL. We are still waiting for them.\n\nBest regards\nJean-Michel POURE\n", "msg_date": "Sat, 15 Dec 2001 15:56:23 +0100", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: Requirement for pgAdmin browser." }, { "msg_contents": "> Le Vendredi 14 Décembre 2001 14:04, vous avez écrit :\n> > It is observed that while creating table, once we define the attributes and\n> > their properties, we can not change the attribute type. e.g. If we have\n> > defined an attribute of type int. Now if one want to change it to numeric,\n> > it is not allowed thorugh the pgAdmin GUI. However we have the option of\n> > further addition of the attribute to the table. Also it is not possible to\n> > remove the present attribute(s) in the table.\n> \n> PostgreSQL does not allow table modification. The community have been waiting \n> for an ALTER TABLE DROP COLUMN and type PROMOTION / DEMOTION, for a long \n> time. Same as for Views and triggers : it is impossible to alter them... The \n> politics of PostgreSQL is to offer advanced features before basic ones.\n> \n> Is anyone working seriously on those issues?\n\nAgreed. We couldn't decide the best way to do it, so we did nothing. I\nhave hopes for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 15 Dec 2001 13:06:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Requirement for pgAdmin browser." } ]
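The CREATE TABLE AS workaround discussed in this thread might look like the following sketch (table, column, and type names here are hypothetical, and any indexes, defaults, or permissions on the old table must be recreated by hand afterwards):

```sql
BEGIN;
-- "promote" column i from integer to numeric by rebuilding the table
CREATE TABLE test_new AS SELECT n, i::numeric AS i FROM test;
DROP TABLE test;
ALTER TABLE test_new RENAME TO test;
COMMIT;
```

Wrapping the rebuild in a transaction means concurrent readers see either the old table or the new one, never a half-finished state.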
[ { "msg_contents": "Tom,\n\nare there any reasons why 'explain analyze' produces different numbers ?\n\nI just did dump/reload my test database and for the first time I get:\n\nastronet=# explain analyze select *\nastronet-# from\nastronet-# messages,message_parts\nastronet-# where messages.status_id in (5,8) AND (\nastronet(# message_parts.fts_index @ '{65331}') AND message_parts.msg_id =\nastronet-# messages.msg_id AND ( messages.sections &&\nastronet(# '{1168509,1168511,1168513,1168516,1170263,300000001,300000003,300000004,300000005,300000006,300000007,300000008,300000009,300000010,300000011,300000012,300000013,300000014,300000015,300000016,300000017,300000027,300000028,300000029,300000030,300000031,300000032,300000033,300000034,300000036,300000037,300000038,300000039,300000040,300000041,300000042,300000043,300000044,300000045,300000046,300000047,300000048,300000049,300000050,300000051,300000052,300000053,300000054,300000055,300000056,300000057,300000058,300000059,300000060,300000061,300000062,300000063,300000064,300000065,300000066,300000067,300000068,300000069,300000070,300000071,300000074,300000075,300000077,300000078,300000079,300000080,300000081}');\nNOTICE: QUERY PLAN:\n\nHash Join (cost=47.29..543.28 rows=1 width=812) (actual time=171.42..2594.08 rows=180 loops=1)\n -> Index Scan using fts_idx on message_parts (cost=0.00..495.38 rows=124 width=148) (actual time=1.03..2335.54 rows=226 loops=1)\n -> Hash (cost=47.28..47.28 rows=1 width=664) (actual time=169.95..169.95 rows=0 loops=1)\n -> Index Scan using section_idx on messages (cost=0.00..47.28 rows=1 width=664) (actual time=0.35..155.83 rows=3777 loops=1)\nTotal runtime: 2596.59 msec\n\nEXPLAIN\n\nimmediately repeat query (nothing changed):\n\nastronet=# explain analyze select *\nastronet-# from\nastronet-# messages,message_parts\nastronet-# where messages.status_id in (5,8) AND (\nastronet(# message_parts.fts_index @ '{65331}') AND message_parts.msg_id =\nastronet-# messages.msg_id AND ( 
messages.sections &&\nastronet(# '{1168509,1168511,1168513,1168516,1170263,300000001,300000003,300000004,300000005,300000006,300000007,300000008,300000009,300000010,300000011,300000012,300000013,300000014,300000015,300000016,300000017,300000027,300000028,300000029,300000030,300000031,300000032,300000033,300000034,300000036,300000037,300000038,300000039,300000040,300000041,300000042,300000043,300000044,300000045,300000046,300000047,300000048,300000049,300000050,300000051,300000052,300000053,300000054,300000055,300000056,300000057,300000058,300000059,300000060,300000061,300000062,300000063,300000064,300000065,300000066,300000067,300000068,300000069,300000070,300000071,300000074,300000075,300000077,300000078,300000079,300000080,300000081}');\nNOTICE: QUERY PLAN:\n\nHash Join (cost=47.29..543.28 rows=1 width=812) (actual time=182.29..448.05 rows=180 loops=1)\n -> Index Scan using fts_idx on message_parts (cost=0.00..495.38 rows=124 width=148) (actual time=0.85..176.46 rows=226 loops=1)\n -> Hash (cost=47.28..47.28 rows=1 width=664) (actual time=181.05..181.05 rows=0 loops=1)\n -> Index Scan using section_idx on messages (cost=0.00..47.28 rows=1 width=664) (actual time=0.63..166.68 rows=3777 loops=1)\nTotal runtime: 448.85 msec\n\nEXPLAIN\n\nafter 'vacuum full analyze' another plan:\n\nastronet=# explain analyze select *\nastronet-# from\nastronet-# messages,message_parts\nastronet-# where messages.status_id in (5,8) AND (\nastronet(# message_parts.fts_index @ '{65331}') AND message_parts.msg_id =\nastronet-# messages.msg_id AND ( messages.sections &&\nastronet(# 
'{1168509,1168511,1168513,1168516,1170263,300000001,300000003,300000004,300000005,300000006,300000007,300000008,300000009,300000010,300000011,300000012,300000013,300000014,300000015,300000016,300000017,300000027,300000028,300000029,300000030,300000031,300000032,300000033,300000034,300000036,300000037,300000038,300000039,300000040,300000041,300000042,300000043,300000044,300000045,300000046,300000047,300000048,300000049,300000050,300000051,300000052,300000053,300000054,300000055,300000056,300000057,300000058,300000059,300000060,300000061,300000062,300000063,300000064,300000065,300000066,300000067,300000068,300000069,300000070,300000071,300000074,300000075,300000077,300000078,300000079,300000080,300000081}');\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..364.68 rows=1 width=1039) (actual time=37.10..740.14 rows=180 loops=1)\n -> Index Scan using section_idx on messages (cost=0.00..47.28 rows=3 width=268) (actual time=0.34..175.43 rows=3777 loops=1)\n -> Index Scan using message_parts_mp on message_parts (cost=0.00..102.57 rows=1 width=771) (actual time=0.12..0.14 rows=0 loops=3777)\nTotal runtime: 740.76 msec\n\nEXPLAIN\n\n\nI see 'fts_idx' not used anymore. 
section_idx and fts_idx are gist\nindices (contrib/intarray), message_parts_mp is:\nastronet=# \\d message_parts_mp\nIndex \"message_parts_mp\"\n Column | Type\n---------+---------\n msg_id | integer\n part_id | integer\nbtree\n\n\nMy question: does it normal that output of 'explain analyze' is unstable\neven if nothing was changed.\n\nAlso, I understand and we discussed that currently GiST doesn't provides\nany useful information to optimizer but last explain somehow decided\nto use 'section_idx' index but not use 'fts_idx' !\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 14 Dec 2001 17:23:39 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "7.2 cvs: explain analyze instabilities" }, { "msg_contents": "When the first time query was executed, data was fetched by OS into memory\nbuffers. 
Second time, data was already in-memory.\n\nAnd of course, vacuum updates statistics and gives different plan...\n\n-alex\n\nOn Fri, 14 Dec 2001, Oleg Bartunov wrote:\n\n> Tom,\n> \n> are there any reasons why 'explain analyze' produces different numbers ?\n> \n> I just did dump/reload my test database and for the first time I get:\n> \n> astronet=# explain analyze select *\n> astronet-# from\n> astronet-# messages,message_parts\n> astronet-# where messages.status_id in (5,8) AND (\n> astronet(# message_parts.fts_index @ '{65331}') AND message_parts.msg_id =\n> astronet-# messages.msg_id AND ( messages.sections &&\n> astronet(# '{1168509,1168511,1168513,1168516,1170263,300000001,300000003,300000004,300000005,300000006,300000007,300000008,300000009,300000010,300000011,300000012,300000013,300000014,300000015,300000016,300000017,300000027,300000028,300000029,300000030,300000031,300000032,300000033,300000034,300000036,300000037,300000038,300000039,300000040,300000041,300000042,300000043,300000044,300000045,300000046,300000047,300000048,300000049,300000050,300000051,300000052,300000053,300000054,300000055,300000056,300000057,300000058,300000059,300000060,300000061,300000062,300000063,300000064,300000065,300000066,300000067,300000068,300000069,300000070,300000071,300000074,300000075,300000077,300000078,300000079,300000080,300000081}');\n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=47.29..543.28 rows=1 width=812) (actual time=171.42..2594.08 rows=180 loops=1)\n> -> Index Scan using fts_idx on message_parts (cost=0.00..495.38 rows=124 width=148) (actual time=1.03..2335.54 rows=226 loops=1)\n> -> Hash (cost=47.28..47.28 rows=1 width=664) (actual time=169.95..169.95 rows=0 loops=1)\n> -> Index Scan using section_idx on messages (cost=0.00..47.28 rows=1 width=664) (actual time=0.35..155.83 rows=3777 loops=1)\n> Total runtime: 2596.59 msec\n> \n> EXPLAIN\n> \n> immediately repeat query (nothing changed):\n> \n> astronet=# explain analyze select *\n> astronet-# from\n> 
astronet-# messages,message_parts\n> astronet-# where messages.status_id in (5,8) AND (\n> astronet(# message_parts.fts_index @ '{65331}') AND message_parts.msg_id =\n> astronet-# messages.msg_id AND ( messages.sections &&\n> astronet(# '{1168509,1168511,1168513,1168516,1170263,300000001,300000003,300000004,300000005,300000006,300000007,300000008,300000009,300000010,300000011,300000012,300000013,300000014,300000015,300000016,300000017,300000027,300000028,300000029,300000030,300000031,300000032,300000033,300000034,300000036,300000037,300000038,300000039,300000040,300000041,300000042,300000043,300000044,300000045,300000046,300000047,300000048,300000049,300000050,300000051,300000052,300000053,300000054,300000055,300000056,300000057,300000058,300000059,300000060,300000061,300000062,300000063,300000064,300000065,300000066,300000067,300000068,300000069,300000070,300000071,300000074,300000075,300000077,300000078,300000079,300000080,300000081}');\n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=47.29..543.28 rows=1 width=812) (actual time=182.29..448.05 rows=180 loops=1)\n> -> Index Scan using fts_idx on message_parts (cost=0.00..495.38 rows=124 width=148) (actual time=0.85..176.46 rows=226 loops=1)\n> -> Hash (cost=47.28..47.28 rows=1 width=664) (actual time=181.05..181.05 rows=0 loops=1)\n> -> Index Scan using section_idx on messages (cost=0.00..47.28 rows=1 width=664) (actual time=0.63..166.68 rows=3777 loops=1)\n> Total runtime: 448.85 msec\n> \n> EXPLAIN\n> \n> after 'vacuum full analyze' another plan:\n> \n> astronet=# explain analyze select *\n> astronet-# from\n> astronet-# messages,message_parts\n> astronet-# where messages.status_id in (5,8) AND (\n> astronet(# message_parts.fts_index @ '{65331}') AND message_parts.msg_id =\n> astronet-# messages.msg_id AND ( messages.sections &&\n> astronet(# 
'{1168509,1168511,1168513,1168516,1170263,300000001,300000003,300000004,300000005,300000006,300000007,300000008,300000009,300000010,300000011,300000012,300000013,300000014,300000015,300000016,300000017,300000027,300000028,300000029,300000030,300000031,300000032,300000033,300000034,300000036,300000037,300000038,300000039,300000040,300000041,300000042,300000043,300000044,300000045,300000046,300000047,300000048,300000049,300000050,300000051,300000052,300000053,300000054,300000055,300000056,300000057,300000058,300000059,300000060,300000061,300000062,300000063,300000064,300000065,300000066,300000067,300000068,300000069,300000070,300000071,300000074,300000075,300000077,300000078,300000079,300000080,300000081}');\n> NOTICE: QUERY PLAN:\n> \n> Nested Loop (cost=0.00..364.68 rows=1 width=1039) (actual time=37.10..740.14 rows=180 loops=1)\n> -> Index Scan using section_idx on messages (cost=0.00..47.28 rows=3 width=268) (actual time=0.34..175.43 rows=3777 loops=1)\n> -> Index Scan using message_parts_mp on message_parts (cost=0.00..102.57 rows=1 width=771) (actual time=0.12..0.14 rows=0 loops=3777)\n> Total runtime: 740.76 msec\n> \n> EXPLAIN\n> \n> \n> I see 'fts_idx' not used anymore. 
section_idx and fts_idx are gist\n> indices (contrib/intarray), message_parts_mp is:\n> astronet=# \\d message_parts_mp\n> Index \"message_parts_mp\"\n> Column | Type\n> ---------+---------\n> msg_id | integer\n> part_id | integer\n> btree\n> \n> \n> My question: does it normal that output of 'explain analyze' is unstable\n> even if nothing was changed.\n> \n> Also, I understand and we discussed that currently GiST doesn't provides\n> any useful information to optimizer but last explain somehow decided\n> to use 'section_idx' index but not use 'fts_idx' !\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\n", "msg_date": "Fri, 14 Dec 2001 09:51:35 -0500 (EST)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 cvs: explain analyze instabilities" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> My question: does it normal that output of 'explain analyze' is unstable\n> even if nothing was changed.\n\nCertainly. Buffering issues alone would cause actual runtime of a first\nrun to vary from subsequent runs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Dec 2001 10:24:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 cvs: explain analyze instabilities " } ]
[ { "msg_contents": "See http://www.businessweek.com/bwdaily/dnflash/dec2001/nf20011211_3015.htm \nand figure out why I've posted it here... :-)\n\n(spoiler: a security hardware company is using PostgreSQL in a security \nappliance based on open source software) _In_Business_Week_.:-O\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 14 Dec 2001 09:24:54 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Interesting article." } ]
[ { "msg_contents": "Tom wrote:\n> > Would a patch for src/pl/tcl/Makefile still be accepted for 7.2 ?\n> \n> If it's simple and low-risk, probably so. Please submit it anyway,\n> so we'll have it on file even if the decision is to hold it till\n> 7.3.\n\nUnfortunately the issue is in both interfaces/libpgtcl and pl/tcl.\nlibpgtcl is not prepared for the tclConfig.sh stuff, thus it would be\nmore than a few lines.\n\nSecond I do not understand why the Makefile in pl/tcl is so complicated, \nand not similar e.g. to the plpython one, so the first task should be to \nsimplify the Makefile to use Makefile.shlib, and not cook it's own soup.\n\nBasically:\n\tNAME=pltcl\n\tOBJS=pltcl.o \n\tSHLIB_LINK=$(TCL_LIB_SPEC) $(TCL_LIBS)\n\tall: all-lib\n\nThird unfortunately my daughter is sick and my wife asked me to come home,\nso I won't be able to work on it till Monday :-(\n\nAndreas\n", "msg_date": "Fri, 14 Dec 2001 17:02:17 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem compiling postgres sql --with-tcl " }, { "msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> Second I do not understand why the Makefile in pl/tcl is so complicated,\n> and not similar e.g. to the plpython one, so the first task should be to\n> simplify the Makefile to use Makefile.shlib, and not cook it's own soup.\n\nAll the oddities in that makefile are there because presumably some system\nneeded them in the past. Personally, I think the whole makefile is one\nbig bug, but I wasn't too eager to touch it because it seemed to work.\nNow that we have an actual case where it doesn't I guess we need to look\nat it.\n\n> Basically:\n> \tNAME=pltcl\n> \tOBJS=pltcl.o\n> \tSHLIB_LINK=$(TCL_LIB_SPEC) $(TCL_LIBS)\n> \tall: all-lib\n\nThat seems about right. 
Please try it out at your convenience.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 15 Dec 2001 19:08:22 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Problem compiling postgres sql --with-tcl " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Zeugswetter Andreas SB SD writes:\n>> Second I do not understand why the Makefile in pl/tcl is so complicated,\n>> and not similar e.g. to the plpython one, so the first task should be to\n>> simplify the Makefile to use Makefile.shlib, and not cook it's own soup.\n\n> All the oddities in that makefile are there because presumably some system\n> needed them in the past.\n\nNo, it's a historical thing: the Makefile.shlib stuff didn't exist when\npltcl was developed. I'm not entirely sure why I didn't try to fold\npltcl in with the Makefile.shlib approach when we started doing that.\nPossibly I was just thinking \"don't fix what ain't broken\". But now\nI'd vote for changing over.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Dec 2001 13:19:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem compiling postgres sql --with-tcl " } ]
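Fleshed out against the shared shlib machinery, the four-line sketch from this thread might look roughly like the following (untested; the surrounding boilerplate is assumed from how other 7.2 makefiles such as plpython's are structured, not taken from an actual working src/pl/tcl/Makefile):

```makefile
# Sketch only -- assumes the usual Makefile.global / Makefile.shlib
# conventions of the 7.2 source tree.
subdir = src/pl/tcl
top_builddir = ../../..
include $(top_builddir)/src/Makefile.global

NAME = pltcl
OBJS = pltcl.o
SHLIB_LINK = $(TCL_LIB_SPEC) $(TCL_LIBS)

include $(top_srcdir)/src/Makefile.shlib

all: all-lib
```

Using TCL_LIB_SPEC here is what would pull in the tcl library path that the AIX link was missing.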
[ { "msg_contents": "Hi,\n\n the guy under the nickname WhiteNinja: These idiots at GMX\n again don't accept mail from Yahoo. Do you have another\n address?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 14 Dec 2001 15:23:16 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": true, "msg_subject": "Mail for WhiteNinja" } ]
[ { "msg_contents": "Hi,\n\n I'm trying to clean up a bug in pg_dump where specifying a table\nwith the '-t tablename' argument fails to generate the necessary\nCREATE SEQUENCE statements for columns of type SERIAL that are not\nnamed \"id\" (example at bottom of email).\n\n So... The gist of the problem is that there /appears/ to be no \ndirect way to determine the sequence(s) referenced in any nextval(...)\ncolumn defaults. Below is only relationship I've found between the \ntable \"test2\" and the SERIAL-created sequence \"test2_i_seq\".\n\nbrent=# select adsrc from pg_attrdef \nbrent-# where adrelid=(select oid from pg_class where relname='test2');\n adsrc \n--------------------------------\n nextval('\"test2_i_seq\"'::text)\n(1 row)\n\nHave I missed a more basic/straightforward relationship between these\ntwo in the system catalogs?\n\n\nI propose adding a function to pg_dump.c for now. I'll work on putting\nthis knowledge into the backend post-7.2, and toward solving the \nDROP TABLE automatically dropping SERIAL-created sequences problem.\n\nthanks.\n brent\n\n======================================================================\nsleepy:/usr/local/pg-7.2/bin\nbrent$ ./psql -c '\\d test2'\n Table \"test2\"\n Column | Type | Modifiers \n--------+-----------------------+-------------------------------------------------\n n | character varying(32) | \n i | integer | not null default nextval('\"test2_i_seq\"'::text)\nUnique keys: test2_i_key\n\nsleepy:/usr/local/pg-7.2/bin\nbrent$ ./pg_dump -t test2 brent\n--\n-- Selected TOC Entries:\n--\n\\connect - brent\n\n--\n-- TOC Entry ID 2 (OID 16571)\n--\n-- Name: test2 Type: TABLE Owner: brent\n--\n\nCREATE TABLE \"test2\" (\n \"n\" character varying(32),\n \"i\" integer DEFAULT nextval('\"test2_i_seq\"'::text) NOT NULL\n);\n\n--\n-- Data for TOC Entry ID 4 (OID 16571)\n--\n-- Name: test2 Type: TABLE DATA Owner: brent\n--\n\n\nCOPY \"test2\" FROM stdin;\n\\.\n--\n-- TOC Entry ID 3 (OID 16573)\n--\n-- Name: 
\"test2_i_key\" Type: INDEX Owner: brent\n--\n\nCREATE UNIQUE INDEX test2_i_key ON test2 USING btree (i);\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Fri, 14 Dec 2001 22:53:27 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "system catalog relation of a table and a serial sequence" }, { "msg_contents": "Brent Verner wrote:\n\n> I propose adding a function to pg_dump.c for now. I'll work on putting\n> this knowledge into the backend post-7.2, and toward solving the\n> DROP TABLE automatically dropping SERIAL-created sequences problem.\n\nThen some mechanism should be devised for disallowing other \ntables/functions/... from using these sequences.\n\n---------------\nHannu\n", "msg_date": "Sat, 15 Dec 2001 13:05:19 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "[2001-12-14 22:53] Brent Verner said:\n| I propose adding a function to pg_dump.c for now.\n\nPatch adding a getSerialSequenceName() function to pg_dump.[ch] is\nattached.\n\nI'm aware that this is not the /best/ solution to this problem, but\nit is better than the current breakage in pg_dump.\n\nfeedback appreciated.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 15 Dec 2001 20:32:56 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "[2001-12-15 20:32] Brent Verner said:\n| [2001-12-14 22:53] Brent Verner said:\n| | I propose adding a function to pg_dump.c for now.\n| \n| Patch adding a getSerialSequenceName() function to pg_dump.[ch] is\n| attached.\n\n...too quick on the send. Patch attached for real this time ;-)\n\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman", "msg_date": "Sat, 15 Dec 2001 20:33:50 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> I'm aware that this is not the /best/ solution to this problem, but\n> it is better than the current breakage in pg_dump.\n\nI'd dispute that, primarily because the patch blithely assumes that\nthere is no other kind of default value than a serial-created nextval().\nIt looks to me like it will either coredump or do the wrong thing\nwith other default-value strings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Dec 2001 21:02:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: system catalog relation of a table and a serial sequence " }, { "msg_contents": "[2001-12-15 21:02] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > I'm aware that this is not the /best/ solution to this problem, but\n| > it is better than the current breakage in pg_dump.\n| \n| I'd dispute that, primarily because the patch blithely 
assumes that\n| there is no other kind of default value than a serial-created nextval().\n| It looks to me like it will either coredump or do the wrong thing\n| with other default-value strings.\n\nmonkey me! Yes, quite a nasty oversight! I'll clean this up.\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 15 Dec 2001 21:10:05 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "While you're at it, why not fix the code so that it can deal with\nmultiple SERIALs attached to a table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Dec 2001 21:12:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: system catalog relation of a table and a serial sequence " }, { "msg_contents": "[2001-12-15 21:12] Tom Lane said:\n| While you're at it, why not fix the code so that it can deal with\n| multiple SERIALs attached to a table?\n\nwill do. I'd appreciate a bit of advice on both of the issues to\nbe addressed.\n\n1) Is a strcmp(firststrtok,\"nextval('\") == 0 sufficient to determine\n that the adsrc is indeed one that we're looking for? If not, \n suggestions are greatly appreciated :-)\n\n2) Should this function now look like .. ?\n char** getSerialSequenceNames(const char* table)\n Or would you suggest it return a smarter struct?\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 15 Dec 2001 21:29:07 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> 1) Is a strcmp(firststrtok,\"nextval('\") == 0 sufficient to determine\n> that the adsrc is indeed one that we're looking for? If not, \n> suggestions are greatly appreciated :-)\n\nI would not use strtok at all, but look for nextval('\" at the start\nof the string and \"'::text) at the end. If both match, and there's\nat least one character between, then the intervening text can be\npresumed to be a sequence name. You might further check that the\napparent sequence name ends with _seq --- if not, it wasn't generated\nby SERIAL.\n\n> 2) Should this function now look like .. ?\n> char** getSerialSequenceNames(const char* table)\n> Or would you suggest it return a smarter struct?\n\nchar** (null-terminated vector) would probably work.\n\nBTW, don't forget you have the OID of the table available from the table\nlist, so you can avoid the subselect, as well as the relname quoting\nissues that you didn't take care of. When a tablename argument is\nprovided, I'd be inclined to make a pre-pass over the table list to see\nif it matches any non-sequence table names, and if so build a list of\ntheir associated sequence name(s). 
Keep in mind that we'll probably\ngeneralize the tablename argument to support wildcarding someday soon,\nso it'd be good if the code could cope with more than one matching\ntable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Dec 2001 21:43:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: system catalog relation of a table and a serial sequence " }, { "msg_contents": "[2001-12-15 21:43] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > 1) Is a strcmp(firststrtok,\"nextval('\") == 0 sufficient to determine\n| > that the adsrc is indeed one that we're looking for? If not, \n| > suggestions are greatly appreciated :-)\n| \n| I would not use strtok at all, but look for nextval('\" at the start\n| of the string and \"'::text) at the end. If both match, and there's\n| at least one character between, then the intervening text can be\n| presumed to be a sequence name. You might further check that the\n| apparent sequence name ends with _seq --- if not, it wasn't generated\n| by SERIAL.\n\nWhy not use strtok? The following should be safe, no?\n\n t1 = strtok(adsrc,\"\\\"\");\n t2 = strtok(NULL,\"\\\"\");\n t3 = strtok(NULL,\"\\\"\");\n\n if( t1 && t3\n && strcmp(t1,\"nextval('\") == 0\n && strcmp(t3,\"'::text)\") == 0 ){\n /* this is a call to nextval, check for t2 =~ /_seq$/ */\n \n }\n\n| > 2) Should this function now look like .. ?\n| > char** getSerialSequenceNames(const char* table)\n| > Or would you suggest it return a smarter struct?\n| \n| char** (null-terminated vector) would probably work.\n| \n| BTW, don't forget you have the OID of the table available from the table\n| list, so you can avoid the subselect, as well as the relname quoting\n| issues that you didn't take care of. When a tablename argument is\n| provided, I'd be inclined to make a pre-pass over the table list to see\n| if it matches any non-sequence table names, and if so build a list of\n| their associated sequence name(s). 
Keep in mind that we'll probably\n| generalize the tablename argument to support wildcarding someday soon,\n| so it'd be good if the code could cope with more than one matching\n| table.\n\ngotcha.\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 15 Dec 2001 23:00:06 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "[2001-12-15 22:26] Rod Taylor said:\n| \n| > Brent Verner <brent@rcfile.org> writes:\n| > > 1) Is a strcmp(firststrtok,\"nextval('\") == 0 sufficient to\n| determine\n| > > that the adsrc is indeed one that we're looking for? If not,\n| > > suggestions are greatly appreciated :-)\n| >\n| > I would not use strtok at all, but look for nextval('\" at the start\n| > of the string and \"'::text) at the end. If both match, and there's\n| > at least one character between, then the intervening text can be\n| > presumed to be a sequence name. You might further check that the\n| > apparent sequence name ends with _seq --- if not, it wasn't\n| generated\n| > by SERIAL.\n| \n| Wouldn't you want to include user sequences that are required for\n| using the table? If someone has used their own sequence as the\n| default value for a column it would be nice to have it dumped as well.\n\nThis is my thought as well. Hopefully Tom will concur.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 15 Dec 2001 23:03:32 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] system catalog relation of a table and a serial\n sequence" }, { "msg_contents": "> | > You might further check that the\n> | > apparent sequence name ends with _seq --- if not, it wasn't\n> | > generated by SERIAL.\n> | \n> | Wouldn't you want to include user sequences that are required for\n> | using the table? If someone has used their own sequence as the\n> | default value for a column it would be nice to have it dumped as well.\n\n> This is my thought as well. Hopefully Tom will concur.\n\nWell, that's why I said \"might\". I'm not sure what the correct behavior\nis here. If we had an actual SERIAL datatype --- that is, we could\nunambiguously tell that a given column was SERIAL --- then a case could\nbe made that \"pg_dump -t table\" should dump only those sequences\nassociated with table's SERIAL columns.\n\nI think it'd be a bit surprising if \"pg_dump -t table\" would dump\nsequences declared independently of the table. An example where you'd\nlikely not be happy with that is if the same sequence is being used to\nfeed multiple tables.\n\nI agree that dumping all such sequences will often be the desired\nbehavior, but that doesn't leave me convinced that it's the right\nthing to do.\n\nAny comments out there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Dec 2001 23:17:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] system catalog relation of a table and a serial\n\tsequence" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> Why not use strtok?\n\nWell, it's ugly (I don't like non-reentrant library routines), it's\nnot really buying anything, and I don't think you've got the corner\ncases right anyway. 
I'd go for something like\n\n\tif (strlen(adsrc) > 19 &&\n\t strncmp(adsrc, \"nextval('\\\"\", 10) == 0 &&\n\t strcmp(adsrc + strlen(adsrc) - 9, \"\\\"'::text)\") == 0)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Dec 2001 23:25:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: system catalog relation of a table and a serial sequence " }, { "msg_contents": "[2001-12-15 23:17] Tom Lane said:\n| > | > You might further check that the\n| > | > apparent sequence name ends with _seq --- if not, it wasn't\n| > | > generated by SERIAL.\n| > | \n| > | Wouldn't you want to include user sequences that are required for\n| > | using the table? If someone has used their own sequence as the\n| > | default value for a column it would be nice to have it dumped as well.\n| \n| > This is my thought as well. Hopefully Tom will concur.\n| \n| Well, that's why I said \"might\". I'm not sure what the correct behavior\n| is here. If we had an actual SERIAL datatype --- that is, we could\n| unambiguously tell that a given column was SERIAL --- then a case could\n| be made that \"pg_dump -t table\" should dump only those sequences\n| associated with table's SERIAL columns.\n| \n| I think it'd be a bit surprising if \"pg_dump -t table\" would dump\n| sequences declared independently of the table. An example where you'd\n| likely not be happy with that is if the same sequence is being used to\n| feed multiple tables.\n| \n| I agree that dumping all such sequences will often be the desired\n| behavior, but that doesn't leave me convinced that it's the right\n| thing to do.\n| \n| Any comments out there?\n\nsure :-) What we can do is determine /any/ sequence referenced \nby a nextval(..) 
attribute default with the following SELECT query.\n\ncreate sequence non_serial_sequence;\ncreate table aaa ( \n id serial, \n nonid int default nextval('non_serial_sequence') \n);\nSELECT adsrc FROM pg_attrdef WHERE adrelid=(\n SELECT oid FROM pg_class WHERE relname='aaa'\n);\n\n adsrc \n--------------------------------------\n nextval('\"aaa_id_seq\"'::text)\n nextval('non_serial_sequence'::text)\n\nWe get the nextval(..) calls to both of the referenced sequences,\nand the strtok code I'm using extracts the proper sequence names.\nAm I overlooking something here? Is there any other way a nextval(..)\nadsrc would appear not containing a sequence related to this relation?\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 15 Dec 2001 23:38:19 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] system catalog relation of a table and a serial\n sequence" }, { "msg_contents": "[2001-12-15 23:25] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > Why not use strtok?\n| \n| Well, it's ugly (I don't like non-reentrant library routines), it's\n| not really buying anything, and I don't think you've got the corner\n| cases right anyway. I'd go for something like\n\nHow about strtok_r? I /really/ like the fact that strtok will\neat either of the tokens ['\"] that might be around the sequence\nname... just call me lazy :-)\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 15 Dec 2001 23:52:30 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> How about strtok_r? I /really/ like the fact that strtok will\n> eat either of the tokens ['\"] that might be around the sequence\n> name... just call me lazy :-)\n\nThat behavior creates one of the \"corner cases\" I was alluding to.\nShall I leave the difficulty as an exercise for the student?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Dec 2001 00:42:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: system catalog relation of a table and a serial sequence " }, { "msg_contents": "[2001-12-16 00:42] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > How about strtok_r? I /really/ like the fact that strtok will\n| > eat either of the tokens ['\"] that might be around the sequence\n| > name... just call me lazy :-)\n| \n| That behavior creates one of the \"corner cases\" I was alluding to.\n| Shall I leave the difficulty as an exercise for the student?\n\nsure. I'm assuming the strtok_r is not acceptable :-) I'll work on\nthis a bit more tonight. Expect a better patch sometime tomorrow.\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sun, 16 Dec 2001 01:20:18 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "At 01:20 16/12/01 -0500, Brent Verner wrote:\n>\n> Expect a better patch sometiime tomorrow.\n>\n\nThe hard part in this might be getting pg_restore to know that the sequence\nis part of the table definition; perhaps it is adequate to use the 'deps'\nfield. \n\nI think it is currently unused for SEQUENCE (and SEQUENCE SET) entries, so\nwe could assume that if it is set, the sequence is logically part of a\ntable. You could set the deps to the table OID, then the restore operation\n(_tocEntryRequired) could scan the TOC for a matching table, and see if the\nmatching table is being restored (ie. _tocEntryRequired would, in the case\nof 'SEQUENCE' and 'SEQUENCE SET' entries, scan the entire TOC for the\nmatching table then call itself recursively on the table entry.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 16 Dec 2001 18:12:56 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: system catalog relation of a table and a serial" }, { "msg_contents": "On Sun, 2001-12-16 at 17:17, Tom Lane wrote:\n> > | > You might further check that the\n> > | > apparent sequence name ends with _seq --- if not, it wasn't\n> > | > generated by SERIAL.\n> > | \n> > | Wouldn't you want to include user sequences that are required for\n> > | using the table? If someone has used their own sequence as the\n> > | default value for a column it would be nice to have it dumped as well.\n> \n> > This is my thought as well. Hopefully Tom will concur.\n> \n> Well, that's why I said \"might\". I'm not sure what the correct behavior\n> is here. If we had an actual SERIAL datatype --- that is, we could\n> unambiguously tell that a given column was SERIAL --- then a case could\n> be made that \"pg_dump -t table\" should dump only those sequences\n> associated with table's SERIAL columns.\n> \n> I think it'd be a bit surprising if \"pg_dump -t table\" would dump\n> sequences declared independently of the table. 
An example where you'd\n> likely not be happy with that is if the same sequence is being used to\n> feed multiple tables.\n> \n> I agree that dumping all such sequences will often be the desired\n> behavior, but that doesn't leave me convinced that it's the right\n> thing to do.\n> \n> Any comments out there?\n\nAlong with \"DROP COLUMN\" this is probably one of the biggest \"I can't\nbelieve it doesn't\" things out there.\n\nI would tend to say that Brent's patch, in dumping all of the sequences\nused by a table, is erring on the _correct_ side of caution.\n\nRemember that someone who this is a problem for can easily post-process\nthe sequence out of the dump with sed or something, but someone for whom\nthe opposite is true doesn't have anything like as trivial a job to put\nit back in there.\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n\n", "msg_date": "16 Dec 2001 20:25:00 +1300", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] system catalog relation of a table and a" }, { "msg_contents": "[2001-12-16 00:42] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > How about strtok_r? I /really/ like the fact that strtok will\n| > eat either of the tokens ['\"] that might be around the sequence\n| > name... just call me lazy :-)\n| \n| That behavior creates one of the \"corner cases\" I was alluding to.\n| Shall I leave the difficulty as an exercise for the student?\n\nOk... I ended up working longer than I'd thought :-)\n\n* no strtok were used in this patch. ;-)\n* Handles both serial-sequences and user-sequences referenced in \n nextval(...) 
default column defs.\n* Loop over tables so we can check wildcard table name in the future\n per your suggestion. I've only noted a TODO: regarding the wildcard\n matching.\n* Instead of using a NULL terminated char** array to hold the collected\n sequence names, I put in a simple strarray ADT -- mostly so I could\n have the strarrayContains() test to call from the conditional around\n dumpSequence(). If this is just dumb, I'll replace it with a simple\n char** implementation. Did I overlook some utility funcs in the\n PG source that already does this? If so, I'll gladly use those.\n* Patch is really attached :-P\n\ncomments?\n\ntired.\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman", "msg_date": "Sun, 16 Dec 2001 06:30:21 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "Tom Lane writes:\n\n> I think it'd be a bit surprising if \"pg_dump -t table\" would dump\n> sequences declared independently of the table. An example where you'd\n> likely not be happy with that is if the same sequence is being used to\n> feed multiple tables.\n>\n> I agree that dumping all such sequences will often be the desired\n> behavior, but that doesn't leave me convinced that it's the right\n> thing to do.\n>\n> Any comments out there?\n\nThe more general question is: Should 'pg_dump -t table' dump all objects\nthat \"table\" depends on? Keep in mind that this could mean you have to\ndump the entire database (think foreign keys). 
In my mind, dumping an\narbitrary subset of dependencies is not a proper solution, though.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 16 Dec 2001 23:23:56 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] system catalog relation of a table and a" }, { "msg_contents": "[2001-12-16 06:30] Brent Verner said:\n| [2001-12-16 00:42] Tom Lane said:\n| | Brent Verner <brent@rcfile.org> writes:\n| | > How about strtok_r? I /really/ like the fact that strtok will\n| | > eat either of the tokens ['\"] that might be around the sequence\n| | > name... just call me lazy :-)\n| | \n| | That behavior creates one of the \"corner cases\" I was alluding to.\n| | Shall I leave the difficulty as an exercise for the student?\n| \n| Ok... I ended up working longer than I'd thought :-)\n| \n| * no strtok were used in this patch. ;-)\n| * Handles both serial-sequences and user-sequences referenced in \n| nextval(...) default column defs.\n| * Loop over tables so we can check wildcard table name in the future\n| per your suggestion. I've only noted a TODO: regarding the wildcard\n| matching.\n| * Instead of using a NULL terminated char** array to hold the collected\n| sequence names, I put in a simple strarray ADT -- mostly so I could\n| have the strarrayContains() test to call from the conditional around\n| dumpSequence(). If this is just dumb, I'll replace it with a simple\n| char** implementation. Did I overlook some utility funcs in the\n| PG source that already does this? If so, I'll gladly use those.\n| * Patch is really attached :-P\n\nThis patch needs a fix already... 
I just realized (while playing with\nthis code in a different context) that I forgot to change the malloc \nline in strarrayInit() after typedef'ing strarray as pointer to struct,\ninstead of just the struct.\n\n- strarray _ary = (strarray)malloc(sizeof(strarray));\n+ strarray _ary = (strarray)malloc(sizeof(struct strarray));\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Mon, 17 Dec 2001 09:19:54 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "[2001-12-16 23:23] Peter Eisentraut said:\n| Tom Lane writes:\n| \n| > I think it'd be a bit surprising if \"pg_dump -t table\" would dump\n| > sequences declared independently of the table. An example where you'd\n| > likely not be happy with that is if the same sequence is being used to\n| > feed multiple tables.\n| >\n| > I agree that dumping all such sequences will often be the desired\n| > behavior, but that doesn't leave me convinced that it's the right\n| > thing to do.\n| >\n| > Any comments out there?\n| \n| The more general question is: Should 'pg_dump -t table' dump all objects\n| that \"table\" depends on? Keep in mind that this could mean you have to\n| dump the entire database (think foreign keys). 
In my mind, dumping an\n| arbitrary subset of dependencies is not a proper solution, though.\n\nDo you care to share your ideas on what a proper solution /would/ be?\n\n I agree wholly with you that it is worse to dump the \"arbitrary \nsubset\" of related objects along with a table.\n\n Assuming that 'pg_dump $ARGS db_1 > psql db_2' should never fail, \nwe must either dump only table schema for ARGS=\"-t table\" or dump \n/all/ dependencies for the same ARGS.\n \n Clearly, we are not in a position to dump all dependencies right now.\nCan we make the change that '-t table' is limited to dumping schema?\n\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Mon, 17 Dec 2001 09:48:58 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] system catalog relation of a table and a serial\n sequence" }, { "msg_contents": "Brent Verner writes:\n\n> | The more general question is: Should 'pg_dump -t table' dump all objects\n> | that \"table\" depends on? Keep in mind that this could mean you have to\n> | dump the entire database (think foreign keys). In my mind, dumping an\n> | arbitrary subset of dependencies is not a proper solution, though.\n>\n> Do you care to share your ideas on what a proper solution /would/ be?\n\nEither all dependencies or no dependencies would be a \"proper\" solution,\nin my mind. Which one we should use is not obvious, that's why I stated\nthat question.\n\nWhen you think about it, dumping the dependencies turns out to be less\nuseful than it seems at first. 
Since any object can be a dependency for\nmore than one object it would not work in general to do\n\npg_dump -t table1 > out1\npg_dump -t table2 > out2\n\npsql -f out1\npsql -f out2\n\nunless you mess with CREATE OR REPLACE, which we don't have for all\nobjects and which would probably not be possible to execute in all\nsituations.\n\nSo the only real use for \"dump object X and all dependencies\" would be to\nextract a functional subset of one database into another. But that seems\nto be a lot less common operation.\n\nTherefore I think that we should go with \"no dependencies\" (which is also\na lot easier, no doubt).\n\n(Whether we should consider serial columns to be a dependency or an\nintegral part is a different question.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 17 Dec 2001 23:02:40 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] system catalog relation of a table and a" }, { "msg_contents": "[2001-12-17 09:48] Brent Verner said:\n| [2001-12-16 23:23] Peter Eisentraut said:\n| | Tom Lane writes:\n| | \n| | > I think it'd be a bit surprising if \"pg_dump -t table\" would dump\n| | > sequences declared independently of the table. An example where you'd\n| | > likely not be happy with that is if the same sequence is being used to\n| | > feed multiple tables.\n| | >\n| | > I agree that dumping all such sequences will often be the desired\n| | > behavior, but that doesn't leave me convinced that it's the right\n| | > thing to do.\n| | >\n| | > Any comments out there?\n| | \n| | The more general question is: Should 'pg_dump -t table' dump all objects\n| | that \"table\" depends on? Keep in mind that this could mean you have to\n| | dump the entire database (think foreign keys). 
In my mind, dumping an\n| | arbitrary subset of dependencies is not a proper solution, though.\n| \n| Do you care to share your ideas on what a proper solution /would/ be?\n| \n| I agree wholly with you that it is worse to dump the \"arbitrary \n| subset\" of related objects along with a table.\n| \n| Assuming that 'pg_dump $ARGS db_1 > psql db_2' should never fail, \n| we must either dump only table schema for ARGS=\"-t table\" or dump \n| /all/ dependencies for the same ARGS.\n| \n| Clearly, we are not in a position to dump all dependencies right now.\n| Can we make the change that '-t table' is limited to dumping schema?\n\n We need to have some new command line args to allow the user to \nchoose their desired behavior. I have a patch for pg_dump that adds:\n\n-k, --serial-sequences when dumping schema for a single table, output\n CREATE SEQUENCE statements and setval() function\n calls for SERIAL columns in the table\n-K, --all-sequences when dumping schema for a single table, output\n CREATE SEQUENCE statements and setval() function\n calls for ALL sequences referenced in any DEFAULT\n column definition in the table\n\n By default, no sequence statements are dumped when using the\n'-t table' switch to address the real concern that we can't practically\ndump /all/ dependencies on a single table (this late in beta). In \norder to deal with the case where multiple tables are feeding from the\nsequence, a safer setval() call will be made so the nextval will never\nbe set to a lower value. This is intended to setval such that \nsubsequent inserts into tables feeding off a(n already existing) \nsequence will never fail due to duplicate values. \n\n To determine if a sequence is a serial, I am testing if the seq\nname ends with \"_seq\". When '-K' is used, I'm grabbing all sequences\nreferenced in any nextval(..) DEFAULT definitions on the table.\n \nSample output is below. 
If anyone is interested in trying this patch,\nyou may fetch it from \n http://rcfile.org/posthack/pg_dump.diff.3\n\n There is still a problem where using '-c' might drop a shared \nsequence when dumping a table feeding from it. I also just thought\nthat it might be safer to dump all referenced sequences when using\n'-s -t table'.\n\ncomments? advice?\n\nthanks,\n b\n\n\nbrent$ ./pg_dump -d -K -t t2 brent\n-- [comments removed]\n\nCREATE SEQUENCE \"shared_sequence\" start 1 increment 1 maxvalue 9223372036854775807 minvalue 1 cache 1;\n\nCREATE SEQUENCE \"t2_a_seq\" start 1 increment 1 maxvalue 9223372036854775807 minvalue 1 cache 1;\n\nCREATE TABLE \"t2\" (\n \"a\" integer DEFAULT nextval('\"t2_a_seq\"'::text) NOT NULL,\n \"b\" integer DEFAULT nextval('shared_sequence'::text) NOT NULL\n);\n\nINSERT INTO \"t2\" VALUES (1,25);\nINSERT INTO \"t2\" VALUES (2,26);\nINSERT INTO \"t2\" VALUES (3,27);\nINSERT INTO \"t2\" VALUES (4,28);\nINSERT INTO \"t2\" VALUES (5,29);\n\nCREATE UNIQUE INDEX t2_a_key ON t2 USING btree (a);\n\nSELECT setval ('\"shared_sequence\"', (SELECT CASE \n WHEN 29 > nextval('\"shared_sequence\"')\n THEN 29\n ELSE (currval('\"shared_sequence\"') - 1)\n END),\n true);\n\nSELECT setval ('\"t2_a_seq\"', (SELECT CASE \n WHEN 5 > nextval('\"t2_a_seq\"')\n THEN 5\n ELSE (currval('\"t2_a_seq\"') - 1)\n END),\n true);\n\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Tue, 1 Jan 2002 18:01:52 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] system catalog relation of a table and a serial\n sequence" }, { "msg_contents": "\nBrent, do you have a new, final patch that you want to submit for this?\n\n\n---------------------------------------------------------------------------\n\nBrent Verner wrote:\n> [2001-12-16 06:30] Brent Verner said:\n> | [2001-12-16 00:42] Tom Lane said:\n> | | Brent Verner <brent@rcfile.org> writes:\n> | | > How about strtok_r? I /really/ like the fact that strtok will\n> | | > eat either of the tokens ['\"] that might be around the sequence\n> | | > name... just call me lazy :-)\n> | | \n> | | That behavior creates one of the \"corner cases\" I was alluding to.\n> | | Shall I leave the difficulty as an exercise for the student?\n> | \n> | Ok... I ended up working longer than I'd thought :-)\n> | \n> | * no strtok were used in this patch. ;-)\n> | * Handles both serial-sequences and user-sequences referenced in \n> | nextval(...) default column defs.\n> | * Loop over tables so we can check wildcard table name in the future\n> | per your suggestion. I've only noted a TODO: regarding the wildcard\n> | matching.\n> | * Instead of using a NULL terminated char** array to hold the collected\n> | sequence names, I put in a simple strarray ADT -- mostly so I could\n> | have the strarrayContains() test to call from the conditional around\n> | dumpSequence(). If this is just dumb, I'll replace it with a simple\n> | char** implementation. Did I overlook some utility funcs in the\n> | PG source that already does this? If so, I'll gladly use those.\n> | * Patch is really attached :-P\n> \n> This patch needs a fix already... 
I just realized (while playing with\n> this code in a different context) that I forgot to change the malloc \n> line in strarrayInit() after typedef'ing strarray as pointer to struct,\n> instead of just the struct.\n> \n> - strarray _ary = (strarray)malloc(sizeof(strarray));\n> + strarray _ary = (strarray)malloc(sizeof(struct strarray));\n> \n> cheers.\n> brent\n> \n> -- \n> \"Develop your talent, man, and leave the world something. Records are \n> really gifts from people. To think that an artist would love you enough\n> to share his music with anyone is a beautiful thing.\" -- Duane Allman\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Mar 2002 19:55:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: system catalog relation of a table and a serial sequence" }, { "msg_contents": "\nBrent, is this the final version?\n\n---------------------------------------------------------------------------\n\nBrent Verner wrote:\n> [2001-12-17 09:48] Brent Verner said:\n> | [2001-12-16 23:23] Peter Eisentraut said:\n> | | Tom Lane writes:\n> | | \n> | | > I think it'd be a bit surprising if \"pg_dump -t table\" would dump\n> | | > sequences declared independently of the table. 
An example where you'd\n> | | > likely not be happy with that is if the same sequence is being used to\n> | | > feed multiple tables.\n> | | >\n> | | > I agree that dumping all such sequences will often be the desired\n> | | > behavior, but that doesn't leave me convinced that it's the right\n> | | > thing to do.\n> | | >\n> | | > Any comments out there?\n> | | \n> | | The more general question is: Should 'pg_dump -t table' dump all objects\n> | | that \"table\" depends on? Keep in mind that this could mean you have to\n> | | dump the entire database (think foreign keys). In my mind, dumping an\n> | | arbitrary subset of dependencies is not a proper solution, though.\n> | \n> | Do you care to share your ideas on what a proper solution /would/ be?\n> | \n> | I agree wholly with you that it is worse to dump the \"arbitrary \n> | subset\" of related objects along with a table.\n> | \n> | Assuming that 'pg_dump $ARGS db_1 > psql db_2' should never fail, \n> | we must either dump only table schema for ARGS=\"-t table\" or dump \n> | /all/ dependencies for the same ARGS.\n> | \n> | Clearly, we are not in a position to dump all dependencies right now.\n> | Can we make the change that '-t table' is limited to dumping schema?\n> \n> We need to have some new command line args to allow the user to \n> choose their desired behavior. I have a patch for pg_dump that adds:\n> \n> -k, --serial-sequences when dumping schema for a single table, output\n> CREATE SEQUENCE statements and setval() function\n> calls for SERIAL columns in the table\n> -K, --all-sequences when dumping schema for a single table, output\n> CREATE SEQUENCE statements and setval() function\n> calls for ALL sequences referenced in any DEFAULT\n> column definition in the table\n> \n> By default, no sequence statements are dumped when using the\n> '-t table' switch to address the real concern that we can't practically\n> dump /all/ dependencies on a single table (this late in beta). 
In \n> order to deal with the case where multiple tables are feeding from the\n> sequence, a safer setval() call will be made so the nextval will never\n> be set to a lower value. This is intended to setval such that \n> subsequent inserts into tables feeding off a(n already existing) \n> sequence will never fail due to duplicate values. \n> \n> To determine if a sequence is a serial, I am testing if the seq\n> name ends with \"_seq\". When '-K' is used, I'm grabbing all sequences\n> referenced in any nextval(..) DEFAULT definitions on the table.\n> \n> Sample output is below. If anyone is interested in trying this patch,\n> you may fetch it from \n> http://rcfile.org/posthack/pg_dump.diff.3\n> \n> There is still a problem where using '-c' might drop a shared \n> sequence when dumping a table feeding from it. I also just thought\n> that it might be safer to dump all referenced sequences when using\n> '-s -t table'.\n> \n> comments? advice?\n> \n> thanks,\n> b\n> \n> \n> brent$ ./pg_dump -d -K -t t2 brent\n> -- [comments removed]\n> \n> CREATE SEQUENCE \"shared_sequence\" start 1 increment 1 maxvalue 9223372036854775807 minvalue 1 cache 1;\n> \n> CREATE SEQUENCE \"t2_a_seq\" start 1 increment 1 maxvalue 9223372036854775807 minvalue 1 cache 1;\n> \n> CREATE TABLE \"t2\" (\n> \"a\" integer DEFAULT nextval('\"t2_a_seq\"'::text) NOT NULL,\n> \"b\" integer DEFAULT nextval('shared_sequence'::text) NOT NULL\n> );\n> \n> INSERT INTO \"t2\" VALUES (1,25);\n> INSERT INTO \"t2\" VALUES (2,26);\n> INSERT INTO \"t2\" VALUES (3,27);\n> INSERT INTO \"t2\" VALUES (4,28);\n> INSERT INTO \"t2\" VALUES (5,29);\n> \n> CREATE UNIQUE INDEX t2_a_key ON t2 USING btree (a);\n> \n> SELECT setval ('\"shared_sequence\"', (SELECT CASE \n> WHEN 29 > nextval('\"shared_sequence\"')\n> THEN 29\n> ELSE (currval('\"shared_sequence\"') - 1)\n> END),\n> true);\n> \n> SELECT setval ('\"t2_a_seq\"', (SELECT CASE \n> WHEN 5 > nextval('\"t2_a_seq\"')\n> THEN 5\n> ELSE (currval('\"t2_a_seq\"') - 1)\n> 
END),\n> true);\n> \n> \n> -- \n> \"Develop your talent, man, and leave the world something. Records are \n> really gifts from people. To think that an artist would love you enough\n> to share his music with anyone is a beautiful thing.\" -- Duane Allman\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Mar 2002 20:00:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] system catalog relation of a table and a serial" }, { "msg_contents": "[2002-03-07 20:00] Bruce Momjian said:\n| \n| Brent, is this the final version?\n\nCan you hold 'til the weekend? I'd like to look this over again,\nbut I'm swamped til tomorrow evening. I'll send a final patch\nFriday night unless someone objects to the approach.\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Thu, 7 Mar 2002 20:25:36 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] system catalog relation of a table and a serial\n sequence" }, { "msg_contents": "Brent Verner wrote:\n> [2002-03-07 20:00] Bruce Momjian said:\n> | \n> | Brent, is this the final version?\n> \n> Can you hold 'til the weekend? I'd like to look this over again,\n> but I'm swamped til tomorrow evening. I'll send a final patch\n> Friday night unless someone objects to the approach.\n\nWe are in no rush. Just send it over when you are ready. 
Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Mar 2002 20:27:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] system catalog relation of a table and a serial" } ]
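The guarded setval() form shown in the dump output of this thread exists because a shared sequence may have advanced past the dumped table's maximum by the time the dump is restored. A minimal sketch of the hazard it avoids, using hypothetical table and sequence names:

```sql
-- Hypothetical schema: two tables draw their defaults from one sequence.
CREATE SEQUENCE shared_sequence;
CREATE TABLE t1 (a integer DEFAULT nextval('shared_sequence') PRIMARY KEY);
CREATE TABLE t2 (b integer DEFAULT nextval('shared_sequence') PRIMARY KEY);

-- Suppose a dump of t2 alone recorded 29 as its highest value, but t1
-- has since consumed the sequence up to 100.  A naive restore would
-- rewind the sequence, and later inserts into t1 would collide:
--   SELECT setval('shared_sequence', 29);
-- The guarded form from the dump only ever moves the sequence forward:
SELECT setval('shared_sequence', (SELECT CASE
       WHEN 29 > nextval('shared_sequence')
       THEN 29
       ELSE (currval('shared_sequence') - 1)
       END), true);
```

The WHEN branch catches up a sequence that is still behind the dumped maximum; the ELSE branch hands back the value consumed by the probing nextval() call. Either way the sequence never moves backwards, so other tables fed from it cannot be dealt duplicate default values after the restore.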
[ { "msg_contents": "\nWhat are we doing with the SunOS port? It is failing the 'bit'\nregression test. I can easily do the work if people tell me how they\nwant this handled.\n\n\n---------------------------------------------------------------------------\n\n> > > SunOS Tatsuo Ishii\n> > > Are we giving up on this one? Still relevant?\n> > \n> > What should we do? The only remaining issue is a non-8-bit-clean\n> > memcmp, which seems pretty easy to fix it.\n> \n> Yes, seems we could go a few directions with SunOS:\n> \n> \tLeave bit types broken on that platform, document it\n> \tHard-code in a memcmp() in C for just that platform in varbit.c\n> \tAdd configure test and real memcmp() function for bad platforms\n> \n> Anyone want to vote on these? Personally, SunOS seems like the\n> granddaddy of ports and I would hate to see it leave, especially when we\n> are so close.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 15 Dec 2001 00:52:51 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What are we doing with the SunOS port? It is failing the 'bit'\n> regression test. I can easily do the work if people tell me how they\n> want this handled.\n\nIncluding the necessary testing? How will you manage that?\n\nIMHO, this will not and should not get dealt with unless some SunOS\nusers step up to the plate to do it. 
Tatsuo evidently doesn't care\nenough to do the work himself, and we haven't seen any other respondents\nfor SunOS in a long time. It may be a granddaddy platform, but it's\nlooking a lot like a granddaddy that's passed on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Dec 2001 03:04:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Third call for platform testing " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What are we doing with the SunOS port? It is failing the 'bit'\n> > regression test. I can easily do the work if people tell me how they\n> > want this handled.\n> \n> Including the necessary testing? How will you manage that?\n>\n> IMHO, this will not and should not get dealt with unless some SunOS\n> users step up to the plate to do it. Tatsuo evidently doesn't care\n> enough to do the work himself, and we haven't seen any other respondents\n> for SunOS in a long time. It may be a granddaddy platform, but it's\n> looking a lot like a granddaddy that's passed on.\n\nI assume I would do the patch, Tatsuo would test it, and it would then\nbe applied. I work for Tatsuo now, so he doesn't have to do the work. \nI can do it for him. :-)\n\nSRA has clients running SunOS, so even though there aren't people on the\nmailing list, there are users.\n\nI can do the work and get it tested. I just want to know how people\nwant it fixed. Seeing as no one is saying anything, I will go ahead\nwith my best guess, have Tatsuo test it, and post the patch for review.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 15 Dec 2001 21:28:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Third call for platform testing" } ]
[ { "msg_contents": "Dear all,\n\nHas anyone used mno-cygwin to build native Windows applications?\nCan it be used to compile PostgreSQL natively under Windows?\n\nBest regards,\nJean-Michel\n", "msg_date": "Sat, 15 Dec 2001 15:42:21 +0100", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": true, "msg_subject": "mno-cygwin" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 15 December 2001 18:06\n> To: jm.poure@freesurf.fr\n> Cc: pgsql-hackers@postgresql.org; pgsql-cygwin@postgresql.org\n> Subject: Re: [HACKERS] [CYGWIN] Requirement for pgAdmin browser.\n>\n> > PostgreSQL does not allow table modification. The community \n> have been \n> > waiting\n> > for an ALTER TABLE DROP COLUMN and type PROMOTION / \n> DEMOTION, for a long \n> > time. Same as for Views and triggers : it is impossible to \n> alter them... The \n> > politics of PostgreSQL is to offer advanced features before \n> basic ones.\n> > \n> > Is anyone working seriously on those issues?\n> \n> Agreed. We couldn't decide the best way to do it, so we did \n> nothing. I have hopes for 7.3.\n\nThis is starting to become an issue for me. Personally I can live without\nthese features (though I'd rather not) because I know how to do these things\nmanually, however many pgAdmin users cannot (without help/pointing in the\nright direction). Since I rewrote pgAdmin as pgAdmin II, the program is that\nmuch more stable and easy to use (blowing my own trumpet somewhat!!) that\nprobably the most common support question I get asked these days is 'how do\nI drop a column?'.\n\nI don't want to try to pressure the developers on this as I know what that's\nlike myself, but this issue is imho *really* beginning to hold PostgreSQL\nback from being less of a hackers database and more commonly used like M$\nSQL (which though evil _is_ easy to use :-) ).\n\nRegards, Dave.\n", "msg_date": "Sat, 15 Dec 2001 19:32:56 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Requirement for pgAdmin browser." }, { "msg_contents": "> This is starting to become an issue for me. 
Personally I can live without\n> these features (though I'd rather not) because I know how to do these things\n> manually, however many pgAdmin users cannot (without help/pointing in the\n> right direction). Since I rewrote pgAdmin as pgAdmin II, the program is that\n> much more stable and easy to use (blowing my own trumpet somewhat!!) that\n> probably the most common support question I get asked these days is 'how do\n> I drop a column?'.\n> \n> I don't want to try to pressure the developers on this as I know what that's\n> like myself, but this issue is imho *really* beginning to hold PostgreSQL\n> back from being less of a hackers database and more commonly used like M$\n> SQL (which though evil _is_ easy to use :-) ).\n\nYes. Sometimes we can't decide so we do nothing in hopes a better\nsolution will emerge. In some cases, such a solution does not exist and\nwe have to just do our best to implement these missing features.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 15 Dec 2001 16:35:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Requirement for pgAdmin browser." } ]
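For readers hitting the "how do I drop a column?" question above before ALTER TABLE ... DROP COLUMN exists, the usual manual workaround is to rebuild the table without the column. A sketch with hypothetical names, where table t has columns a, b, c and we want to drop b:

```sql
BEGIN;
-- Copy everything except the unwanted column into a new table.
CREATE TABLE t_new AS SELECT a, c FROM t;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
-- Indexes, constraints, defaults, triggers and grants are not copied;
-- recreate them by hand, e.g.:
-- CREATE UNIQUE INDEX t_a_key ON t (a);
COMMIT;
```

Views and foreign keys referencing t must also be recreated afterwards, which is exactly the dependency bookkeeping the thread is asking the server to take over.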
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Hannu Krosing [mailto:hannu@tm.ee] \n> Sent: 14 December 2001 18:31\n> To: Dave Page\n> Cc: 'Zeugswetter Andreas SB SD'; lockhart@fourpalms.org; \n> pgsql-hackers@postgresql.org; pgsql-cygwin@postgresql.org\n> Subject: Re: [CYGWIN] Platform Testing - Cygwin\n> \n> \n> \n> Dave Page wrote:\n> \n> >Although the original test was in Windows XP on a single \n> processor box, \n> >the final tests that all passed were on Windows 2000 Server \n> running on \n> >a Dual PIII 933MHz box with 1Gb of RAM. The motherboard is \n> an MSI Pro \n> >694D.\n> >\n> Has anyone done any tests comparing PostgreSQL on Win32 and *NIX \n> platforms on\n> same/similar hardware ?\n> \n> I suspect that the initial connect could be slower on Win32 due to \n> reported slowness of\n> fork() there, but are there other issues ?\n\nI believe one of the guys at Greatbridge wrote up some benchmark results\ncomparing Cygwin/*nix. I don't know where they can be found though but I've\ngot a hunch Jason might have them(?).\n\nRegards, Dave.\n", "msg_date": "Sat, 15 Dec 2001 19:47:40 -0000", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Platform Testing - Cygwin" }, { "msg_contents": "On Sat, Dec 15, 2001 at 07:47:40PM -0000, Dave Page wrote:\n> I believe one of the guys at Greatbridge wrote up some benchmark results\n> comparing Cygwin/*nix. 
I don't know where they can be found though but I've\n> got a hunch Jason might have them(?).\n\nAll that I can offer is the following:\n\n http://archives.postgresql.org/pgsql-cygwin/2001-08/msg00029.php\n\nand specifically:\n\n On Wed, Aug 08, 2001 at 05:50:10PM +0000, Terry Carlin wrote:\n > BTW, Up through 40 users, PostgreSQL under CYGWIN using the TPC-C \n > benchmark performed very much the same as Linux PostgreSQL on the\n > exact hardware.\n\nJason\n", "msg_date": "Mon, 17 Dec 2001 10:42:01 -0500", "msg_from": "Jason Tishler <jason@tishler.net>", "msg_from_op": false, "msg_subject": "Re: Platform Testing - Cygwin" } ]
[ { "msg_contents": "I agree with Lee, I also like Oracle's options for a discard file, so\nyou can look at what was rejected, fix your problem and reload if\nnecessary just the rejects.\n\nJim\n\n\n> Peter Eisentraut writes:\n> > I think allowing this feature would open up a world of new\n> > dangerous ideas, such as ignoring check constraints or foreign keys\n> > or magically massaging other tables so that the foreign keys are\n> > satisfied, or ignoring default values, or whatever. The next step\n> > would then be allowing the same optimizations in INSERT. I feel\n> > COPY should load the data and that's it. If you don't like the\n> > data you have then you have to fix it first.\n> \n> I agree that PostgreSQL's checks during COPY are a bonus and I\n> wouldn't dream of not having them. Many database systems provide a\n> fast bulkload by ignoring these constraints and cross references -\n> that's a tricky/horrid situation.\n> \n> However I suppose the question is should such 'invalid data' abort the\n> transaction, it seems a bit drastic...\n> \n> I suppose I'm not really after an IGNORE DUPLICATES option, but rather\n> a CONTINUE ON ERROR kind of thing.\n> \n> Regards, Lee.\n> \n> \n\n\n", "msg_date": "Sun, 16 Dec 2001 09:12:14 -0500", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Jim Buttafuoco writes:\n\n> I agree with Lee, I also like Oracle's options for a discard file, so\n> you can look at what was rejected, fix your problem and reload if\n> necessary just the rejects.\n\nHow do you know which one is the duplicate and which one is the good one?\nMore likely you will have to fix the entire thing. 
Anything else would\nundermine the general data model except in specific use cases.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 16 Dec 2001 23:24:04 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Peter Eisentraut writes:\n > Jim Buttafuoco writes:\n > > I agree with Lee, I also like Oracle's options for a discard file, so\n > > you can look at what was rejected, fix your problem and reload if\n > > necessary just the rejects.\n > How do you know which one is the duplicate and which one is the good one?\n > More likely you will have to fix the entire thing. Anything else would\n > undermine the general data model except in specific use cases.\n\nIn the general case most data is sequential, in which case it would be\nnormal to assume that the first record is the definitive one. Most\ndatabase systems go with this assumption apart from MySQL which gives\nthe user a choice between IGNORE or UPDATE...\n\nLee.\n", "msg_date": "Mon, 17 Dec 2001 12:43:45 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Peter Eisentraut writes:\n > Jim Buttafuoco writes:\n > > I agree with Lee, I also like Oracle's options for a discard file, so\n > > you can look at what was rejected, fix your problem and reload if\n > > necessary just the rejects.\n > How do you know which one is the duplicate and which one is the good one?\n > More likely you will have to fix the entire thing. 
Anything else would\n > undermine the general data model except in specific use cases.\n\nConsider SELECT DISTINCT - which is the 'duplicate' and which one is\nthe good one?\n\nLee.\n", "msg_date": "Mon, 17 Dec 2001 12:48:30 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Lee Kindness writes:\n\n> Consider SELECT DISTINCT - which is the 'duplicate' and which one is\n> the good one?\n\nIt's not the same thing. SELECT DISTINCT only eliminates rows that are\ncompletely the same, not only equal in their unique contraints.\n\nMaybe you're thinking of SELECT DISTINCT ON (). Observe the big warning\nthat the result of that statement are random unless ORDER BY is used. --\nBut that's not the same thing either. We've never claimed that the COPY\ninput has an ordering assumption. In fact you're asking for a bit more\nthan an ordering assumption, you're saying that the earlier data is better\nthan the later data. I think in a random use case that is more likely\n*not* to be the case because the data at the end is newer.\n\n\nBtw., here's another concern about this proposed feature: If I do a\nclient-side COPY, how will you sent the \"ignored\" rows back to the client?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 17 Dec 2001 21:59:01 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Peter Eisentraut writes:\n > Lee Kindness writes:\n > > Consider SELECT DISTINCT - which is the 'duplicate' and which one is\n > > the good one?\n > It's not the same thing. SELECT DISTINCT only eliminates rows that are\n > completely the same, not only equal in their unique contraints.\n > Maybe you're thinking of SELECT DISTINCT ON (). Observe the big warning\n > that the result of that statement are random unless ORDER BY is used. 
--\n > But that's not the same thing either. We've never claimed that the COPY\n > input has an ordering assumption. In fact you're asking for a bit more\n > than an ordering assumption, you're saying that the earlier data is better\n > than the later data. I think in a random use case that is more likely\n > *not* to be the case because the data at the end is newer.\n\nYou're right - I was meaning 'SELECT DISTINCT ON ()'. However I'm only\nusing it as an example of where the database is choosing (be it\nrandomly) the data to discarded. While I've said in this thread that\n'COPY FROM IGNORE DUPLICATES' would ignore later duplicates I'm not\nreally that concerned about what it ignores; first, later, random,\n... I agree if it was of concern then it should be pre-processed.\n\n > Btw., here's another concern about this proposed feature: If I do\n > a client-side COPY, how will you sent the \"ignored\" rows back to\n > the client?\n\nAgain a number of different ideas have been mixed up in the\ndiscussion. Oracle's logging option was only given as an example of\nhow other database systems deal with this option - If it wasn't\nexplicitly given then it's reasonable to discard the extra\ninformation.\n\nWhat really would be nice in the SQL-world is a standardised COPY\nstatement...\n\nBest regards, Lee Kindness.\n", "msg_date": "Tue, 18 Dec 2001 10:09:14 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> You're right - I was meaning 'SELECT DISTINCT ON ()'. However I'm only\n> using it as an example of where the database is choosing (be it\n> randomly) the data to discarded.\n\nNot a good example to support your argument. The entire point of\nDISTINCT ON (imho) is that the rows that are kept or discarded are\n*not* random, but can be selected by the user by specifying additional\nsort columns. 
DISTINCT ON would be pretty useless if it weren't for\nthat flexibility. The corresponding concept in COPY will need to\nprovide flexible means for deciding which row to keep and which to\ndrop, else it'll be pretty useless.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Dec 2001 10:04:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Tom Lane writes:\n > Lee Kindness <lkindness@csl.co.uk> writes:\n > > You're right - I was meaning 'SELECT DISTINCT ON ()'. However I'm only\n > > using it as an example of where the database is choosing (be it\n > > randomly) the data to discarded.\n > Not a good example to support your argument. The entire point of\n > DISTINCT ON (imho) is that the rows that are kept or discarded are\n > *not* random, but can be selected by the user by specifying additional\n > sort columns. DISTINCT ON would be pretty useless if it weren't for\n > that flexibility. The corresponding concept in COPY will need to\n > provide flexible means for deciding which row to keep and which to\n > drop, else it'll be pretty useless.\n\nAt which point it becomes quicker to resort to INSERT...\n\nHere's the crux question - how can I get management to go with\nPostgreSQL when a core operation (import of data into a transient\ndatabase) is at least 6 times slower than the current version?\n\nWith a lot of work investigating the incoming data, the number of\nincoming duplicates has been massively reduced by fixing/tackling at\nsource. 
However rogue values do still crop up (the data originates\nfrom a real-time system with multiple hardware inputs from multiple\nhardware vendors) and when they do (even just 1) the performance dies.\nAdd to this terabytes of legacy data...\n\nWhile you may see the option of ignoring duplicates in COPY as 'pretty\nuseless', it obviously has its place/use otherwise every other\ndatabase system wouldn't have support for it! (not that following the\npack is always a good idea)\n\nIn an ideal world 'COPY FROM' would only be used with data output by\n'COPY TO' and it would be nice and sanitised. However in some fields\nthis often is not a possibility due to performance constraints!\n\nBest regards,\n\n-- \n Lee Kindness, Senior Software Engineer, Concept Systems Limited.\n http://services.csl.co.uk/ http://www.csl.co.uk/\n", "msg_date": "Tue, 18 Dec 2001 15:51:50 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> In an ideal world 'COPY FROM' would only be used with data output by\n> 'COPY TO' and it would be nice and sanitised. However in some fields\n> this often is not a possibility due to performance constraints!\n\nOf course, the more bells and whistles we add to COPY, the slower it\nwill get, which rather defeats the purpose no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Dec 2001 10:59:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Tom Lane writes:\n > Lee Kindness <lkindness@csl.co.uk> writes:\n > > In an ideal world 'COPY FROM' would only be used with data output by\n > > 'COPY TO' and it would be nice and sanitised. 
However in some fields\n > > this often is not a possibility due to performance constraints!\n > Of course, the more bells and whistles we add to COPY, the slower it\n > will get, which rather defeats the purpose no?\n\nIndeed, but as I've mentioned in this thread in the past, the code\npath for COPY FROM already does a check against the unique index (if\nthere is one) but bombs-out rather than handling it...\n\nIt wouldn't add any execution time if there were no duplicates in the\ninput!\n\nregards, Lee.\n", "msg_date": "Tue, 18 Dec 2001 16:04:13 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Lee Kindness wrote:\n> Tom Lane writes:\n> > Lee Kindness <lkindness@csl.co.uk> writes:\n> > > In an ideal world 'COPY FROM' would only be used with data output by\n> > > 'COPY TO' and it would be nice and sanitised. However in some fields\n> > > this often is not a possibility due to performance constraints!\n> > Of course, the more bells and whistles we add to COPY, the slower it\n> > will get, which rather defeats the purpose no?\n> \n> Indeed, but as I've mentioned in this thread in the past, the code\n> path for COPY FROM already does a check against the unique index (if\n> there is one) but bombs-out rather than handling it...\n> \n> It wouldn't add any execution time if there were no duplicates in the\n> input!\n\nI know many purists object to allowing COPY to discard invalid rows in\nCOPY input, but it seems we have lots of requests for this feature, with\nfew workarounds except pre-processing the flat file. Of course, if they\nuse INSERT, they will get errors that they can just ignore. I don't see\nhow allowing errors in COPY is any more illegal, except that COPY is one\ncommand while multiple INSERTs are separate commands.\n\nSeems we need to allow such a capability, if only crudely. 
I don't\nthink we can create a discard file because of the problem with remote\nCOPY.\n\nI think we can allow something like:\n\n\tCOPY FROM '/tmp/x' WITH ERRORS 2\n\nmeaning we will allow at most two errors and will report the error line\nnumbers to the user. I think this syntax clearly indicates that errors\nare being accepted in the input. An alternate syntax would allow an\nunlimited number of errors:\n\n\tCOPY FROM '/tmp/x' WITH ERRORS\n\nThe errors can be non-unique errors, or even CHECK constraint errors.\n\nUnless I hear complaints, I will add it to TODO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 Jan 2002 16:09:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think we can allow something like:\n> \tCOPY FROM '/tmp/x' WITH ERRORS 2\n\nThis is not going to happen, at least not until after there's a\nwholesale revision of error handling. As things stand we do not\nhave a choice: elog(ERROR) must abort the transaction, because we\ncan't guarantee that things are in good enough shape to continue.\nSee the archives for previous discussions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Jan 2002 16:49:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think we can allow something like:\n> > \tCOPY FROM '/tmp/x' WITH ERRORS 2\n> \n> This is not going to happen, at least not until after there's a\n> wholesale revision of error handling. 
As things stand we do not\n> have a choice: elog(ERROR) must abort the transaction, because we\n> can't guarantee that things are in good enough shape to continue.\n> See the archives for previous discussions.\n\nYes, I realize we need subtransactions or something, but we should add\nit to the TODO list if it is a valid request, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 Jan 2002 17:02:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think we can allow something like:\n> COPY FROM '/tmp/x' WITH ERRORS 2\n\n> Yes, I realize we need subtransactions or something, but we should add\n> it to the TODO list if it is a valid request, right?\n\nWell, I don't like that particular API in any case. Why would I think\nthat 2 errors are okay and 3 are not, if I'm loading a\nmany-thousand-line COPY file? Wouldn't it matter *what* the errors\nare, at least as much as how many there are? \"Discard duplicate rows\"\nis one thing, but \"ignore bogus data\" (eg, unrecognizable timestamps)\nis not the same animal at all.\n\nAs someone already remarked, the correct, useful form of such a feature\nis to echo the rejected lines to some sort of output file that I can\nlook at afterwards. How many errors there are is not the issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Jan 2002 17:18:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? 
" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think we can allow something like:\n> > COPY FROM '/tmp/x' WITH ERRORS 2\n> \n> > Yes, I realize we need subtransactions or something, but we should add\n> > it to the TODO list if it is a valid request, right?\n> \n> Well, I don't like that particular API in any case. Why would I think\n> that 2 errors are okay and 3 are not, if I'm loading a\n> many-thousand-line COPY file? Wouldn't it matter *what* the errors\n\nI threw the count idea in as a possible compromise. :-)\n\n> are, at least as much as how many there are? \"Discard duplicate rows\"\n> is one thing, but \"ignore bogus data\" (eg, unrecognizable timestamps)\n> is not the same animal at all.\n\nYes, when we have error codes, it would be nice to specify certain\nerrors to ignore.\n\n> As someone already remarked, the correct, useful form of such a feature\n> is to echo the rejected lines to some sort of output file that I can\n> look at afterwards. How many errors there are is not the issue.\n\nHow about for TODO:\n\n\t* Allow COPY to report error lines and continue; requires\n\tnested transactions; optionally allow error codes to be specified\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 Jan 2002 18:40:20 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" 
}, { "msg_contents": "> How about for TODO:\n> \t* Allow COPY to report error lines and continue; requires\n> \tnested transactions; optionally allow error codes to be specified\n\nOkay, that seems reasonable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Jan 2002 19:05:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Tom Lane wrote:\n> > How about for TODO:\n> > \t* Allow COPY to report error lines and continue; requires\n> > \tnested transactions; optionally allow error codes to be specified\n> \n> Okay, that seems reasonable.\n\nGood. Now that I think of it, nested transactions don't seem required. \nWe already allow pg_dump to dump a database using INSERTs, and we don't\nput those inserts in a single transaction when we load them:\n\t\n\tCREATE TABLE \"test\" (\n\t \"x\" integer\n\t);\n\t\n\tINSERT INTO \"test\" VALUES (1);\n\tINSERT INTO \"test\" VALUES (2);\n\nShould we be wrapping these INSERTs in a transaction? Can we do COPY\nwith each row being its own transaction?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 Jan 2002 22:24:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > > How about for TODO:\n> > > \t* Allow COPY to report error lines and continue; requires\n> > > \tnested transactions; optionally allow error codes to be specified\n> > \n> > Okay, that seems reasonable.\n> \n> Good. Now that I think of it, nested transactions don't seem required. 
\n> We already allow pg_dump to dump a database using INSERTs, and we don't\n> put those inserts in a single transaction when we load them:\n> \t\n> \tCREATE TABLE \"test\" (\n> \t \"x\" integer\n> \t);\n> \t\n> \tINSERT INTO \"test\" VALUES (1);\n> \tINSERT INTO \"test\" VALUES (2);\n> \n> Should we be wrapping these INSERTs in a transaction? Can we do COPY\n> with each row being its own transaction?\n\nOK, added to TODO:\n\n o Allow COPY to report error lines and continue; optionally \n allow error codes to be specified\n\nSeems nested transactions are not required if we load each COPY line in\nits own transaction, like we do with INSERT from pg_dump.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 Jan 2002 13:24:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems nested transactions are not required if we load each COPY line in\n> its own transaction, like we do with INSERT from pg_dump.\n\nI don't think that's an acceptable answer. Consider\n\n\t\tBEGIN;\n\t\tother stuff;\n\t\tCOPY ....;\n\t\tother stuff;\n\t\tROLLBACK;\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Jan 2002 15:35:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems nested transactions are not required if we load each COPY line in\n> > its own transaction, like we do with INSERT from pg_dump.\n> \n> I don't think that's an acceptable answer. Consider\n\nOh, very good point. 
\"Requires nested transactions\" added to TODO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 Jan 2002 15:42:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" } ]
[ { "msg_contents": "I guess I will maintain this for people who want it.\n\nThis allows\n\npostmaster -C /etc/pgsql/mydb.conf\n\nThe \"-C\" option specifies a configuration file.\n\nIn the config file there are two more options:\n\ndatadir = '/u01/postgres'\nhbaconfig = '/etc/pgsql/pg_hba.conf'\n\nThe \"datadir\" option specifies where the postgresql data directory resides. (My\noriginal patch used the setting \"pgdatadir\" in which the \"pg\" seemed\nredundant.)\n\nThe \"hbaconfig\" specifies where postgresql will look for the pg_hba.conf file.\n\nIf the \"-D\" option is specified on the command line, it overides the \"datadir\"\noption in the config file. (This is a different behavior than my original\npatch)\n\nIf No \"datadir\" is specified, it must be specified either on the command line\nor the normal PGDATA environment variable.\n\nIf no \"hbaconfig\" setting is set, the it will look for pg_hba.config in the\ndata directory as always.\n\nOne can start many databases with the same settings as:\n\npostmaster -C /path/default.conf -p 5432 -D /path/name1\npostmaster -C /path/default.conf -p 5433 -D /path/name2\npostmaster -C /path/default.conf -p 5434 -D /path/name3", "msg_date": "Sun, 16 Dec 2001 09:35:58 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Explicit config patch 7.2B4" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n\n> I guess I will maintain this for people who want it.\n\nI'd definitely encourage you to try to get it into 7.3 when that\nopens--the list seems to be generally positive about it, and as long\nas existing setups work as-is (which seems to be the case) I think\nit's a valuable addition.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. 
Jackson, 1863\n", "msg_date": "16 Dec 2001 10:26:55 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Sun, Dec 16, 2001 at 09:35:58AM -0500, mlw wrote:\n> This allows\n> \n> postmaster -C /etc/pgsql/mydb.conf\n\nNice.\n\n> In the config file there are two more options:\n> \n> datadir = '/u01/postgres'\n> hbaconfig = '/etc/pgsql/pg_hba.conf'\n\nWhat about pg_ident.conf?\n\n-- \nmarko\n\n", "msg_date": "Sun, 16 Dec 2001 19:33:01 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "mlw writes:\n\n> This allows\n>\n> postmaster -C /etc/pgsql/mydb.conf\n>\n> The \"-C\" option specifies a configuration file.\n\nI'm still not happy about this, because given a pre-configured or already\nrunning system it is difficult or impossible to find out which\nconfiguration file is being used. This offsets in many ways the improved\nusability you're trying to achieve.\n\nI think an 'include' directive for postgresql.conf would solve this\nproblem more generally (since it allows many more sharing models) and\nwould also give us a good tool when we get to the configuration of\nalternative storage locations.\n\nProbably a command-line option could prove useful for testing purposes,\netc., but I feel that by default the configuration should be written down\nin some easy-to-find file. This is consistent with the move away from\ncommand-line options that we have made with postgresql.conf.\n\nProbably we could make the option -C to mean \"imagine an include directive\nwritten at the very start [or end?] of $PGDATA/postgresql.conf\". 
With the\ndefault empty file this would achieve exactly the same thing as you're\ntrying.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 16 Dec 2001 23:23:39 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> mlw writes:\n> \n> > This allows\n> >\n> > postmaster -C /etc/pgsql/mydb.conf\n> >\n> > The \"-C\" option specifies a configuration file.\n> \n> I'm still not happy about this, because given a pre-configured or already\n> running system it is difficult or impossible to find out which\n> configuration file is being used. This offsets in many ways the improved\n> usability you're trying to achieve.\n\nI do not agree. A command line option which points to a configuration file IS\nthe standard way to start a server under UNIX.\n\n> \n> I think an 'include' directive for postgresql.conf would solve this\n> problem more generally (since it allows many more sharing models) and\n> would also give us a good tool when we get to the configuration of\n> alternative storage locations.\n\nAn include directive would be useful, obviously, but it does not preclude\na more flexible configuration file.\n\n> \n> Probably a command-line option could prove useful for testing purposes,\n> etc., but I feel that by default the configuration should be written down\n> in some easy-to-find file. This is consistent with the move away from\n> command-line options that we have made with postgresql.conf.\n\nI am having the hardest time understanding your antipathy toward an explicit\nconfiguration file. I just don't have any idea why you are fighting it so\nhard. As far as I can see there is no reason not to do it, and every other\nimportant server on UNIX supports this construct.\n\nAgain, I just don't get it. 
Standards are standards, and an explicit\nconfiguration file is a de facto standard.\n\n> \n> Probably we could make the option -C to mean \"imagine an include directive\n> written at the very start [or end?] of $PGDATA/postgresql.conf\". With the\n> default empty file this would achieve exactly the same thing as you're\n> trying.\n\nThe WHOLE idea is to get away from a configuration file mixed with the data. I\nthink the notion of having configuration contained in the same location as data\nis bad. Furthermore, forcing this construct is worse.\n> \n> Comments?\n\nI really don't understand why you don't want this. There isn't a single\nimportant UNIX server which forces its configuration file to be contained\nwithin its data / operational directory. Not one. Why is postgresql \"better\"\nfor being less flexible?\n\nWhat is the harm in including this functionality?\n", "msg_date": "Sun, 16 Dec 2001 17:37:13 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Sunday 16 December 2001 05:23 pm, Peter Eisentraut wrote:\n> I'm still not happy about this, because given a pre-configured or already\n> running system it is difficult or impossible to find out which\n> configuration file is being used. This offsets in many ways the improved\n> usability you're trying to achieve.\n\nWould -C and its argument not be written to $PGDATA/postmaster.opts if \nstarted with pg_ctl? I've not looked at the patch directly, but it would be \nregular behaviour for that to show up, right? If it's in postmaster.opts, \nthen it's trivial to examine and determine where things are coming from.\n\n> I think an 'include' directive for postgresql.conf would solve this\n> problem more generally (since it allows many more sharing models) and\n> would also give us a good tool when we get to the configuration of\n> alternative storage locations.\n\nI like an include directive. 
But not for the reasons you like it.\n\n> Probably we could make the option -C to mean \"imagine an include directive\n> written at the very start [or end?] of $PGDATA/postgresql.conf\". With the\n> default empty file this would achieve exactly the same thing as you're\n> trying.\n\nNo -- he's trying to get the config out of $PGDATA. The config \n_does_not_belong_with_the_data_. While it _could_ be put there if the admin \nchooses, it should not be tied to $PGDATA.\n\nI'll have to echo Mark's query, though: Why are you fighting this, Peter? \nThis functionality mirrors the standard behaviour for daemons. Name a \nstandard daemon package other than postgresql that automatically assumes the \nconfig is with dynamic data, and overwrites an existing config when the \ndynamic data area is reinitialized.\n\nI'm willing to compromise to a degree on this issue. However, Mark already \nhas working code. Sounds like a candidate for a patch -- maybe even a \nfeature patch. And I'll admit to some desire to apply this patch for the \nRPMset -- but I probably won't, as the RPMset shouldn't do anything that \ndrastic. However, it wouldn't surprize me in the least for a distributor \nsuch as Red Hat to apply this patch.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 16 Dec 2001 21:43:17 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> I'll have to echo Mark's query, though: Why are you fighting this, Peter? \n\nPeter's not the only one who's unhappy.\n\n> This functionality mirrors the standard behaviour for daemons.\n\nThat's been Mark's primary argument all along, and what it ignores is\nthat the standard behavior for daemons is designed around the assumption\nthat a system is running only one copy of any given daemon. 
That's a\nfine assumption for most daemons but an unacceptable one for Postgres.\n\nI'm prepared to accept some kind of compromise on this issue, but I'm\nreally tired of hearing the useless \"other daemons do it this way\"\nargument. Could we hear some more-relevant argument?\n\nI rather liked Peter's idea of treating the feature as an implicit\ninclusion. Maybe there's an even-better approach out there, but so far\nthat's the best idea I've heard.\n\n> Name a standard daemon package other than postgresql that\n> automatically assumes the config is with dynamic data, and overwrites\n> an existing config when the dynamic data area is reinitialized.\n\ninitdb will not overwrite an existing config. Try it.\n\n> However, it wouldn't surprize me in the least for a distributor \n> such as Red Hat to apply this patch.\n\nOh, I doubt it...\n\n\t\t\tregards, tom lane\n\t\t\tRed Hat Database project\n", "msg_date": "Sun, 16 Dec 2001 23:13:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4 " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> That's been Mark's primary argument all along, and what it ignores is\n> that the standard behavior for daemons is designed around the assumption\n> that a system is running only one copy of any given daemon. That's a\n> fine assumption for most daemons but an unacceptable one for Postgres.\n\nI'd say that's not completely accurate. I've seen and run sites with\nmore than one {httpd, sendmail} running. The basic idea is:\n\n* If no config file is specified, either look for it in a standard\n place, or complain bitterly. Sendmail looks for /etc/sendmail.cf\n (usually); Apache looks in a place configured at compile time\n (/etc/httpd/conf/httpd.conf on RedHat systems).\n\n* If a config file *is* specified, use it. 
It tells you where to look \n for other stuff (queue directory, webserver root or whatever).\n\nThe above scheme is used by many different daemons and is *perfectly*\nconducive to running multiple copies. What makes you say it isn't?\n\n> I'm prepared to accept some kind of compromise on this issue, but I'm\n> really tired of hearing the useless \"other daemons do it this way\"\n> argument. Could we hear some more-relevant argument?\n\nHow is a patch that (a) perfectly preserves existing behavior, and (b) \nallows for more flexibility in configuration, a bad thing?\n\nI'm not going to lose a lot of sleep if Mark's patch isn't adopted. I\nwill say, however, that as a long-time Un*x sysadmin (Ultrix, Irix,\nSolaris, BSD, Linux) PG's method of configuration struck me as a bit\nweird when I first saw it. It obviously does the job, but I like the\nidea of giving users and packagers a configuration method that's still\nsufficiently flexible and is more familiar to some.\n\nMy $0.02...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "17 Dec 2001 00:06:22 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "\nTom Lane wrote:\n> \n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > I'll have to echo Mark's query, though: Why are you fighting this, Peter?\n> \n> Peter's not the only one who's unhappy.\n\nI still do not understand why.\n\n> \n> > This functionality mirrors the standard behaviour for daemons.\n> \n> That's been Mark's primary argument all along, and what it ignores is\n> that the standard behavior for daemons is designed around the assumption\n> that a system is running only one copy of any given daemon. That's a\n> fine assumption for most daemons but an unacceptable one for Postgres.\n\nIt makes NO such argument at all! It allows an admin to explicitly share a\nconfiguration file (or not). 
It allows for one central place in which multiple\nconfiguration files can be stored and backed up. It allows for the explicit\nsharing of pg_hba.conf files. (It was noted that pg_ident.conf should be added,\nI am looking at it.)\n\nMost of all it does these things WITHOUT symlinks. Most admins I know like\nsymlinks as a method of working around a shortcoming in a product, but would\nrather have the configurability designed into the product.\n\nI wrote this patch to make it easier for me to administer multiple databases on\none machine. To make it easier for UNIX admins to follow the dba's trail. I\nwrote this so a system was a bit more self-documenting without having to follow\nsymlinks.\n\n> \n> I'm prepared to accept some kind of compromise on this issue, but I'm\n> really tired of hearing the useless \"other daemons do it this way\"\n> argument. Could we hear some more-relevant argument?\n\nThis is something else I don't understand. Why would you NOT give an admin the\nsort of configurability they expect?\n\n> \n> I rather liked Peter's idea of treating the feature as an implicit\n> inclusion. Maybe there's an even-better approach out there, but so far\n> that's the best idea I've heard.\n\nIn my eyes, \"implicit\" means unexpected. I love the idea of an include\ndirective, that is a great idea, but it does not address the separation of data\nand config.\n\nPeople new to PostgreSQL will look for the config in /usr/local/pgsql/etc, but\nit won't be there. You can specify the config directory with configure\n(sysconfdir), but it is never used.\n\n> \n> > Name a standard daemon package other than postgresql that\n> > automatically assumes the config is with dynamic data, and overwrites\n> > an existing config when the dynamic data area is reinitialized.\n> \n> initdb will not overwrite an existing config. 
Try it.\n\nThat's good to know.\n\n> \n> > However, it wouldn't surprize me in the least for a distributor\n> > such as Red Hat to apply this patch.\n> \n> Oh, I doubt it...\n\nTom, I really don't understand the resistance. It seems irrational to me, and I\nam really trying to figure it out.\n\nObviously you must have your reasons, but all I've read thus far is the\nargument that \"PostgreSQL is different than other daemons.\" Maybe I'm pushing\ntoo hard for this and making people defensive, but I just don't get it.\n\nThe core reasons for this patch:\n(1) Move configuration out of $PGDATA and into some centralized location. I do\nnot know of a single admin that would object to this.\n(2) Allow admins to share the pg_hba.conf file.\n\nI'll try to get pg_ident in when I get a chance; that's a good idea.\n\nThe one thing about this I do not like is that it is a \"share all\" or \"share\nnone\" solution. An include directive would be helpful here, but I think it is a\ngood start.\n", "msg_date": "Mon, 17 Dec 2001 07:22:00 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> That's been Mark's primary argument all along, and what it ignores is\n> that the standard behavior for daemons is designed around the assumption\n> that a system is running only one copy of any given daemon. That's a\n> fine assumption for most daemons but an unacceptable one for Postgres.\n\nI don't think there would be that many problems with it, but you can\nalways have defaults in one location and allow them to be overridable.\n\n> > However, it wouldn't surprize me in the least for a distributor \n> > such as Red Hat to apply this patch.\n> \n> Oh, I doubt it...\n> \n> \t\t\tregards, tom lane\n> \t\t\tRed Hat Database project\n\nDon't. I'd do it in a heartbeat - I'd love to have /etc/postgresql/\nwith defaults. 
Configuration files should not be located in /var.\n\nregards, Trond Eivind Glomsrød\ndeveloper, Red Hat Linux developer\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "17 Dec 2001 11:26:28 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Mon, 2001-12-17 at 12:22, mlw wrote:\n> > That's been Mark's primary argument all along, and what it ignores is\n> > that the standard behavior for daemons is designed around the assumption\n> > that a system is running only one copy of any given daemon. That's a\n> > fine assumption for most daemons but an unacceptable one for Postgres.\n> \n> It makes NO such argument at all! It allows an admin to explicitly share a\n> configuration file (or not). It allows for one central place in which multiple\n> configuration files can be stored and backed up. It allows for the explicit\n> sharing of pg_hba.conf files. (It was noted that pg_ident.conf should be added,\n> I am looking at it.)\n> \n> Most of all it does these things WITHOUT symlinks. Most admins I know like\n> symlinks as a method of working around a shortcoming in a product, but would\n> rather have the configurability designed into the product.\n> \n> I wrote this patch to make it easier for me to administer multiple databases on\n> one machine. To make it easier for UNIX admins to follow the dba's trail. I\n> wrote this so a system was a bit more self-documenting without having to follow\n> symlinks.\n\nAs Debian packager I would very much like to see a more convenient and\ncontrollable configuration system.\n\nRoss described last week the Debian method of symlinking the real config\nfiles in /etc/postgresql to $PGDATA/; this method is forced on us by\nDebian policy, which requires that all config files be in /etc. 
The\nundesired result is that it is not convenient to run multiple servers on\na single machine using the Debian package.\n\nI am currently considering a method of getting round this by creating a\nsubdirectory of /etc/postgresql for each server instance, but it would\nbe nice to be able to share global options in more convenient ways than\nby multiplying symlinks. \n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"For I say, through the grace given unto me, to every \n man that is among you: Do not think of yourself more \n highly than you ought, but rather think of yourself \n with sober judgement, in accordance with the measure \n of faith God has given you.\" Romans 12:3", "msg_date": "17 Dec 2001 17:08:59 +0000", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Sunday 16 December 2001 11:13 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > This functionality mirrors the standard behaviour for daemons.\n\n> That's been Mark's primary argument all along, and what it ignores is\n> that the standard behavior for daemons is designed around the assumption\n> that a system is running only one copy of any given daemon. That's a\n> fine assumption for most daemons but an unacceptable one for Postgres.\n\nReally? Most daemons allow a default config location, and then either allow \nor require a config file path on the command line. AOLserver _requires_ the \npath on the command line; named allows it, sendmail allows it, etc.\n\nI routinely run multiple nameds on machines behind NAT. Just a simple config \nfile path away. I routinely run multiple instances of other programs as well.\n\nNote that neither I, Mark, Doug, or anyone else is asking for a change in the \ndefault behavior. I personally just want it to be _allowed_ behavior. 
\nNothing more; nothing less.\n\n> I rather liked Peter's idea of treating the feature as an implicit\n> inclusion. Maybe there's an even-better approach out there, but so far\n> that's the best idea I've heard.\n\nI rather dislike the idea that $PGDATA is where the config file lives. I \nquite particularly dislike the 'implicit include' idea.\n\n> > Name a standard daemon package other than postgresql that\n> > automatically assumes the config is with dynamic data, and overwrites\n> > an existing config when the dynamic data area is reinitialized.\n\n> initdb will not overwrite an existing config. Try it.\n\nOk, I'll concede that. But having postgresql.conf in $PGDATA makes it more \ndifficult for an admin to wipe $PGDATA and start over. For that matter, \npg_hba.conf is there, too, and I disagree that users should be forced to put \nit in $PGDATA.\n\n> > However, it wouldn't surprize me in the least for a distributor\n> > such as Red Hat to apply this patch.\n\n> Oh, I doubt it...\n\n> \t\t\tregards, tom lane\n> \t\t\tRed Hat Database project\n\nHaving seen Trond's reply, I have to laugh... as I _know_ he disagrees with \nyou (and knew such before he replied -- this has been a thorn in his side for \na while). Might prove to be an interesting 'discussion' inside RH over it. \nBut, again, an 'optional' command line switch should be a no-brainer. It \njust seems to me to be rather unproductive to not allow people more \nflexibility in a regular way. Symlinks are not the answer, either.\n\nBut RH is not the only distributor of PostgreSQL. And, for the record, the \nDebian packages already do something very similar. 
Wouldn't it be better for \nthe core package to support this, rather than each distributor doing it \ndifferently from each other?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 17 Dec 2001 12:30:14 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "Marko Kreen wrote:\n> \n> On Sun, Dec 16, 2001 at 09:35:58AM -0500, mlw wrote:\n> > This allows\n> >\n> > postmaster -C /etc/pgsql/mydb.conf\n> \n> Nice.\n> \n> > In the config file there are two more options:\n> >\n> > datadir = '/u01/postgres'\n> > hbaconfig = '/etc/pgsql/pg_hba.conf'\n> \n> What about pg_ident.conf?\n\nI will submit a complete patch that includes pg_ident.conf as well.\n\nHere is a question for the great minds here:\n\nIf a user has used the \"-C\" option, as in:\n\npostmaster -C /etc/pgsql/mydb.conf\n\nShould I then, and first, see if there is a \"/etc/pgsql/pg_hba.conf\" or\n\"/etc/pgsql/pg_ident.conf\" and use it as an explicit path?\n\n\nHow about:\n\npostmaster -C /etc/pgsql\n\nShould I then look for:\n\n/etc/pgsql/postgresql.conf\n/etc/pgsql/pg_hba.conf\n/etc/pgsql/pg_ident.conf\n\n?\n\nJust curious what you think.\n", "msg_date": "Mon, 17 Dec 2001 12:56:53 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Monday 17 December 2001 12:56 pm, mlw wrote:\n> Here is a question for the great minds here:\n> If a user has used the \"-C\" option, as in:\n\n> postmaster -C /etc/pgsql/mydb.conf\n\n> Should I then, and first, see if there is a \"/etc/pgsql/pg_hba.conf\" or\n> \"/etc/pgsql/pg_ident.conf\" and use it as an explicit path?\n\n> How about:\n> postmaster -C /etc/pgsql\n\nI agree to both; of course, this should be overrideable in the conf file.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 17 Dec 2001 13:16:41 -0500", "msg_from": "Lamar Owen 
<lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "I was looking around, and for this to be a good patch, I have to add it\nto \"postgres\" as well as \"postmaster.\"\n\n\nIn the 7.1.3 source, there is a command line option \"-C\" (don't show\nversion number) which seems to have been deprecated in 7.1.x (no longer\ndocumented, but still active.)\n\nIs \"-C\" dangerous at this stage? Should I use another command line\noption?\n", "msg_date": "Mon, 17 Dec 2001 13:32:23 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Explicit config patch 7.2B4, not \"-C\" ??" }, { "msg_contents": "Trond Eivind Glomsrød writes:\n\n> Don't. I'd do it in a heartbeat - I'd love to have /etc/postgresql/\n> with defaults. Configuration files should not be located in /var.\n\nThis problem has already been solved with symlinks a long time ago.\nNothing new here.  In fact, for users that are accustomed to the\n\"original\" source distribution it'd be a much easier-to-understand\napproach.  (Whether it's \"better\" in the general sense is being\ndiscussed.)\n\n-- \nPeter Eisentraut   peter_e@gmx.net\n\n", "msg_date": "Mon, 17 Dec 2001 21:58:01 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Monday 17 December 2001 03:58 pm, Peter Eisentraut wrote:\n> Trond Eivind Glomsrød writes:\n> > Don't. I'd do it in a heartbeat - I'd love to have /etc/postgresql/\n> > with defaults. 
Configuration files should not be located in /var.\n\n> This problem has already been solved with symlinks a long time ago.\n> Nothing new here.\n\nUsing symlinks is a workaround, not a solution.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 17 Dec 2001 15:59:34 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "mlw writes:\n\n> Is \"-C\" dangerous at this stage? Should I use another command line\n> option?\n\nUse -C and eliminate/ignore the other use.\n\n-- \nPeter Eisentraut   peter_e@gmx.net\n\n\n", "msg_date": "Mon, 17 Dec 2001 23:03:05 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4, not \"-C\" ??" }, { "msg_contents": "On Mon, Dec 17, 2001 at 12:56:53PM -0500, mlw wrote:\n> Marko Kreen wrote:\n> > On Sun, Dec 16, 2001 at 09:35:58AM -0500, mlw wrote:\n> > > In the config file there are two more options:\n> > >\n> > > datadir = '/u01/postgres'\n> > > hbaconfig = '/etc/pgsql/pg_hba.conf'\n\n> I will submit a complete patch that includes pg_ident.conf as well.\n> \n> Here is a question for the great minds here:\n> \n> If a user has used the \"-C\" option, as in:\n> \n> postmaster -C /etc/pgsql/mydb.conf\n> \n> Should I then, and first, see if there is a \"/etc/pgsql/pg_hba.conf\" or\n> \"/etc/pgsql/pg_ident.conf\" and use it as an explicit path?\n\nIt should search them only if they are not mentioned in .conf, I\nguess.\n\n> How about:\n> \n> postmaster -C /etc/pgsql\n> \n> Should I then look for:\n> \n> /etc/pgsql/postgresql.conf\n> /etc/pgsql/pg_hba.conf\n> /etc/pgsql/pg_ident.conf\n\nOne suggestion to you: try to think how different approaches\nlook when documented.  User must be able to predict program\nbehaviour only by looking at docs.  If 3 lines of C add\n2 messy paragraphs to docs, then it is probably unnecessary\n'feature'. 
Also what if I say '-C /etc/pgsql/db.conf' and there \nis no pg_hba.conf there?  It should give an error, not go secretly\nsearching for it in $PGDATA.\n\n-- \nmarko\n\n", "msg_date": "Tue, 18 Dec 2001 18:56:16 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "mlw writes:\n\n> If a user has used the \"-C\" option, as in:\n>\n> postmaster -C /etc/pgsql/mydb.conf\n>\n> Should I then, and first, see if there is a \"/etc/pgsql/pg_hba.conf\" or\n> \"/etc/pgsql/pg_ident.conf\" and use it as an explicit path?\n\n> How about:\n>\n> postmaster -C /etc/pgsql\n>\n> Should I then look for:\n>\n> /etc/pgsql/postgresql.conf\n> /etc/pgsql/pg_hba.conf\n> /etc/pgsql/pg_ident.conf\n\nI like the latter better because of the symmetry of -C with -D.\n\nBtw., the following issues still need to be addressed:\n\n- Location of SSL certificate files (Are they appropriate for /etc?  What\n  does Apache do?)\n\n- Location of secondary password files.  By default they should probably\n  track pg_hba.conf.  Is that enough?\n\n- Location of charset.conf and the associated recode tables\n\nFor a start, putting all of these under -C is probably sufficient.  More\ncomplicated setups can be achieved using symlinks.  ;-)\n\n-- \nPeter Eisentraut   peter_e@gmx.net\n\n", "msg_date": "Tue, 18 Dec 2001 23:24:54 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Tue, 2001-12-18 at 17:24, Peter Eisentraut wrote:\n> - Location of SSL certificate files (Are they appropriate for /etc?  What\n>   does Apache do?)\n\nUsing Apache and modssl under debian linux, the certs live in\n/etc/apache.  Similarly, crypto keys for Nessus live in /etc/nessusd.\nSo /etc/postgresql would be reasonable.\n\n-- \nAndrew G. 
Hammond      mailto:drew@xyzzy.dhs.org \nhttp://xyzzy.dhs.org/~drew/\n56 2A 54 EF 19 C0 3B 43  72 69 5B E3 69 5B A1 1F \n613-389-5481\n5CD3 62B0 254B DEB1 86E0  8959 093E F70A B457 84B1\n\"To blow recursion you must first blow recur\" -- me", "msg_date": "18 Dec 2001 19:29:17 -0500", "msg_from": "\"Andrew G. Hammond\" <drew@xyzzy.dhs.org>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "> Using Apache and modssl under debian linux, the certs live in\n> /etc/apache.  Similarly, crypto keys for Nessus live in /etc/nessusd.\n> So /etc/postgresql would be reasonable.\n\nJust a note from a FreeBSD perspective (ie. a decent filesystem standard layout): it\nhorrifies me to see post-install packages put stuff in /etc/.  Of course,\nwhoever writes the FreeBSD port will override this default and put it in\n/usr/local/etc/pgsql.\n\nChris\n\n", "msg_date": "Wed, 19 Dec 2001 10:45:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Wed, 19 Dec 2001, Christopher Kings-Lynne wrote:\n\n> > Using Apache and modssl under debian linux, the certs live in\n> > /etc/apache.  Similarly, crypto keys for Nessus live in /etc/nessusd.\n> > So /etc/postgresql would be reasonable.\n>\n> Just a note from a FreeBSD perspective (ie. a decent filesystem standard layout): it\n> horrifies me to see post-install packages put stuff in /etc/.  Of course,\n> whoever writes the FreeBSD port will override this default and put it in\n> /usr/local/etc/pgsql.\n\nWhich is why I avoid rpm, deb, package, etc.  The support nightmare it\ncauses when vendors start upchucking various bits and pieces of the\nprogram all over the drive. 
Then the poor user tries explaining what\nhe did or tried to do in /var, /etc, /opt and a bunch of other places\n(up to and not necessarily excluding the trunk of the car) and figuring\nout something as simple as where a certain file is so the permissions\ncan be verified or where the include files and libraries happen to be\nhiding.\n\nNo, this is not an invite for the discussion of whether or not vendors\nshould or should not scatter files all over the filesystem. It's only\na statement of what it causes on the support end - no, not all people\ncontact the vendor of the os when they have a problem with a program.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 18 Dec 2001 22:27:02 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On Tue, 2001-12-18 at 22:27, Vince Vielhaber wrote:\n> On Wed, 19 Dec 2001, Christopher Kings-Lynne wrote:\n> \n> > > Using Apache and modssl under debian linux, the certs live in\n> > > /etc/apache. Similarly, crypto keys for Nessus live in /etc/nessusd.\n> > > So /etc/postgresql would be reasonable.\n> >\n> > Just a note from a FreeBSD (ie. a decent filesystem standard layout) it\n> > horrifies me to see post-install packages put stuff in /etc/. Of course,\n> > whomever writes the FreeBSD port will override this default and put it in\n> > /usr/local/etc/pgsql.\n> \n> Which is why I avoid rpm, deb, package, etc. The support nightmare it\n> causes when vendors start upchucking various bits and pieces of the\n> program all over the drive. 
Then the poor user tries explaining what\n> he did or tried to do in /var, /etc, /opt and a bunch of other places\n> (up to and not necessarily excluding the trunk of the car) and figuring\n> out something as simple as where a certain file is so the permissions\n> can be verified or where the include files and libraries happen to be\n> hiding.\n> \n> No, this is not an invite for the discussion of whether or not vendors\n> should or should not scatter files all over the filesystem. It's only\n> a statement of what it causes on the support end - no, not all people\n> contact the vendor of the os when they have a problem with a program.\n\nFunny, I have exactly the same opinion about stuff scattered all over\nthe filesystem, but that's one of the reasons I like debian. They don't\nscatter stuff, they organize it. And, at least to me things make sense\nthat way. Config files are under /etc. All of them. For every\npackage.\n\nSince it's utterly impossible to get a whole bunch of different people\nto agree about where stuff belongs, or even to have a rational\ndiscussion on the topic, having the distros impose this sort of thing by\nfiat seems to be the only way to get any kind of consistency at all.\n\nHonestly, I really don't give a damn what filesystem layout I end up\nusing, as long as it's reasonably simple and logical.\n\nHowever I will say that personally, I like having a path that's less\nthan a gigabyte. Debian delivers that for me. But hey, to each their\nown.\n\nObFlame: BSD sux. That little devil looks kinda fruity to me, and I'll\nbet Tux could whup his ass.\n\n-- \nAndrew G. Hammond mailto:drew@xyzzy.dhs.org \nhttp://xyzzy.dhs.org/~drew/\n56 2A 54 EF 19 C0 3B 43 72 69 5B E3 69 5B A1 1F \n613-389-5481\n5CD3 62B0 254B DEB1 86E0 8959 093E F70A B457 84B1\n\"To blow recursion you must first blow recur\" -- me", "msg_date": "19 Dec 2001 15:41:38 -0500", "msg_from": "\"Andrew G. 
Hammond\" <drew@xyzzy.dhs.org>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On 19 Dec 2001, Andrew G. Hammond wrote:\n\n> ObFlame: BSD sux. That little devil looks kinda fruity to me, and I'll\n> bet Tux could whup his ass.\n\nI was going to respond until I saw this. I don't waste my time with\nchildren.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 19 Dec 2001 16:17:14 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n\n> On 19 Dec 2001, Andrew G. Hammond wrote:\n> \n> > ObFlame: BSD sux. That little devil looks kinda fruity to me, and I'll\n> > bet Tux could whup his ass.\n> \n> I was going to respond until I saw this. I don't waste my time with\n> children.\n> \n> Vince.\n\nWhat, you didn't see the ObFlame: disclosure. It seemed like a fairly\nobvious joke to me.\n\nI agree with Andrew. One of the things that I like about Debian is\nthat all of the config files for every package that is installed via\nthe Debian packaging system is somewhere in /etc. 
Yes, this makes it\ndifficult (sometimes) to translate the documentation for packages that\nexpect to have that information somewhere else, but it is very useful\nfor those cases (like PostgreSQL for example) that put the\nconfiguration files somewhere totally strange (the application's data\ndirectory is not where Unix admins start looking for config files).\n\nThe long and the short of it is that pretty much every Unix has a\npreference for where the configuration files, data files, log files,\netc, should go, and the various users probably aren't ever going to\nsimply agree that one way is better than the rest. After all, if\nthere is one common trait among Unix admins it is the belief in a \"one\ntrue way.\" Unfortunately, that \"one true way\" seems to vary radically\ndepending on which Unix admin you talk to. I might agree with Debian\nthat configuration files belong in /etc, you might feel that they\nbelong somewhere else. Currently, however, it doesn't matter what we\nwant. The configuration files end up in the PostgreSQL data directory\nwhether we like it or not. We can use symlinks to pretend that these\nfiles are somewhere else, but we can't really move them.\n\nBeing able to configure the placement of these files would be a huge\nwin. 
We can always default to leaving the files right where they\ncurrently are, but making this configurable would help those people\nwho package PostgreSQL for one organization or another tremendously.\nAnd since most of PostgreSQL's users end up using the version supplied\nby their vendor (whether it's a FreeBSD port or a RedHat RPM or\nwhatever) it clearly is a win to give these packagers the flexibility\nthey need.\n\nNot that any of you should listen to me, as I am just some random\nPostgreSQL user who happens to be subscribed to HACKERS because I\nwanted to see what the real scoop on 7.2 was :).\n\nJason\n", "msg_date": "19 Dec 2001 15:22:41 -0700", "msg_from": "Jason Earl <jason.earl@simplot.com>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" }, { "msg_contents": "On 19 Dec 2001, Jason Earl wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n>\n> > On 19 Dec 2001, Andrew G. Hammond wrote:\n> >\n> > > ObFlame: BSD sux. That little devil looks kinda fruity to me, and I'll\n> > > bet Tux could whup his ass.\n> >\n> > I was going to respond until I saw this. I don't waste my time with\n> > children.\n> >\n> > Vince.\n>\n> What, you didn't see the ObFlame: disclosure. It seemed like a fairly\n> obvious joke to me.\n\nI fail to see how it was appropriate to the subject or this list.\nAnd \"ObFlame\" means 'obligatory flame' meaning he was obligated to\nadd the flame. 
I fail to see why he'd be obliged to flame for no\nreason.\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 19 Dec 2001 22:02:52 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Explicit config patch 7.2B4" } ]
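The "-C" lookup rules hashed out in the thread above (mlw's two proposals, Marko Kreen's no-silent-fallback point, and Peter Eisentraut's preference for directory symmetry with -D) can be sketched as follows. This is a hypothetical Python sketch of the *proposed* behaviour, not code from the actual patch; the function name and the `identconfig` key are illustrative assumptions (only `datadir` and `hbaconfig` are named in the thread).

```python
import os.path


def resolve_config_paths(c_opt, conf_settings=None):
    """Sketch of the proposed "-C" lookup rules (hypothetical, not the patch).

    c_opt:         the -C argument; either a postgresql.conf file
                   (postmaster -C /etc/pgsql/mydb.conf) or a directory
                   containing one (postmaster -C /etc/pgsql).
    conf_settings: values parsed from postgresql.conf, e.g. 'hbaconfig';
                   an explicit setting always wins over the implicit sibling.
    """
    conf_settings = conf_settings or {}
    if c_opt.endswith(".conf"):
        confdir = os.path.dirname(c_opt)
        paths = {"postgresql.conf": c_opt}
    else:
        confdir = c_opt
        paths = {"postgresql.conf": os.path.join(confdir, "postgresql.conf")}

    # Siblings are searched next to the config file only when not set
    # explicitly; per Marko's point, a file that turns out to be missing
    # should then be an error, never a silent fallback to $PGDATA
    # (existence checking is not modelled here).
    for fname, key in (("pg_hba.conf", "hbaconfig"),
                       ("pg_ident.conf", "identconfig")):
        paths[fname] = conf_settings.get(key, os.path.join(confdir, fname))
    return paths
```

For example, `resolve_config_paths("/etc/pgsql")` yields all three files under /etc/pgsql, while an explicit `hbaconfig` setting in the named config file overrides the sibling lookup.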
[ { "msg_contents": "Hi all,\n\nI've examined a cygwin hangup issue for a pretty long time. \nIt seems there are plural causes though it's hard for me\nto identify them all.\n\nAnyway I found some unexpected SIGALRM cases.\nIt may be caused by a cygwin's bug but isn't it safer to\nreturn immediately from HandleDeadLock in any platform\nunless the backend is waiting for a lock ?\n\nThe following is a back trace I saw very luckily.\n\n#0 0x77f827e8 in _libkernel32_a_iname ()\n#1 0x77e56a15 in _libkernel32_a_iname ()\n#2 0x77e56a3d in _libkernel32_a_iname ()\n#3 0x00587535 in semop ()\n#4 0x005086b0 in IpcSemaphoreLock (semId=2688, sem=1, interruptOK=0\n'\\000')\n at ipc.c:422\n#5 0x005114f7 in LWLockAcquire (lockid=LockMgrLock, mode=LW_EXCLUSIVE)\n at lwlock.c:272\n#6 0x005101ba in HandleDeadLock (postgres_signal_arg=14) at proc.c:862\n#7 0x6100fb63 in _libkernel32_a_iname ()\n#8 0x0058650d in do_semop ()\n#9 0x00587535 in semop ()\n#10 0x005086b0 in IpcSemaphoreLock (semId=2688, sem=1, interruptOK=0\n'\\000')\n at ipc.c:422\n#11 0x005114f7 in LWLockAcquire (lockid=LockMgrLock, mode=LW_EXCLUSIVE)\n at lwlock.c:272\n#12 0x0050e60b in LockRelease (lockmethod=1, locktag=0x22f338,\nxid=17826,\n lockmode=1) at lock.c:1018\n#13 0x0050c8f5 in UnlockRelation (relation=0xa06c168, lockmode=1) at\nlmgr.c:217\n#14 0x0041d29e in index_endscan (scan=0xa08a8f8) at indexam.c:288\n#15 0x004ae558 in ExecCloseR (node=0xa089a88) at execAmi.c:232\n#16 0x004b8de2 in ExecEndIndexScan (node=0xa089a88) at\nnodeIndexscan.c:474\n#17 0x004b1e41 in ExecEndNode (node=0xa089a88, parent=0x0)\n at execProcnode.c:495\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 17 Dec 2001 06:56:53 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "unexpected SIGALRM" }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> Anyway I found some unexpected SIGALRM cases.\n> It may be caused by a cygwin's bug but isn't it safer to\n> return immediately from 
HandleDeadLock in any platform\n> unless the backend is waiting for a lock ?\n\nIf we can't rely on the signal handling facilities to interrupt only\nwhen they're supposed to, I think HandleDeadlock is the least of our\nworries :-(.  I'm not excited about inserting an ad-hoc test to work\naround (only) one manifestation of a system-level bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Dec 2001 19:11:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] unexpected SIGALRM " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > Anyway I found some unexpected SIGALRM cases.\n> > It may be caused by a cygwin's bug but isn't it safer to\n> > return immediately from HandleDeadLock in any platform\n> > unless the backend is waiting for a lock ?\n> \n> If we can't rely on the signal handling facilities to interrupt only\n> when they're supposed to, I think HandleDeadlock is the least of our\n> worries :-(.\n\nI'm not sure if it's a cygwin issue.\nIsn't it preferable for a dbms to be insensitive to\nother (e.g. OS's) bugs anyway ?\nOr how about blocking SIGALRM signals except when\nthe backend is waiting for a lock ? 
It seems a better\nfix because it would also fix another issue.\n\n> I'm not excited about inserting an ad-hoc test to work\n> around (only) one manifestation of a system-level bug.\n\nOK so cygwin isn't considered as a supported platform ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 17 Dec 2001 10:34:02 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] unexpected SIGALRM" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> I'm not excited about inserting an ad-hoc test to work\n>> around (only) one manifestation of a system-level bug.\n\n> OK so cygwin isn't considered as a supported platform ?\n\nI don't consider it our responsibility to work around cygwin bugs,\nas opposed to reporting said bugs and expecting the cygwin folk to\nfix 'em.\n\nIf the cost of such a workaround is minimal, then I'd be willing to\nconsider it; but in this case, you're talking about adding another pair\nof kernel calls to every lock blockage.  That seems nontrivial.\nBut the more important argument is this: if cygwin contains a bug that\nallows it to fire interrupts when it should not, how much improvement\ndo we really get from plugging this one hole?  Surely there are other\nplaces that will have similar problems. 
For that matter, how can you\nbe sure that adding a sigsetmask call will prevent it from firing the\ninterrupt --- how is that any more secure than setitimer?\n\nI'd say the correct course of action is to report the problem to the\ncygwin people first, and ask them whether a user-level workaround is\npossible/useful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Dec 2001 20:47:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] unexpected SIGALRM " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> I'm not excited about inserting an ad-hoc test to work\n> >> around (only) one manifestation of a system-level bug.\n> \n> > OK so cygwin isn't considered as a supported platform ?\n> \n> I don't consider it our responsibility to work around cygwin bugs,\n> as opposed to reporting said bugs and expecting the cygwin folk to\n> fix 'em.\n\nOK I would leave as it is. I've already wasted a lot of time.\nIt has been 3 months since a pgbench hangup problem was\nreported by Yutaka Tanida.\n \nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 17 Dec 2001 12:19:20 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] unexpected SIGALRM" } ]
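Hiroshi's suggested alternative in the thread above — keep SIGALRM blocked except while the backend is actually waiting for a lock, so a stray alarm cannot run HandleDeadLock at an arbitrary point such as the LWLockAcquire in his backtrace — can be illustrated with POSIX signal masks. This is a hypothetical Python sketch of the masking idea only; a real backend would make the equivalent sigprocmask/sigsetmask calls in C, which is exactly the extra pair of kernel calls per lock wait that Tom objects to.

```python
import os
import signal

delivered = []


def deadlock_check(signum, frame):
    # Stand-in for HandleDeadLock: only safe/meaningful during a lock wait.
    delivered.append(signum)


signal.signal(signal.SIGALRM, deadlock_check)

# Outside the lock wait: SIGALRM stays blocked, so a stray alarm (e.g. from
# a platform bug) is merely marked pending instead of interrupting
# arbitrary code.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGALRM})
os.kill(os.getpid(), signal.SIGALRM)          # a stray alarm arrives here...
assert signal.SIGALRM in signal.sigpending()  # ...and is held pending
assert delivered == []

# Entering the lock wait: unblock, and the pending deadlock timer fires
# where the handler is actually safe to run.
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGALRM})
assert delivered == [signal.SIGALRM]
```

The cost Tom points out is visible here: every lock wait would bracket itself with a block/unblock pair, i.e. two extra system calls.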
[ { "msg_contents": "Thomas,\n\ndo you know if there is a perl equivalent for datetime validity checks ?\nI use Date::Parse to validate datetime stuff I insert into postgresql,\nbut it seems postgres is smarter. For example,\n'Mon, 23 Nov 1998 08:37:16 -5:00' is fine for postgresql but\nproduces an error in Date::Parse :-)\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Mon, 17 Dec 2001 01:10:23 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "pgsql's datetime perl equivalent ?" } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> 'Mon, 23 Nov 1998 08:37:16 -5:00' if fine for postgresql but\n> produces error in Date::Parse :-)\n\nDate::Parse is not parsing it because of your timezone: \"-5:00\"\nis not legal; according to ISO 8601 it should be one of:\n+HH:MM, +HH, -HH:MM, -HH\n(see the TIMEZONES section in perldoc Date::Manip for a complete list)\n\nJust add a zero before the five and Date::Manip and postgresql \nwill be more than happy to play with the date.\n\nGreg Sabino Mullane\ngreg@turnstep.com\nPGP Key: 0x14964AC8 200112162039\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQFyqV9gvJuQZxSWSsgRAhtpAKCxCr1+jnNPOHXd4c0kGQTSDWJVWACg/t16\nSdw0ayy+FfS0Bueywx0SvHc=\n=qETw\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Tue, 17 Dec 2030 01:52:07 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Re: pgsql's datetime perl equivalent ?" } ]
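Greg's diagnosis — the unpadded hour in "-5:00" is what strict parsers reject, while PostgreSQL's datetime input is more forgiving — is easy to reproduce outside Perl. As an illustration, here is the same contrast using Python's strptime `%z` directive rather than Date::Parse (a stand-in chosen purely for demonstration):

```python
from datetime import datetime, timedelta

FMT = "%a, %d %b %Y %H:%M:%S %z"

# Zero-padded hour: the offset parses cleanly as UTC-5.
dt = datetime.strptime("Mon, 23 Nov 1998 08:37:16 -05:00", FMT)
assert dt.utcoffset() == timedelta(hours=-5)

# The unpadded "-5:00" that PostgreSQL tolerates is rejected by strict
# offset parsing, just as Date::Parse rejects it in the thread above.
try:
    datetime.strptime("Mon, 23 Nov 1998 08:37:16 -5:00", FMT)
    rejected = False
except ValueError:
    rejected = True
assert rejected
```

So "add a zero before the five" is exactly the fix: strict parsers want the ISO-8601-style two-digit hour (+HH:MM / -HH:MM), and only the server's more lenient input routines accept the short form.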
[ { "msg_contents": "I have made binary distributions of:\n\n- unixODBC \n- PostgreSQL Driver\n- MySQL Driver\n\navailable to the public at http://www.codebydesign.com/DataArchitect.\n\nThese binary distributions are based upon Open Source code which was \nconfigured and built on the Mac OSX.\n\nAll distributions use the standard Mac installer to ensure that they are as \neasy to install as possible. Furthermore, the drivers are auto-registered \nduring their install using the unixODBC odbcinst command-line tool.\n\nThe graphical parts of unixODBC, including the driver config dialogs, are \nsupported by a Qt runtime which also installs using the standard Mac \ninstaller. The GUI stuff needs to be tweaked a bit but it appears to be quite \nuseful.\n\nThis work was done as part of the development of Data Architect. It is hoped \nthat it is useful to others.\n\nPlease support the development of open clients for data access by purchasing \na copy of Data Architect.\n\nPeter\n\nBTW: I used qmake to build these in order to bypass the current GNU auto-tool \nconfusion on OSX. The qmake project files and my notes are in the unixODBC \ncvs at Source Forge.\n", "msg_date": "Sun, 16 Dec 2001 18:03:21 -0800", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "ODBC on OSX" }, { "msg_contents": "> \n> I have made binary distributions of:\n> \n> - unixODBC \n> - PostgreSQL Driver\n> - MySQL Driver\n> \n> available to the public at http://www.codebydesign.com/DataArchitect.\n> \n> These binary distributions are based upon Open Source code which was \n> configured and built on the Mac OSX.\n> \n> All distributions use the standard Mac installer to ensure that they are as 
Furthermore; the drivers are auto-registered \n> during their install using the unixODBC odbcinst command-line tool.\n> \n> The graphical parts of unixODBC, including the driver config dialogs, are \n> supported by a Qt runtime which also installs using the standard Mac \n> installer. The GUI stuff needs to be tweeked a bit but it appears to be quite \n> usefull.\n> \n> This work was done as part of the development of Data Architect. It is hoped \n> that it is usefull to others.\n> \n> Please support the development of open clients for data access by purchasing \n> a copy of Data Architect.\n> \n> Peter\n> \n> BTW: I used qmake to build these in order to bypass the current GNU auto-tool \n> confusion on OSX. The qmake project files and my notes are in the unixODBC \n> cvs at Source Forge.\n> \nCool\nChristoph\n", "msg_date": "Tue, 18 Dec 2001 9:05:58 MET", "msg_from": "Christoph Haller <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "Re: ODBC on OSX" } ]